00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-main" build number 3925 00:00:00.001 originally caused by: 00:00:00.002 Started by upstream project "nightly-trigger" build number 3520 00:00:00.002 originally caused by: 00:00:00.002 Started by timer 00:00:00.072 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.073 The recommended git tool is: git 00:00:00.073 using credential 00000000-0000-0000-0000-000000000002 00:00:00.075 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.105 Fetching changes from the remote Git repository 00:00:00.107 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.173 Using shallow fetch with depth 1 00:00:00.173 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.173 > git --version # timeout=10 00:00:00.236 > git --version # 'git version 2.39.2' 00:00:00.236 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.290 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.290 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.270 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.283 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.295 Checking out Revision bc56972291bf21b4d2a602b495a165146a8d67a1 (FETCH_HEAD) 00:00:05.295 > git config core.sparsecheckout # timeout=10 00:00:05.307 > git read-tree -mu HEAD # timeout=10 00:00:05.323 > git checkout -f bc56972291bf21b4d2a602b495a165146a8d67a1 # timeout=5 00:00:05.340 Commit message: "jenkins/jjb-config: Remove extendedChoice from ipxe-test-images" 00:00:05.340 > git rev-list --no-walk bc56972291bf21b4d2a602b495a165146a8d67a1 # timeout=10 00:00:05.449 [Pipeline] Start of Pipeline 00:00:05.459 [Pipeline] library 00:00:05.460 Loading library shm_lib@master 00:00:05.460 Library shm_lib@master is cached. Copying from home. 00:00:05.473 [Pipeline] node 00:00:05.481 Running on CYP12 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:05.482 [Pipeline] { 00:00:05.491 [Pipeline] catchError 00:00:05.492 [Pipeline] { 00:00:05.504 [Pipeline] wrap 00:00:05.513 [Pipeline] { 00:00:05.519 [Pipeline] stage 00:00:05.522 [Pipeline] { (Prologue) 00:00:05.735 [Pipeline] sh 00:00:06.021 + logger -p user.info -t JENKINS-CI 00:00:06.038 [Pipeline] echo 00:00:06.039 Node: CYP12 00:00:06.044 [Pipeline] sh 00:00:06.343 [Pipeline] setCustomBuildProperty 00:00:06.356 [Pipeline] echo 00:00:06.358 Cleanup processes 00:00:06.363 [Pipeline] sh 00:00:06.650 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.650 1489777 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.664 [Pipeline] sh 00:00:06.956 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.956 ++ grep -v 'sudo pgrep' 00:00:06.956 ++ awk '{print $1}' 00:00:06.956 + sudo kill -9 00:00:06.956 + true 00:00:06.972 [Pipeline] cleanWs 00:00:06.982 [WS-CLEANUP] Deleting project workspace... 00:00:06.982 [WS-CLEANUP] Deferred wipeout is used... 
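The cleanup step above lists any stale processes still holding the workspace and force-kills them, tolerating an empty match. A minimal sketch of the same pgrep/kill idiom, assuming the workspace path used by this job; the trailing '|| true' mirrors the '+ true' in the trace and keeps a 'set -e' script alive when no process matched:

    # list PIDs of anything started from the workspace, excluding the pgrep itself
    pids=$(sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk \
             | grep -v 'sudo pgrep' | awk '{print $1}')
    # force-kill them; with an empty $pids, kill fails and '|| true' swallows the error
    sudo kill -9 $pids || true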
00:00:06.990 [WS-CLEANUP] done 00:00:06.993 [Pipeline] setCustomBuildProperty 00:00:07.008 [Pipeline] sh 00:00:07.292 + sudo git config --global --replace-all safe.directory '*' 00:00:07.380 [Pipeline] httpRequest 00:00:07.785 [Pipeline] echo 00:00:07.786 Sorcerer 10.211.164.101 is alive 00:00:07.793 [Pipeline] retry 00:00:07.796 [Pipeline] { 00:00:07.813 [Pipeline] httpRequest 00:00:07.818 HttpMethod: GET 00:00:07.818 URL: http://10.211.164.101/packages/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:07.819 Sending request to url: http://10.211.164.101/packages/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:07.823 Response Code: HTTP/1.1 200 OK 00:00:07.823 Success: Status code 200 is in the accepted range: 200,404 00:00:07.823 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:08.256 [Pipeline] } 00:00:08.272 [Pipeline] // retry 00:00:08.278 [Pipeline] sh 00:00:08.561 + tar --no-same-owner -xf jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:08.575 [Pipeline] httpRequest 00:00:08.998 [Pipeline] echo 00:00:09.000 Sorcerer 10.211.164.101 is alive 00:00:09.008 [Pipeline] retry 00:00:09.010 [Pipeline] { 00:00:09.022 [Pipeline] httpRequest 00:00:09.026 HttpMethod: GET 00:00:09.026 URL: http://10.211.164.101/packages/spdk_a29d7fdf9ba2d019916288e092b6be04c4ec2aa3.tar.gz 00:00:09.028 Sending request to url: http://10.211.164.101/packages/spdk_a29d7fdf9ba2d019916288e092b6be04c4ec2aa3.tar.gz 00:00:09.032 Response Code: HTTP/1.1 200 OK 00:00:09.032 Success: Status code 200 is in the accepted range: 200,404 00:00:09.032 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_a29d7fdf9ba2d019916288e092b6be04c4ec2aa3.tar.gz 00:00:25.391 [Pipeline] } 00:00:25.408 [Pipeline] // retry 00:00:25.415 [Pipeline] sh 00:00:25.703 + tar --no-same-owner -xf spdk_a29d7fdf9ba2d019916288e092b6be04c4ec2aa3.tar.gz 00:00:29.013 [Pipeline] sh 00:00:29.302 + git -C spdk log --oneline -n5 00:00:29.302 a29d7fdf9 fsdev/aio: aio_io_poll: correct return value 00:00:29.302 a711f4452 test/vhost: Attempt to verify vhost status upon termination 00:00:29.302 f37d64d6d event: don't print core unlock warnings if cores were never locked 00:00:29.302 37b3b045c event: help user if env initialization fails as non-root 00:00:29.302 2e29543d1 event: move function declarations to inside of extern "C" guard 00:00:29.322 [Pipeline] withCredentials 00:00:29.334 > git --version # timeout=10 00:00:29.349 > git --version # 'git version 2.39.2' 00:00:29.373 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:00:29.375 [Pipeline] { 00:00:29.385 [Pipeline] retry 00:00:29.387 [Pipeline] { 00:00:29.403 [Pipeline] sh 00:00:29.916 + git ls-remote http://dpdk.org/git/dpdk main 00:00:30.190 [Pipeline] } 00:00:30.212 [Pipeline] // retry 00:00:30.218 [Pipeline] } 00:00:30.236 [Pipeline] // withCredentials 00:00:30.247 [Pipeline] httpRequest 00:00:30.660 [Pipeline] echo 00:00:30.663 Sorcerer 10.211.164.101 is alive 00:00:30.674 [Pipeline] retry 00:00:30.676 [Pipeline] { 00:00:30.691 [Pipeline] httpRequest 00:00:30.696 HttpMethod: GET 00:00:30.697 URL: http://10.211.164.101/packages/dpdk_e7bc451c996b5882c5d8267725f3d88118009c75.tar.gz 00:00:30.698 Sending request to url: http://10.211.164.101/packages/dpdk_e7bc451c996b5882c5d8267725f3d88118009c75.tar.gz 00:00:30.722 Response Code: HTTP/1.1 200 OK 00:00:30.722 Success: Status code 200 is in the accepted range: 200,404 00:00:30.723 Saving response body to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_e7bc451c996b5882c5d8267725f3d88118009c75.tar.gz 00:01:16.354 [Pipeline] } 00:01:16.371 [Pipeline] // retry 00:01:16.380 [Pipeline] sh 00:01:16.678 + tar --no-same-owner -xf dpdk_e7bc451c996b5882c5d8267725f3d88118009c75.tar.gz 00:01:18.084 [Pipeline] sh 00:01:18.373 + git -C dpdk log --oneline -n5 00:01:18.373 e7bc451c99 trace: disable traces at compilation 00:01:18.373 dbdf3d5581 timer: override CPU TSC frequency with OS value 00:01:18.373 7268f21aa0 timer: improve TSC estimation accuracy 00:01:18.373 8df71650e9 drivers: remove more redundant newline in Marvell drivers 00:01:18.373 41b09d64e3 eal/x86: fix 32-bit write combining store 00:01:18.384 [Pipeline] } 00:01:18.397 [Pipeline] // stage 00:01:18.406 [Pipeline] stage 00:01:18.408 [Pipeline] { (Prepare) 00:01:18.427 [Pipeline] writeFile 00:01:18.442 [Pipeline] sh 00:01:18.731 + logger -p user.info -t JENKINS-CI 00:01:18.744 [Pipeline] sh 00:01:19.031 + logger -p user.info -t JENKINS-CI 00:01:19.045 [Pipeline] sh 00:01:19.333 + cat autorun-spdk.conf 00:01:19.334 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:19.334 SPDK_TEST_NVMF=1 00:01:19.334 SPDK_TEST_NVME_CLI=1 00:01:19.334 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:19.334 SPDK_TEST_NVMF_NICS=e810 00:01:19.334 SPDK_TEST_VFIOUSER=1 00:01:19.334 SPDK_RUN_UBSAN=1 00:01:19.334 NET_TYPE=phy 00:01:19.334 SPDK_TEST_NATIVE_DPDK=main 00:01:19.334 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:19.342 RUN_NIGHTLY=1 00:01:19.346 [Pipeline] readFile 00:01:19.369 [Pipeline] withEnv 00:01:19.371 [Pipeline] { 00:01:19.383 [Pipeline] sh 00:01:19.673 + set -ex 00:01:19.673 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:19.673 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:19.673 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:19.673 ++ SPDK_TEST_NVMF=1 00:01:19.673 ++ SPDK_TEST_NVME_CLI=1 00:01:19.673 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:19.673 ++ SPDK_TEST_NVMF_NICS=e810 00:01:19.673 ++ SPDK_TEST_VFIOUSER=1 00:01:19.673 ++ SPDK_RUN_UBSAN=1 00:01:19.673 ++ NET_TYPE=phy 00:01:19.673 ++ SPDK_TEST_NATIVE_DPDK=main 00:01:19.673 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:19.673 ++ RUN_NIGHTLY=1 00:01:19.673 + case $SPDK_TEST_NVMF_NICS in 00:01:19.673 + DRIVERS=ice 00:01:19.673 + [[ tcp == \r\d\m\a ]] 00:01:19.673 + [[ -n ice ]] 00:01:19.673 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:19.673 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:19.673 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:19.673 rmmod: ERROR: Module irdma is not currently loaded 00:01:19.673 rmmod: ERROR: Module i40iw is not currently loaded 00:01:19.673 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:19.673 + true 00:01:19.673 + for D in $DRIVERS 00:01:19.673 + sudo modprobe ice 00:01:19.673 + exit 0 00:01:19.683 [Pipeline] } 00:01:19.699 [Pipeline] // withEnv 00:01:19.704 [Pipeline] } 00:01:19.717 [Pipeline] // stage 00:01:19.726 [Pipeline] catchError 00:01:19.728 [Pipeline] { 00:01:19.739 [Pipeline] timeout 00:01:19.739 Timeout set to expire in 1 hr 0 min 00:01:19.741 [Pipeline] { 00:01:19.754 [Pipeline] stage 00:01:19.757 [Pipeline] { (Tests) 00:01:19.770 [Pipeline] sh 00:01:20.057 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:20.057 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:20.057 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 
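As the trace above shows (cat autorun-spdk.conf followed by source under set -ex), the config file is plain KEY=value shell, so the job consumes it directly. A condensed sketch of that sequence, including the NIC-to-driver mapping this run exercises (SPDK_TEST_NVMF_NICS=e810 selects the in-kernel ice driver; the rmmod of competing RDMA modules is best-effort):

    set -ex
    source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
    case $SPDK_TEST_NVMF_NICS in
        e810) DRIVERS=ice ;;           # the branch taken in this run
    esac
    sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 || true   # ok if not loaded
    for D in $DRIVERS; do
        sudo modprobe "$D"
    done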
00:01:20.057 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:01:20.057 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:20.057 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:20.057 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:01:20.057 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:20.057 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:20.057 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:20.057 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:01:20.057 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:20.057 + source /etc/os-release
00:01:20.057 ++ NAME='Fedora Linux'
00:01:20.057 ++ VERSION='39 (Cloud Edition)'
00:01:20.057 ++ ID=fedora
00:01:20.057 ++ VERSION_ID=39
00:01:20.057 ++ VERSION_CODENAME=
00:01:20.057 ++ PLATFORM_ID=platform:f39
00:01:20.057 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:20.057 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:20.057 ++ LOGO=fedora-logo-icon
00:01:20.057 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:20.057 ++ HOME_URL=https://fedoraproject.org/
00:01:20.057 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:20.057 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:20.057 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:20.057 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:20.057 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:20.057 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:20.057 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:20.057 ++ SUPPORT_END=2024-11-12
00:01:20.057 ++ VARIANT='Cloud Edition'
00:01:20.057 ++ VARIANT_ID=cloud
00:01:20.057 + uname -a
00:01:20.057 Linux spdk-cyp-12 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:20.057 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:23.359 Hugepages
00:01:23.359 node   hugesize   free /  total
00:01:23.359 node0  1048576kB     0 /      0
00:01:23.359 node0     2048kB     0 /      0
00:01:23.359 node1  1048576kB     0 /      0
00:01:23.359 node1     2048kB     0 /      0
00:01:23.359
00:01:23.359 Type   BDF            Vendor Device NUMA Driver  Device Block devices
00:01:23.359 I/OAT  0000:00:01.0   8086   0b00   0    ioatdma -      -
00:01:23.359 I/OAT  0000:00:01.1   8086   0b00   0    ioatdma -      -
00:01:23.359 I/OAT  0000:00:01.2   8086   0b00   0    ioatdma -      -
00:01:23.359 I/OAT  0000:00:01.3   8086   0b00   0    ioatdma -      -
00:01:23.359 I/OAT  0000:00:01.4   8086   0b00   0    ioatdma -      -
00:01:23.359 I/OAT  0000:00:01.5   8086   0b00   0    ioatdma -      -
00:01:23.359 I/OAT  0000:00:01.6   8086   0b00   0    ioatdma -      -
00:01:23.359 I/OAT  0000:00:01.7   8086   0b00   0    ioatdma -      -
00:01:23.359 NVMe   0000:65:00.0   144d   a80a   0    nvme    nvme0  nvme0n1
00:01:23.359 I/OAT  0000:80:01.0   8086   0b00   1    ioatdma -      -
00:01:23.359 I/OAT  0000:80:01.1   8086   0b00   1    ioatdma -      -
00:01:23.359 I/OAT  0000:80:01.2   8086   0b00   1    ioatdma -      -
00:01:23.359 I/OAT  0000:80:01.3   8086   0b00   1    ioatdma -      -
00:01:23.359 I/OAT  0000:80:01.4   8086   0b00   1    ioatdma -      -
00:01:23.359 I/OAT  0000:80:01.5   8086   0b00   1    ioatdma -      -
00:01:23.359 I/OAT  0000:80:01.6   8086   0b00   1    ioatdma -      -
00:01:23.359 I/OAT  0000:80:01.7   8086   0b00   1    ioatdma -      -
00:01:23.359 + rm -f /tmp/spdk-ld-path
00:01:23.359 + source autorun-spdk.conf
00:01:23.359 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:23.359 ++ SPDK_TEST_NVMF=1
00:01:23.359 ++ SPDK_TEST_NVME_CLI=1
00:01:23.359 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:23.359 ++ SPDK_TEST_NVMF_NICS=e810
00:01:23.359 ++ SPDK_TEST_VFIOUSER=1
00:01:23.359 ++ SPDK_RUN_UBSAN=1
00:01:23.359 ++ NET_TYPE=phy
00:01:23.359 ++
SPDK_TEST_NATIVE_DPDK=main 00:01:23.359 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:23.359 ++ RUN_NIGHTLY=1 00:01:23.359 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:23.359 + [[ -n '' ]] 00:01:23.359 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:23.359 + for M in /var/spdk/build-*-manifest.txt 00:01:23.359 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:23.359 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:23.359 + for M in /var/spdk/build-*-manifest.txt 00:01:23.359 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:23.359 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:23.359 + for M in /var/spdk/build-*-manifest.txt 00:01:23.359 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:23.359 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:23.359 ++ uname 00:01:23.359 + [[ Linux == \L\i\n\u\x ]] 00:01:23.359 + sudo dmesg -T 00:01:23.359 + sudo dmesg --clear 00:01:23.359 + dmesg_pid=1490832 00:01:23.359 + [[ Fedora Linux == FreeBSD ]] 00:01:23.359 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:23.359 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:23.359 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:23.359 + [[ -x /usr/src/fio-static/fio ]] 00:01:23.359 + export FIO_BIN=/usr/src/fio-static/fio 00:01:23.359 + FIO_BIN=/usr/src/fio-static/fio 00:01:23.359 + sudo dmesg -Tw 00:01:23.359 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:23.359 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:23.359 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:23.359 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:23.359 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:23.359 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:23.359 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:23.359 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:23.359 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:23.359 Test configuration: 00:01:23.359 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:23.359 SPDK_TEST_NVMF=1 00:01:23.359 SPDK_TEST_NVME_CLI=1 00:01:23.359 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:23.359 SPDK_TEST_NVMF_NICS=e810 00:01:23.359 SPDK_TEST_VFIOUSER=1 00:01:23.359 SPDK_RUN_UBSAN=1 00:01:23.359 NET_TYPE=phy 00:01:23.359 SPDK_TEST_NATIVE_DPDK=main 00:01:23.359 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:23.359 RUN_NIGHTLY=1 10:41:43 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:01:23.359 10:41:43 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:23.359 10:41:43 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:23.360 10:41:43 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:23.360 10:41:43 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:23.360 10:41:43 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:23.360 10:41:43 -- paths/export.sh@2 -- $ 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:23.360 10:41:43 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:23.360 10:41:43 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:23.360 10:41:43 -- paths/export.sh@5 -- $ export PATH 00:01:23.360 10:41:43 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:23.360 10:41:43 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:23.360 10:41:43 -- common/autobuild_common.sh@486 -- $ date +%s 00:01:23.360 10:41:43 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728463303.XXXXXX 00:01:23.360 10:41:43 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728463303.UzFw03 00:01:23.360 10:41:43 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:01:23.360 10:41:43 -- common/autobuild_common.sh@492 -- $ '[' -n main ']' 00:01:23.360 10:41:43 -- common/autobuild_common.sh@493 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:23.360 10:41:43 -- common/autobuild_common.sh@493 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:01:23.360 10:41:43 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:23.360 10:41:43 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:23.360 10:41:43 -- common/autobuild_common.sh@502 -- $ get_config_params 00:01:23.360 10:41:43 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:01:23.360 10:41:43 -- common/autotest_common.sh@10 -- $ set +x 00:01:23.360 10:41:43 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:01:23.360 
10:41:43 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:01:23.360 10:41:43 -- pm/common@17 -- $ local monitor 00:01:23.360 10:41:43 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:23.360 10:41:43 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:23.360 10:41:43 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:23.360 10:41:43 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:23.360 10:41:43 -- pm/common@21 -- $ date +%s 00:01:23.360 10:41:43 -- pm/common@25 -- $ sleep 1 00:01:23.360 10:41:43 -- pm/common@21 -- $ date +%s 00:01:23.360 10:41:43 -- pm/common@21 -- $ date +%s 00:01:23.360 10:41:43 -- pm/common@21 -- $ date +%s 00:01:23.360 10:41:43 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728463303 00:01:23.360 10:41:43 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728463303 00:01:23.360 10:41:43 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728463303 00:01:23.360 10:41:43 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728463303 00:01:23.360 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728463303_collect-cpu-load.pm.log 00:01:23.360 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728463303_collect-vmstat.pm.log 00:01:23.360 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728463303_collect-cpu-temp.pm.log 00:01:23.360 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728463303_collect-bmc-pm.bmc.pm.log 00:01:24.303 10:41:44 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:01:24.303 10:41:44 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:24.303 10:41:44 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:24.303 10:41:44 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:24.303 10:41:44 -- spdk/autobuild.sh@16 -- $ date -u 00:01:24.303 Wed Oct 9 08:41:44 AM UTC 2024 00:01:24.303 10:41:44 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:24.303 v25.01-pre-48-ga29d7fdf9 00:01:24.303 10:41:44 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:24.303 10:41:44 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:24.303 10:41:44 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:24.303 10:41:44 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:24.303 10:41:44 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:24.303 10:41:44 -- common/autotest_common.sh@10 -- $ set +x 00:01:24.303 ************************************ 00:01:24.303 START TEST ubsan 00:01:24.303 ************************************ 00:01:24.303 10:41:44 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:01:24.303 using ubsan 00:01:24.303 00:01:24.303 real 0m0.001s 00:01:24.303 user 0m0.000s 
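The START banner, the echoed 'using ubsan', and the timing lines around it come from SPDK's run_test helper in test/common/autotest_common.sh, which wraps an arbitrary command in banners and a time measurement. A simplified sketch of the observable behavior, not the actual implementation:

    run_test() {                 # simplified; the real helper also validates arguments
        local name=$1; shift
        echo "START TEST $name"
        time "$@"                # produces the real/user/sys lines seen in the log
        echo "END TEST $name"
    }
    run_test ubsan echo 'using ubsan'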
00:01:24.303 sys 0m0.000s 00:01:24.303 10:41:44 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:01:24.303 10:41:44 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:24.303 ************************************ 00:01:24.303 END TEST ubsan 00:01:24.303 ************************************ 00:01:24.564 10:41:44 -- spdk/autobuild.sh@27 -- $ '[' -n main ']' 00:01:24.564 10:41:44 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:01:24.564 10:41:44 -- common/autobuild_common.sh@442 -- $ run_test build_native_dpdk _build_native_dpdk 00:01:24.564 10:41:44 -- common/autotest_common.sh@1101 -- $ '[' 2 -le 1 ']' 00:01:24.564 10:41:44 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:24.564 10:41:44 -- common/autotest_common.sh@10 -- $ set +x 00:01:24.564 ************************************ 00:01:24.564 START TEST build_native_dpdk 00:01:24.564 ************************************ 00:01:24.564 10:41:44 build_native_dpdk -- common/autotest_common.sh@1125 -- $ _build_native_dpdk 00:01:24.564 10:41:44 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:01:24.564 10:41:44 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:01:24.564 10:41:44 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:01:24.564 10:41:44 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:01:24.564 10:41:44 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:01:24.564 10:41:44 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:01:24.564 10:41:44 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:01:24.564 10:41:44 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:01:24.565 10:41:44 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:01:24.565 10:41:44 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:01:24.565 10:41:44 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:01:24.565 10:41:44 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:01:24.565 10:41:44 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:01:24.565 10:41:44 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:01:24.565 10:41:44 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:24.565 10:41:44 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:24.565 10:41:44 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:24.565 10:41:44 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]] 00:01:24.565 10:41:44 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:24.565 10:41:44 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5 00:01:24.565 e7bc451c99 trace: disable traces at compilation 00:01:24.565 dbdf3d5581 timer: override CPU TSC frequency with OS value 00:01:24.565 7268f21aa0 timer: improve TSC estimation accuracy 00:01:24.565 8df71650e9 drivers: remove more redundant newline in Marvell drivers 00:01:24.565 41b09d64e3 eal/x86: fix 32-bit write combining store 00:01:24.565 10:41:44 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:01:24.565 10:41:44 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:01:24.565 10:41:44 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=24.11.0-rc0 00:01:24.565 10:41:44 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:01:24.565 10:41:44 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:01:24.565 10:41:44 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:01:24.565 10:41:44 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:01:24.565 10:41:44 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:01:24.565 10:41:44 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:01:24.565 10:41:44 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:01:24.565 10:41:44 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:01:24.565 10:41:44 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:24.565 10:41:44 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:24.565 10:41:44 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:01:24.565 10:41:44 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:24.565 10:41:44 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:01:24.565 10:41:44 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:01:24.565 10:41:44 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 24.11.0-rc0 21.11.0 00:01:24.565 10:41:44 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 24.11.0-rc0 '<' 21.11.0 00:01:24.565 10:41:44 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:01:24.565 10:41:44 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:01:24.565 10:41:44 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:01:24.565 10:41:44 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:01:24.565 10:41:44 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:01:24.565 10:41:44 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:01:24.565 10:41:44 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:01:24.565 10:41:44 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=4 00:01:24.565 10:41:44 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:01:24.565 10:41:44 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:01:24.565 10:41:44 build_native_dpdk -- 
scripts/common.sh@344 -- $ case "$op" in 00:01:24.565 10:41:44 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:01:24.565 10:41:44 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:01:24.565 10:41:44 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:24.565 10:41:44 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 24 00:01:24.565 10:41:44 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:01:24.565 10:41:44 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:24.565 10:41:44 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:01:24.565 10:41:44 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=24 00:01:24.565 10:41:44 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:01:24.565 10:41:44 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:01:24.565 10:41:44 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:01:24.565 10:41:44 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:01:24.565 10:41:44 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:01:24.565 10:41:44 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:01:24.565 10:41:44 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:01:24.565 10:41:44 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:01:24.565 patching file config/rte_config.h 00:01:24.565 Hunk #1 succeeded at 71 (offset 12 lines). 00:01:24.565 10:41:44 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 24.11.0-rc0 24.07.0 00:01:24.565 10:41:44 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 24.11.0-rc0 '<' 24.07.0 00:01:24.565 10:41:44 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:01:24.565 10:41:44 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:01:24.565 10:41:44 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:01:24.565 10:41:44 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:01:24.565 10:41:44 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:01:24.565 10:41:44 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:01:24.565 10:41:44 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:01:24.565 10:41:44 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=4 00:01:24.565 10:41:44 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:01:24.565 10:41:44 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:01:24.565 10:41:44 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:01:24.565 10:41:44 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:01:24.565 10:41:44 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:01:24.565 10:41:44 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:24.565 10:41:44 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 24 00:01:24.565 10:41:44 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:01:24.565 10:41:44 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:24.565 10:41:44 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:01:24.565 10:41:44 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=24 00:01:24.565 10:41:44 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:01:24.565 10:41:44 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:01:24.565 10:41:44 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:24.565 10:41:44 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:01:24.565 10:41:44 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:01:24.565 10:41:44 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:01:24.565 10:41:44 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:01:24.565 10:41:44 build_native_dpdk -- scripts/common.sh@364 -- $ (( v++ )) 00:01:24.565 10:41:44 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:24.565 10:41:44 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 11 00:01:24.565 10:41:44 build_native_dpdk -- scripts/common.sh@353 -- $ local d=11 00:01:24.565 10:41:44 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 11 =~ ^[0-9]+$ ]] 00:01:24.565 10:41:44 build_native_dpdk -- scripts/common.sh@355 -- $ echo 11 00:01:24.565 10:41:44 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=11 00:01:24.565 10:41:44 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 07 00:01:24.565 10:41:44 build_native_dpdk -- scripts/common.sh@353 -- $ local d=07 00:01:24.565 10:41:44 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 07 =~ ^[0-9]+$ ]] 00:01:24.565 10:41:44 build_native_dpdk -- scripts/common.sh@355 -- $ echo 7 00:01:24.565 10:41:44 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=7 00:01:24.565 10:41:44 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:01:24.565 10:41:44 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:01:24.565 10:41:44 build_native_dpdk -- common/autobuild_common.sh@179 -- $ ge 24.11.0-rc0 24.07.0 00:01:24.565 10:41:44 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 24.11.0-rc0 '>=' 24.07.0 00:01:24.565 10:41:44 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:01:24.565 10:41:44 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:01:24.565 10:41:44 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:01:24.565 10:41:44 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:01:24.565 10:41:44 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:01:24.565 10:41:44 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:01:24.565 10:41:44 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>=' 00:01:24.565 10:41:44 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=4 00:01:24.565 10:41:44 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:01:24.565 10:41:44 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:01:24.565 10:41:44 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:01:24.565 10:41:44 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:01:24.565 10:41:44 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:01:24.565 10:41:44 build_native_dpdk -- 
scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:24.565 10:41:44 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 24 00:01:24.565 10:41:44 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:01:24.565 10:41:44 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:24.565 10:41:44 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:01:24.565 10:41:44 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=24 00:01:24.565 10:41:44 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:01:24.565 10:41:44 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:01:24.565 10:41:44 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:24.565 10:41:44 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:01:24.565 10:41:44 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:01:24.565 10:41:44 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:01:24.565 10:41:44 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:01:24.565 10:41:44 build_native_dpdk -- scripts/common.sh@364 -- $ (( v++ )) 00:01:24.565 10:41:44 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:24.565 10:41:44 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 11 00:01:24.565 10:41:44 build_native_dpdk -- scripts/common.sh@353 -- $ local d=11 00:01:24.565 10:41:44 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 11 =~ ^[0-9]+$ ]] 00:01:24.565 10:41:44 build_native_dpdk -- scripts/common.sh@355 -- $ echo 11 00:01:24.565 10:41:44 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=11 00:01:24.565 10:41:44 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 07 00:01:24.566 10:41:44 build_native_dpdk -- scripts/common.sh@353 -- $ local d=07 00:01:24.566 10:41:44 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 07 =~ ^[0-9]+$ ]] 00:01:24.566 10:41:44 build_native_dpdk -- scripts/common.sh@355 -- $ echo 7 00:01:24.566 10:41:44 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=7 00:01:24.566 10:41:44 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:01:24.566 10:41:44 build_native_dpdk -- scripts/common.sh@367 -- $ return 0 00:01:24.566 10:41:44 build_native_dpdk -- common/autobuild_common.sh@180 -- $ patch -p1 00:01:24.566 patching file drivers/bus/pci/linux/pci_uio.c 00:01:24.566 10:41:44 build_native_dpdk -- common/autobuild_common.sh@183 -- $ dpdk_kmods=false 00:01:24.566 10:41:44 build_native_dpdk -- common/autobuild_common.sh@184 -- $ uname -s 00:01:24.566 10:41:44 build_native_dpdk -- common/autobuild_common.sh@184 -- $ '[' Linux = FreeBSD ']' 00:01:24.566 10:41:44 build_native_dpdk -- common/autobuild_common.sh@188 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:01:24.566 10:41:44 build_native_dpdk -- common/autobuild_common.sh@188 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:29.855 The Meson build system 00:01:29.855 Version: 1.5.0 00:01:29.855 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:29.855 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:01:29.855 Build type: native build 00:01:29.855 Program cat found: YES 
(/usr/bin/cat) 00:01:29.855 Project name: DPDK 00:01:29.855 Project version: 24.11.0-rc0 00:01:29.855 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:29.855 C linker for the host machine: gcc ld.bfd 2.40-14 00:01:29.855 Host machine cpu family: x86_64 00:01:29.855 Host machine cpu: x86_64 00:01:29.855 Message: ## Building in Developer Mode ## 00:01:29.855 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:29.855 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:01:29.855 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:01:29.855 Program python3 (elftools) found: YES (/usr/bin/python3) modules: elftools 00:01:29.855 Program cat found: YES (/usr/bin/cat) 00:01:29.855 config/meson.build:120: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 00:01:29.855 Compiler for C supports arguments -march=native: YES 00:01:29.855 Checking for size of "void *" : 8 00:01:29.855 Checking for size of "void *" : 8 (cached) 00:01:29.855 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:01:29.855 Library m found: YES 00:01:29.855 Library numa found: YES 00:01:29.855 Has header "numaif.h" : YES 00:01:29.855 Library fdt found: NO 00:01:29.855 Library execinfo found: NO 00:01:29.855 Has header "execinfo.h" : YES 00:01:29.855 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:29.855 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:29.855 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:29.855 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:29.855 Run-time dependency openssl found: YES 3.1.1 00:01:29.855 Run-time dependency libpcap found: YES 1.10.4 00:01:29.855 Has header "pcap.h" with dependency libpcap: YES 00:01:29.855 Compiler for C supports arguments -Wcast-qual: YES 00:01:29.855 Compiler for C supports arguments -Wdeprecated: YES 00:01:29.855 Compiler for C supports arguments -Wformat: YES 00:01:29.855 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:29.855 Compiler for C supports arguments -Wformat-security: NO 00:01:29.855 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:29.855 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:29.855 Compiler for C supports arguments -Wnested-externs: YES 00:01:29.855 Compiler for C supports arguments -Wold-style-definition: YES 00:01:29.855 Compiler for C supports arguments -Wpointer-arith: YES 00:01:29.855 Compiler for C supports arguments -Wsign-compare: YES 00:01:29.855 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:29.855 Compiler for C supports arguments -Wundef: YES 00:01:29.855 Compiler for C supports arguments -Wwrite-strings: YES 00:01:29.855 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:29.855 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:29.855 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:29.855 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:29.855 Program objdump found: YES (/usr/bin/objdump) 00:01:29.855 Compiler for C supports arguments -mavx512f: YES 00:01:29.855 Checking if "AVX512 checking" compiles: YES 00:01:29.855 Fetching value of define "__SSE4_2__" : 1 00:01:29.855 Fetching value of define "__AES__" : 1 00:01:29.855 Fetching value of define "__AVX__" : 1 
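Each 'Compiler for C supports arguments' result above comes from Meson compiling a tiny probe program with the candidate flag, and each 'Fetching value of define' reads the compiler's predefined macros. Both checks can be reproduced by hand with gcc; this is an illustration of the probes, not Meson's internal command line:

    # flag support check (mirrors 'Compiler for C supports arguments -mavx512f')
    echo 'int main(void) { return 0; }' \
        | gcc -Werror -mavx512f -x c -o /dev/null - && echo YES || echo NO
    # predefined-macro check (mirrors 'Fetching value of define "__AVX512F__"')
    gcc -march=native -dM -E - </dev/null | grep -E '__AVX512(F|BW|CD|DQ|VL)__'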
00:01:29.855 Fetching value of define "__AVX2__" : 1 00:01:29.855 Fetching value of define "__AVX512BW__" : 1 00:01:29.855 Fetching value of define "__AVX512CD__" : 1 00:01:29.855 Fetching value of define "__AVX512DQ__" : 1 00:01:29.855 Fetching value of define "__AVX512F__" : 1 00:01:29.855 Fetching value of define "__AVX512VL__" : 1 00:01:29.855 Fetching value of define "__PCLMUL__" : 1 00:01:29.855 Fetching value of define "__RDRND__" : 1 00:01:29.855 Fetching value of define "__RDSEED__" : 1 00:01:29.855 Fetching value of define "__VPCLMULQDQ__" : 1 00:01:29.855 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:29.855 Message: lib/log: Defining dependency "log" 00:01:29.855 Message: lib/kvargs: Defining dependency "kvargs" 00:01:29.855 Message: lib/argparse: Defining dependency "argparse" 00:01:29.855 Message: lib/telemetry: Defining dependency "telemetry" 00:01:29.855 Checking for function "getentropy" : NO 00:01:29.855 Message: lib/eal: Defining dependency "eal" 00:01:29.855 Message: lib/ptr_compress: Defining dependency "ptr_compress" 00:01:29.855 Message: lib/ring: Defining dependency "ring" 00:01:29.855 Message: lib/rcu: Defining dependency "rcu" 00:01:29.855 Message: lib/mempool: Defining dependency "mempool" 00:01:29.855 Message: lib/mbuf: Defining dependency "mbuf" 00:01:29.855 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:29.855 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:29.855 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:29.855 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:29.855 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:29.855 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:01:29.855 Compiler for C supports arguments -mpclmul: YES 00:01:29.855 Compiler for C supports arguments -maes: YES 00:01:29.855 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:29.855 Compiler for C supports arguments -mavx512bw: YES 00:01:29.855 Compiler for C supports arguments -mavx512dq: YES 00:01:29.855 Compiler for C supports arguments -mavx512vl: YES 00:01:29.855 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:29.855 Compiler for C supports arguments -mavx2: YES 00:01:29.855 Compiler for C supports arguments -mavx: YES 00:01:29.855 Message: lib/net: Defining dependency "net" 00:01:29.855 Message: lib/meter: Defining dependency "meter" 00:01:29.855 Message: lib/ethdev: Defining dependency "ethdev" 00:01:29.855 Message: lib/pci: Defining dependency "pci" 00:01:29.855 Message: lib/cmdline: Defining dependency "cmdline" 00:01:29.855 Message: lib/metrics: Defining dependency "metrics" 00:01:29.856 Message: lib/hash: Defining dependency "hash" 00:01:29.856 Message: lib/timer: Defining dependency "timer" 00:01:29.856 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:29.856 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:29.856 Fetching value of define "__AVX512CD__" : 1 (cached) 00:01:29.856 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:29.856 Message: lib/acl: Defining dependency "acl" 00:01:29.856 Message: lib/bbdev: Defining dependency "bbdev" 00:01:29.856 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:29.856 Run-time dependency libelf found: YES 0.191 00:01:29.856 Message: lib/bpf: Defining dependency "bpf" 00:01:29.856 Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:29.856 Message: lib/compressdev: Defining dependency "compressdev" 00:01:29.856 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:29.856 
Message: lib/distributor: Defining dependency "distributor" 00:01:29.856 Message: lib/dmadev: Defining dependency "dmadev" 00:01:29.856 Message: lib/efd: Defining dependency "efd" 00:01:29.856 Message: lib/eventdev: Defining dependency "eventdev" 00:01:29.856 Message: lib/dispatcher: Defining dependency "dispatcher" 00:01:29.856 Message: lib/gpudev: Defining dependency "gpudev" 00:01:29.856 Message: lib/gro: Defining dependency "gro" 00:01:29.856 Message: lib/gso: Defining dependency "gso" 00:01:29.856 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:29.856 Message: lib/jobstats: Defining dependency "jobstats" 00:01:29.856 Message: lib/latencystats: Defining dependency "latencystats" 00:01:29.856 Message: lib/lpm: Defining dependency "lpm" 00:01:29.856 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:29.856 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:29.856 Fetching value of define "__AVX512IFMA__" : 1 00:01:29.856 Message: lib/member: Defining dependency "member" 00:01:29.856 Message: lib/pcapng: Defining dependency "pcapng" 00:01:29.856 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:29.856 Message: lib/power: Defining dependency "power" 00:01:29.856 Message: lib/rawdev: Defining dependency "rawdev" 00:01:29.856 Message: lib/regexdev: Defining dependency "regexdev" 00:01:29.856 Message: lib/mldev: Defining dependency "mldev" 00:01:29.856 Message: lib/rib: Defining dependency "rib" 00:01:29.856 Message: lib/reorder: Defining dependency "reorder" 00:01:29.856 Message: lib/sched: Defining dependency "sched" 00:01:29.856 Message: lib/security: Defining dependency "security" 00:01:29.856 Message: lib/stack: Defining dependency "stack" 00:01:29.856 Has header "linux/userfaultfd.h" : YES 00:01:29.856 Has header "linux/vduse.h" : YES 00:01:29.856 Message: lib/vhost: Defining dependency "vhost" 00:01:29.856 Message: lib/ipsec: Defining dependency "ipsec" 00:01:29.856 Message: lib/pdcp: Defining dependency "pdcp" 00:01:29.856 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:29.856 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:29.856 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:29.856 Message: lib/fib: Defining dependency "fib" 00:01:29.856 Message: lib/port: Defining dependency "port" 00:01:29.856 Message: lib/pdump: Defining dependency "pdump" 00:01:29.856 Message: lib/table: Defining dependency "table" 00:01:29.856 Message: lib/pipeline: Defining dependency "pipeline" 00:01:29.856 Message: lib/graph: Defining dependency "graph" 00:01:29.856 Message: lib/node: Defining dependency "node" 00:01:29.856 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:29.856 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:29.856 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:31.241 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:31.241 Compiler for C supports arguments -Wno-sign-compare: YES 00:01:31.241 Compiler for C supports arguments -Wno-unused-value: YES 00:01:31.241 Compiler for C supports arguments -Wno-format: YES 00:01:31.241 Compiler for C supports arguments -Wno-format-security: YES 00:01:31.241 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:01:31.241 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:31.241 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:01:31.241 Compiler for C supports arguments -Wno-unused-parameter: YES 00:01:31.241 Fetching value of define "__AVX512F__" : 1 (cached) 
00:01:31.241 Fetching value of define "__AVX512BW__" : 1 (cached)
00:01:31.241 Compiler for C supports arguments -mavx512f: YES (cached)
00:01:31.241 Compiler for C supports arguments -mavx512bw: YES (cached)
00:01:31.241 Compiler for C supports arguments -march=skylake-avx512: YES
00:01:31.241 Message: drivers/net/i40e: Defining dependency "net_i40e"
00:01:31.241 Has header "sys/epoll.h" : YES
00:01:31.241 Program doxygen found: YES (/usr/local/bin/doxygen)
00:01:31.241 Configuring doxy-api-html.conf using configuration
00:01:31.241 Configuring doxy-api-man.conf using configuration
00:01:31.241 Program mandb found: YES (/usr/bin/mandb)
00:01:31.241 Program sphinx-build found: NO
00:01:31.241 Configuring rte_build_config.h using configuration
00:01:31.241 Message:
00:01:31.241 =================
00:01:31.241 Applications Enabled
00:01:31.241 =================
00:01:31.241
00:01:31.241 apps:
00:01:31.241 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf,
00:01:31.241 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline,
00:01:31.241 test-pmd, test-regex, test-sad, test-security-perf,
00:01:31.241
00:01:31.241 Message:
00:01:31.241 =================
00:01:31.241 Libraries Enabled
00:01:31.241 =================
00:01:31.241
00:01:31.241 libs:
00:01:31.241 log, kvargs, argparse, telemetry, eal, ptr_compress, ring, rcu,
00:01:31.241 mempool, mbuf, net, meter, ethdev, pci, cmdline, metrics,
00:01:31.241 hash, timer, acl, bbdev, bitratestats, bpf, cfgfile, compressdev,
00:01:31.241 cryptodev, distributor, dmadev, efd, eventdev, dispatcher, gpudev, gro,
00:01:31.241 gso, ip_frag, jobstats, latencystats, lpm, member, pcapng, power,
00:01:31.241 rawdev, regexdev, mldev, rib, reorder, sched, security, stack,
00:01:31.241 vhost, ipsec, pdcp, fib, port, pdump, table, pipeline,
00:01:31.241 graph, node,
00:01:31.241
00:01:31.241 Message:
00:01:31.241 ===============
00:01:31.241 Drivers Enabled
00:01:31.241 ===============
00:01:31.241
00:01:31.241 common:
00:01:31.241
00:01:31.241 bus:
00:01:31.241 pci, vdev,
00:01:31.241 mempool:
00:01:31.241 ring,
00:01:31.241 dma:
00:01:31.241
00:01:31.241 net:
00:01:31.241 i40e,
00:01:31.241 raw:
00:01:31.241
00:01:31.241 crypto:
00:01:31.241
00:01:31.241 compress:
00:01:31.241
00:01:31.241 regex:
00:01:31.241
00:01:31.241 ml:
00:01:31.241
00:01:31.241 vdpa:
00:01:31.241
00:01:31.241 event:
00:01:31.241
00:01:31.241 baseband:
00:01:31.241
00:01:31.241 gpu:
00:01:31.241
00:01:31.241
00:01:31.241 Message:
00:01:31.241 =================
00:01:31.241 Content Skipped
00:01:31.241 =================
00:01:31.241
00:01:31.241 apps:
00:01:31.241
00:01:31.241 libs:
00:01:31.241
00:01:31.241 drivers:
00:01:31.241 common/cpt: not in enabled drivers build config
00:01:31.241 common/dpaax: not in enabled drivers build config
00:01:31.241 common/iavf: not in enabled drivers build config
00:01:31.241 common/idpf: not in enabled drivers build config
00:01:31.241 common/ionic: not in enabled drivers build config
00:01:31.241 common/mvep: not in enabled drivers build config
00:01:31.241 common/octeontx: not in enabled drivers build config
00:01:31.241 bus/auxiliary: not in enabled drivers build config
00:01:31.241 bus/cdx: not in enabled drivers build config
00:01:31.241 bus/dpaa: not in enabled drivers build config
00:01:31.241 bus/fslmc: not in enabled drivers build config
00:01:31.241 bus/ifpga: not in enabled drivers build config
00:01:31.241 bus/platform: not in enabled drivers build config
00:01:31.241 bus/uacce: not in enabled drivers build config
00:01:31.241 bus/vmbus: not in enabled drivers build config
00:01:31.241 common/cnxk: not in enabled drivers build config
00:01:31.241 common/mlx5: not in enabled drivers build config
00:01:31.241 common/nfp: not in enabled drivers build config
00:01:31.241 common/nitrox: not in enabled drivers build config
00:01:31.241 common/qat: not in enabled drivers build config
00:01:31.241 common/sfc_efx: not in enabled drivers build config
00:01:31.241 mempool/bucket: not in enabled drivers build config
00:01:31.242 mempool/cnxk: not in enabled drivers build config
00:01:31.242 mempool/dpaa: not in enabled drivers build config
00:01:31.242 mempool/dpaa2: not in enabled drivers build config
00:01:31.242 mempool/octeontx: not in enabled drivers build config
00:01:31.242 mempool/stack: not in enabled drivers build config
00:01:31.242 dma/cnxk: not in enabled drivers build config
00:01:31.242 dma/dpaa: not in enabled drivers build config
00:01:31.242 dma/dpaa2: not in enabled drivers build config
00:01:31.242 dma/hisilicon: not in enabled drivers build config
00:01:31.242 dma/idxd: not in enabled drivers build config
00:01:31.242 dma/ioat: not in enabled drivers build config
00:01:31.242 dma/odm: not in enabled drivers build config
00:01:31.242 dma/skeleton: not in enabled drivers build config
00:01:31.242 net/af_packet: not in enabled drivers build config
00:01:31.242 net/af_xdp: not in enabled drivers build config
00:01:31.242 net/ark: not in enabled drivers build config
00:01:31.242 net/atlantic: not in enabled drivers build config
00:01:31.242 net/avp: not in enabled drivers build config
00:01:31.242 net/axgbe: not in enabled drivers build config
00:01:31.242 net/bnx2x: not in enabled drivers build config
00:01:31.242 net/bnxt: not in enabled drivers build config
00:01:31.242 net/bonding: not in enabled drivers build config
00:01:31.242 net/cnxk: not in enabled drivers build config
00:01:31.242 net/cpfl: not in enabled drivers build config
00:01:31.242 net/cxgbe: not in enabled drivers build config
00:01:31.242 net/dpaa: not in enabled drivers build config
00:01:31.242 net/dpaa2: not in enabled drivers build config
00:01:31.242 net/e1000: not in enabled drivers build config
00:01:31.242 net/ena: not in enabled drivers build config
00:01:31.242 net/enetc: not in enabled drivers build config
00:01:31.242 net/enetfec: not in enabled drivers build config
00:01:31.242 net/enic: not in enabled drivers build config
00:01:31.242 net/failsafe: not in enabled drivers build config
00:01:31.242 net/fm10k: not in enabled drivers build config
00:01:31.242 net/gve: not in enabled drivers build config
00:01:31.242 net/hinic: not in enabled drivers build config
00:01:31.242 net/hns3: not in enabled drivers build config
00:01:31.242 net/iavf: not in enabled drivers build config
00:01:31.242 net/ice: not in enabled drivers build config
00:01:31.242 net/idpf: not in enabled drivers build config
00:01:31.242 net/igc: not in enabled drivers build config
00:01:31.242 net/ionic: not in enabled drivers build config
00:01:31.242 net/ipn3ke: not in enabled drivers build config
00:01:31.242 net/ixgbe: not in enabled drivers build config
00:01:31.242 net/mana: not in enabled drivers build config
00:01:31.242 net/memif: not in enabled drivers build config
00:01:31.242 net/mlx4: not in enabled drivers build config
00:01:31.242 net/mlx5: not in enabled drivers build config
00:01:31.242 net/mvneta: not in enabled drivers build config
00:01:31.242 net/mvpp2: not in enabled drivers build config
00:01:31.242 net/netvsc: not in enabled drivers build config
00:01:31.242 net/nfb: not in enabled drivers build config
00:01:31.242 net/nfp: not in enabled drivers build config
00:01:31.242 net/ngbe: not in enabled drivers build config
00:01:31.242 net/ntnic: not in enabled drivers build config
00:01:31.242 net/null: not in enabled drivers build config
00:01:31.242 net/octeontx: not in enabled drivers build config
00:01:31.242 net/octeon_ep: not in enabled drivers build config
00:01:31.242 net/pcap: not in enabled drivers build config
00:01:31.242 net/pfe: not in enabled drivers build config
00:01:31.242 net/qede: not in enabled drivers build config
00:01:31.242 net/ring: not in enabled drivers build config
00:01:31.242 net/sfc: not in enabled drivers build config
00:01:31.242 net/softnic: not in enabled drivers build config
00:01:31.242 net/tap: not in enabled drivers build config
00:01:31.242 net/thunderx: not in enabled drivers build config
00:01:31.242 net/txgbe: not in enabled drivers build config
00:01:31.242 net/vdev_netvsc: not in enabled drivers build config
00:01:31.242 net/vhost: not in enabled drivers build config
00:01:31.242 net/virtio: not in enabled drivers build config
00:01:31.242 net/vmxnet3: not in enabled drivers build config
00:01:31.242 raw/cnxk_bphy: not in enabled drivers build config
00:01:31.242 raw/cnxk_gpio: not in enabled drivers build config
00:01:31.242 raw/dpaa2_cmdif: not in enabled drivers build config
00:01:31.242 raw/ifpga: not in enabled drivers build config
00:01:31.242 raw/ntb: not in enabled drivers build config
00:01:31.242 raw/skeleton: not in enabled drivers build config
00:01:31.242 crypto/armv8: not in enabled drivers build config
00:01:31.242 crypto/bcmfs: not in enabled drivers build config
00:01:31.242 crypto/caam_jr: not in enabled drivers build config
00:01:31.242 crypto/ccp: not in enabled drivers build config
00:01:31.242 crypto/cnxk: not in enabled drivers build config
00:01:31.242 crypto/dpaa_sec: not in enabled drivers build config
00:01:31.242 crypto/dpaa2_sec: not in enabled drivers build config
00:01:31.242 crypto/ionic: not in enabled drivers build config
00:01:31.242 crypto/ipsec_mb: not in enabled drivers build config
00:01:31.242 crypto/mlx5: not in enabled drivers build config
00:01:31.242 crypto/mvsam: not in enabled drivers build config
00:01:31.242 crypto/nitrox: not in enabled drivers build config
00:01:31.242 crypto/null: not in enabled drivers build config
00:01:31.242 crypto/octeontx: not in enabled drivers build config
00:01:31.242 crypto/openssl: not in enabled drivers build config
00:01:31.242 crypto/scheduler: not in enabled drivers build config
00:01:31.242 crypto/uadk: not in enabled drivers build config
00:01:31.242 crypto/virtio: not in enabled drivers build config
00:01:31.242 compress/isal: not in enabled drivers build config
00:01:31.242 compress/mlx5: not in enabled drivers build config
00:01:31.242 compress/nitrox: not in enabled drivers build config
00:01:31.242 compress/octeontx: not in enabled drivers build config
00:01:31.242 compress/uadk: not in enabled drivers build config
00:01:31.242 compress/zlib: not in enabled drivers build config
00:01:31.242 regex/mlx5: not in enabled drivers build config
00:01:31.242 regex/cn9k: not in enabled drivers build config
00:01:31.242 ml/cnxk: not in enabled drivers build config
00:01:31.242 vdpa/ifc: not in enabled drivers build config
00:01:31.242 vdpa/mlx5: not in enabled drivers build config
00:01:31.242 vdpa/nfp: not in enabled drivers build config
00:01:31.242 vdpa/sfc: not in enabled drivers build config
00:01:31.242 event/cnxk: not in enabled drivers build config
00:01:31.242 event/dlb2: not in enabled drivers build config
00:01:31.242 event/dpaa: not in enabled drivers build config
00:01:31.242 event/dpaa2: not in enabled drivers build config
00:01:31.242 event/dsw: not in enabled drivers build config
00:01:31.242 event/opdl: not in enabled drivers build config
00:01:31.242 event/skeleton: not in enabled drivers build config
00:01:31.242 event/sw: not in enabled drivers build config
00:01:31.242 event/octeontx: not in enabled drivers build config
00:01:31.242 baseband/acc: not in enabled drivers build config
00:01:31.242 baseband/fpga_5gnr_fec: not in enabled drivers build config
00:01:31.242 baseband/fpga_lte_fec: not in enabled drivers build config
00:01:31.242 baseband/la12xx: not in enabled drivers build config
00:01:31.242 baseband/null: not in enabled drivers build config
00:01:31.242 baseband/turbo_sw: not in enabled drivers build config
00:01:31.242 gpu/cuda: not in enabled drivers build config
00:01:31.242
00:01:31.242
00:01:31.242 Build targets in project: 219
00:01:31.242
00:01:31.242 DPDK 24.11.0-rc0
00:01:31.242
00:01:31.242 User defined options
00:01:31.242 libdir : lib
00:01:31.242 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:31.242 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow
00:01:31.242 c_link_args :
00:01:31.242 enable_docs : false
00:01:31.242 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:01:31.242 enable_kmods : false
00:01:31.242 machine : native
00:01:31.242 tests : false
00:01:31.242
00:01:31.242 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:31.242 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated.
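For reference, the configuration summarized above corresponds to an explicit `meson setup` invocation along the following lines. This is a minimal sketch reconstructed from the "User defined options" list, using the standard DPDK meson option names; it is not the literal command issued by the CI wrapper (which this log does not show), and it uses the `meson setup` form rather than the deprecated bare `meson [options]` form the WARNING refers to.

    # Hypothetical reconstruction of the configure step recorded above;
    # the autobuild script may pass these options differently.
    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
    meson setup build-tmp \
        --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build \
        --libdir=lib \
        -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
        -Denable_docs=false \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base \
        -Denable_kmods=false \
        -Dmachine=native \
        -Dtests=false
    ninja -C build-tmp -j144   # the build step recorded next in this log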
00:01:31.514 10:41:51 build_native_dpdk -- common/autobuild_common.sh@192 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j144 00:01:31.514 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:01:31.514 [1/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:31.514 [2/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:31.514 [3/718] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:31.515 [4/718] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:31.776 [5/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:31.776 [6/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:31.776 [7/718] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:31.776 [8/718] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:31.776 [9/718] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:31.776 [10/718] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:31.776 [11/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:31.776 [12/718] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:31.776 [13/718] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:31.776 [14/718] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:31.776 [15/718] Linking static target lib/librte_kvargs.a 00:01:31.776 [16/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:31.776 [17/718] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:31.776 [18/718] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:31.776 [19/718] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:31.776 [20/718] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:31.776 [21/718] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:31.776 [22/718] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:31.776 [23/718] Linking static target lib/librte_pci.a 00:01:31.776 [24/718] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:31.776 [25/718] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:32.034 [26/718] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:32.034 [27/718] Linking static target lib/librte_log.a 00:01:32.034 [28/718] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:32.034 [29/718] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:32.034 [30/718] Compiling C object lib/librte_argparse.a.p/argparse_rte_argparse.c.o 00:01:32.034 [31/718] Linking static target lib/librte_argparse.a 00:01:32.034 [32/718] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:32.034 [33/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:32.034 [34/718] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.302 [35/718] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.302 [36/718] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:32.302 [37/718] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:32.302 [38/718] Compiling C object 
lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:32.302 [39/718] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:32.302 [40/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:32.302 [41/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:32.302 [42/718] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:01:32.302 [43/718] Compiling C object lib/librte_eal.a.p/eal_x86_rte_mmu.c.o 00:01:32.302 [44/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:32.302 [45/718] Linking static target lib/librte_cfgfile.a 00:01:32.302 [46/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:32.303 [47/718] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:32.303 [48/718] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:32.303 [49/718] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:32.303 [50/718] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:32.303 [51/718] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:32.303 [52/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:32.303 [53/718] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:32.303 [54/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:32.303 [55/718] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:32.303 [56/718] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:32.303 [57/718] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:32.303 [58/718] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:01:32.303 [59/718] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:32.303 [60/718] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:32.303 [61/718] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:32.303 [62/718] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:32.303 [63/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:32.303 [64/718] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:32.303 [65/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:32.303 [66/718] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:32.303 [67/718] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:32.303 [68/718] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:32.303 [69/718] Generating lib/argparse.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.303 [70/718] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:32.303 [71/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:32.303 [72/718] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:32.303 [73/718] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:32.303 [74/718] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:32.303 [75/718] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:01:32.303 [76/718] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:32.303 [77/718] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:32.303 [78/718] Linking static target lib/librte_meter.a 00:01:32.303 [79/718] Compiling C 
object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:32.303 [80/718] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:32.303 [81/718] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:01:32.570 [82/718] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:32.570 [83/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:32.570 [84/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:32.570 [85/718] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:32.570 [86/718] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:32.570 [87/718] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:01:32.570 [88/718] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:32.570 [89/718] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:32.570 [90/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:32.570 [91/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:32.570 [92/718] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:01:32.570 [93/718] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:01:32.570 [94/718] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:01:32.570 [95/718] Linking static target lib/librte_cmdline.a 00:01:32.570 [96/718] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:32.570 [97/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:32.570 [98/718] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:32.570 [99/718] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:01:32.570 [100/718] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:01:32.570 [101/718] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:32.570 [102/718] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:01:32.570 [103/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:32.570 [104/718] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:32.570 [105/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:32.570 [106/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:32.571 [107/718] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:32.571 [108/718] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:01:32.571 [109/718] Linking static target lib/librte_metrics.a 00:01:32.571 [110/718] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:01:32.571 [111/718] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:32.571 [112/718] Linking static target lib/librte_ring.a 00:01:32.571 [113/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:32.571 [114/718] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:32.571 [115/718] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:32.571 [116/718] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:32.571 [117/718] Linking static target lib/librte_bitratestats.a 00:01:32.571 [118/718] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:32.571 [119/718] Compiling C object 
lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:01:32.571 [120/718] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:32.571 [121/718] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:32.571 [122/718] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:32.571 [123/718] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:32.571 [124/718] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:32.571 [125/718] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:32.571 [126/718] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:32.571 [127/718] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:32.571 [128/718] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:32.571 [129/718] Linking static target lib/librte_net.a 00:01:32.571 [130/718] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:32.571 [131/718] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.571 [132/718] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:01:32.571 [133/718] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:32.831 [134/718] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:01:32.831 [135/718] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:32.831 [136/718] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:32.831 [137/718] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:01:32.831 [138/718] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:32.831 [139/718] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:32.831 [140/718] Linking target lib/librte_log.so.25.0 00:01:32.831 [141/718] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:01:32.831 [142/718] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:01:32.831 [143/718] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:32.831 [144/718] Linking static target lib/librte_compressdev.a 00:01:32.831 [145/718] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:01:32.831 [146/718] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:01:32.831 [147/718] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:32.831 [148/718] Compiling C object lib/librte_port.a.p/port_port_log.c.o 00:01:32.831 [149/718] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:32.831 [150/718] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.831 [151/718] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:32.831 [152/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:32.831 [153/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:32.831 [154/718] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:32.831 [155/718] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:32.831 [156/718] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:01:32.831 [157/718] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.831 [158/718] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.831 [159/718] Compiling C object 
lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:32.831 [160/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:32.831 [161/718] Linking static target lib/librte_timer.a 00:01:32.831 [162/718] Generating symbol file lib/librte_log.so.25.0.p/librte_log.so.25.0.symbols 00:01:32.831 [163/718] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:33.091 [164/718] Linking target lib/librte_kvargs.so.25.0 00:01:33.091 [165/718] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:01:33.091 [166/718] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.091 [167/718] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:33.091 [168/718] Linking static target lib/librte_bbdev.a 00:01:33.091 [169/718] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:01:33.091 [170/718] Linking target lib/librte_argparse.so.25.0 00:01:33.091 [171/718] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:01:33.091 [172/718] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:01:33.091 [173/718] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:01:33.091 [174/718] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:33.091 [175/718] Linking static target lib/librte_jobstats.a 00:01:33.091 [176/718] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.091 [177/718] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:33.091 [178/718] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:01:33.092 [179/718] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:33.092 [180/718] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:01:33.092 [181/718] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:01:33.092 [182/718] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.092 [183/718] Linking static target lib/librte_mempool.a 00:01:33.092 [184/718] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:33.092 [185/718] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:01:33.092 [186/718] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:33.092 [187/718] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:01:33.092 [188/718] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:01:33.092 [189/718] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:01:33.092 [190/718] Linking static target lib/librte_dmadev.a 00:01:33.092 [191/718] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:33.092 [192/718] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:01:33.092 [193/718] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:01:33.092 [194/718] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:01:33.092 [195/718] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:01:33.092 [196/718] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:01:33.092 [197/718] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:01:33.092 [198/718] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:01:33.092 [199/718] Linking static target lib/librte_gpudev.a 00:01:33.092 [200/718] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:01:33.092 [201/718] Linking static target 
lib/librte_distributor.a 00:01:33.092 [202/718] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:01:33.092 [203/718] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:33.092 [204/718] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:33.092 [205/718] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:01:33.092 [206/718] Linking static target lib/librte_stack.a 00:01:33.092 [207/718] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:33.092 [208/718] Compiling C object lib/librte_table.a.p/table_table_log.c.o 00:01:33.092 [209/718] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:01:33.092 [210/718] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:33.092 [211/718] Generating symbol file lib/librte_kvargs.so.25.0.p/librte_kvargs.so.25.0.symbols 00:01:33.092 [212/718] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:01:33.092 [213/718] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:01:33.092 [214/718] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:01:33.092 [215/718] Linking static target lib/librte_dispatcher.a 00:01:33.351 [216/718] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:01:33.351 [217/718] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:33.351 [218/718] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:01:33.351 [219/718] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:01:33.351 [220/718] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:01:33.351 [221/718] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:33.351 [222/718] Compiling C object lib/librte_member.a.p/member_rte_member_sketch_avx512.c.o 00:01:33.352 [223/718] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:01:33.352 [224/718] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:01:33.352 [225/718] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:33.352 [226/718] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:01:33.352 [227/718] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:01:33.352 [228/718] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:01:33.352 [229/718] Linking static target lib/librte_latencystats.a 00:01:33.352 [230/718] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:33.352 [231/718] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:01:33.352 [232/718] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:33.352 [233/718] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:01:33.352 [234/718] Linking static target lib/librte_mbuf.a 00:01:33.352 [235/718] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:01:33.352 [236/718] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:33.352 [237/718] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:33.352 [238/718] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:01:33.352 [239/718] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:01:33.352 [240/718] Linking static target lib/librte_gro.a 00:01:33.352 [241/718] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:01:33.352 [242/718] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:01:33.352 [243/718] Compiling 
C object lib/librte_node.a.p/node_null.c.o 00:01:33.352 [244/718] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:01:33.352 [245/718] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:01:33.352 [246/718] Linking static target lib/librte_regexdev.a 00:01:33.352 [247/718] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:33.352 [248/718] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:01:33.352 [249/718] Linking static target lib/librte_telemetry.a 00:01:33.352 [250/718] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:01:33.352 [251/718] Linking static target lib/librte_gso.a 00:01:33.352 [252/718] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:33.352 [253/718] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:33.352 [254/718] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:33.352 [255/718] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:01:33.352 [256/718] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:33.352 [257/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:33.352 [258/718] Linking static target lib/librte_reorder.a 00:01:33.352 [259/718] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:01:33.352 [260/718] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:33.352 [261/718] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:33.352 [262/718] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:33.352 [263/718] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:33.352 [264/718] Linking static target lib/librte_power.a 00:01:33.352 [265/718] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:33.352 [266/718] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:01:33.352 [267/718] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:33.352 [268/718] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:01:33.352 [269/718] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.352 [270/718] Linking static target lib/librte_rawdev.a 00:01:33.352 [271/718] Linking static target lib/librte_rcu.a 00:01:33.618 [272/718] Linking static target lib/librte_security.a 00:01:33.618 [273/718] Linking static target lib/librte_bpf.a 00:01:33.618 [274/718] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:01:33.618 [275/718] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:01:33.618 [276/718] Linking static target lib/librte_eal.a 00:01:33.618 [277/718] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:01:33.618 [278/718] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.618 [279/718] Linking static target lib/librte_mldev.a 00:01:33.618 [280/718] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.618 [281/718] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.618 [282/718] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:01:33.618 [283/718] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:01:33.618 [284/718] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.618 [285/718] Compiling C object 
lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:33.618 [286/718] Linking static target lib/librte_ip_frag.a 00:01:33.618 [287/718] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:01:33.618 [288/718] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:01:33.618 [289/718] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:01:33.618 [290/718] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:01:33.618 [291/718] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.618 [292/718] Linking static target lib/librte_pcapng.a 00:01:33.618 [293/718] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:01:33.618 [294/718] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:33.618 [295/718] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:01:33.618 [296/718] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:01:33.618 [297/718] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:01:33.618 [298/718] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:01:33.618 [299/718] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:01:33.618 [300/718] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.618 [301/718] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.618 [302/718] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:01:33.618 [303/718] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:01:33.618 [304/718] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:01:33.618 [305/718] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:01:33.618 [306/718] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:01:33.882 [307/718] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:01:33.882 [308/718] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:01:33.882 [309/718] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:01:33.882 [310/718] Linking static target lib/librte_rib.a 00:01:33.882 [311/718] Generating app/graph/commands_hdr with a custom command (wrapped by meson to capture output) 00:01:33.882 [312/718] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:01:33.882 [313/718] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:01:33.882 [314/718] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:01:33.882 [315/718] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:01:33.882 [316/718] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:01:33.882 [317/718] Linking static target lib/librte_efd.a 00:01:33.882 [318/718] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:01:33.882 [319/718] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:01:33.882 [320/718] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.882 [321/718] Compiling C object lib/librte_node.a.p/node_log.c.o 00:01:33.882 [322/718] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:01:33.882 [323/718] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.882 [324/718] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:01:33.882 [325/718] Compiling C object 
lib/librte_node.a.p/node_pkt_drop.c.o 00:01:33.882 [326/718] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:33.882 [327/718] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:01:33.882 [328/718] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:01:33.882 [329/718] Linking static target lib/librte_lpm.a 00:01:33.882 [330/718] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:01:33.882 [331/718] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:01:33.882 [332/718] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:01:33.882 [333/718] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:01:33.882 [334/718] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:01:33.882 [335/718] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:01:33.882 [336/718] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:01:33.882 [337/718] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.882 [338/718] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.882 [339/718] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:01:33.882 [340/718] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:01:33.882 [341/718] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.882 [342/718] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.882 [343/718] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:33.882 [344/718] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:01:33.882 [345/718] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.882 [346/718] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.882 [347/718] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:01:34.146 [348/718] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:01:34.146 [349/718] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:01:34.146 [350/718] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:34.146 [351/718] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:01:34.146 [352/718] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:01:34.146 [353/718] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.146 [354/718] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.146 [355/718] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:34.146 [356/718] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:01:34.146 [357/718] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:01:34.146 [358/718] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:01:34.146 [359/718] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:34.146 [360/718] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:01:34.146 [361/718] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:34.146 [362/718] Linking target lib/librte_telemetry.so.25.0 00:01:34.146 [363/718] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 
00:01:34.146 [364/718] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:01:34.146 [365/718] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:01:34.146 [366/718] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:34.146 [367/718] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:01:34.146 [368/718] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:01:34.146 [369/718] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:01:34.146 [370/718] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:01:34.146 [371/718] Linking static target lib/librte_fib.a 00:01:34.146 [372/718] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.146 [373/718] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:01:34.146 [374/718] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.146 [375/718] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:01:34.146 [376/718] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.146 [377/718] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.146 [378/718] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.146 [379/718] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:01:34.146 [380/718] Linking static target lib/librte_graph.a 00:01:34.146 [381/718] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:01:34.146 [382/718] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:01:34.146 [383/718] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:01:34.405 [384/718] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:01:34.405 [385/718] Generating symbol file lib/librte_telemetry.so.25.0.p/librte_telemetry.so.25.0.symbols 00:01:34.405 [386/718] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:01:34.405 [387/718] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:34.405 [388/718] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:34.405 [389/718] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:01:34.405 [390/718] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:01:34.405 [391/718] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:01:34.406 [392/718] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:01:34.406 [393/718] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:01:34.406 [394/718] Linking static target lib/librte_pdump.a 00:01:34.406 [395/718] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:01:34.406 [396/718] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:01:34.406 [397/718] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:01:34.406 [398/718] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:34.406 [399/718] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:34.406 [400/718] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:01:34.406 [401/718] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.406 [402/718] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 
00:01:34.406 [403/718] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.406 [404/718] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:01:34.406 [405/718] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:01:34.406 [406/718] Compiling C object drivers/librte_bus_vdev.so.25.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:34.406 [407/718] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:01:34.406 [408/718] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:34.406 [409/718] Linking static target drivers/librte_bus_vdev.a 00:01:34.406 [410/718] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:34.406 [411/718] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:01:34.406 [412/718] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:01:34.406 [413/718] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:01:34.406 [414/718] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:01:34.406 [415/718] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:34.406 [416/718] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.664 [417/718] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:01:34.664 [418/718] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:01:34.664 [419/718] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:01:34.664 [420/718] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:01:34.664 [421/718] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.664 [422/718] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:01:34.664 [423/718] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:01:34.664 [424/718] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:01:34.664 [425/718] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:01:34.664 [426/718] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.664 [427/718] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:01:34.664 [428/718] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:01:34.664 [429/718] Compiling C object app/dpdk-graph.p/graph_l2fwd.c.o 00:01:34.664 [430/718] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:01:34.664 [431/718] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:01:34.664 [432/718] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:01:34.664 [433/718] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:01:34.664 [434/718] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:01:34.664 [435/718] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:34.664 [436/718] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:01:34.664 [437/718] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:01:34.664 [438/718] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.664 [439/718] Compiling C object drivers/librte_bus_pci.so.25.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:34.664 [440/718] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:34.664 [441/718] Linking static target 
lib/librte_table.a 00:01:34.664 [442/718] Linking static target drivers/librte_bus_pci.a 00:01:34.664 [443/718] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:01:34.664 [444/718] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:01:34.664 [445/718] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:01:34.664 [446/718] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:01:34.664 [447/718] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:01:34.664 [448/718] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:01:34.664 [449/718] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:01:34.664 [450/718] Linking static target lib/librte_sched.a 00:01:34.664 [451/718] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:01:34.664 [452/718] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:01:34.664 [453/718] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:34.664 [454/718] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:01:34.664 [455/718] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:01:34.664 [456/718] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:01:34.664 [457/718] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:34.664 [458/718] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.664 [459/718] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:01:34.664 [460/718] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:01:34.664 [461/718] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:01:34.664 [462/718] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:01:34.664 [463/718] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:01:34.924 [464/718] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.924 [465/718] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:01:34.924 [466/718] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:01:34.924 [467/718] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:01:34.924 [468/718] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:01:34.924 [469/718] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:01:34.924 [470/718] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:01:34.924 [471/718] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:01:34.924 [472/718] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:01:34.924 [473/718] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:01:34.924 [474/718] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:01:34.924 [475/718] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:01:34.924 [476/718] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:01:34.924 [477/718] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:01:34.924 [478/718] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:01:34.924 [479/718] Compiling C object 
app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:01:34.924 [480/718] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:34.924 [481/718] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:01:34.924 [482/718] Linking static target lib/librte_cryptodev.a 00:01:34.924 [483/718] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:01:34.924 [484/718] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:01:34.924 [485/718] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:01:34.924 [486/718] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:01:34.924 [487/718] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:01:34.924 [488/718] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:01:34.924 [489/718] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:01:34.924 [490/718] Linking static target lib/librte_node.a 00:01:34.924 [491/718] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:01:34.924 [492/718] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:01:34.924 [493/718] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:01:34.924 [494/718] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:01:34.924 [495/718] Linking static target lib/librte_member.a 00:01:34.924 [496/718] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:34.924 [497/718] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:01:34.924 [498/718] Compiling C object drivers/librte_mempool_ring.so.25.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:34.924 [499/718] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:01:34.924 [500/718] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:34.924 [501/718] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:01:34.924 [502/718] Linking static target drivers/librte_mempool_ring.a 00:01:34.924 [503/718] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:01:34.924 [504/718] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.924 [505/718] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:01:34.924 [506/718] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:01:34.924 [507/718] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:01:34.924 [508/718] Linking static target lib/librte_ipsec.a 00:01:34.924 [509/718] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:01:34.924 [510/718] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:01:34.924 [511/718] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:01:34.924 [512/718] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:01:35.187 [513/718] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:01:35.187 [514/718] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:01:35.187 [515/718] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:01:35.187 [516/718] Compiling C object 
app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:01:35.187 [517/718] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:01:35.187 [518/718] Linking static target lib/librte_pdcp.a 00:01:35.187 [519/718] Compiling C object app/dpdk-test-security-perf.p/test_test_security_proto.c.o 00:01:35.187 [520/718] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.187 [521/718] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:01:35.187 [522/718] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:01:35.187 [523/718] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:01:35.187 [524/718] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:01:35.187 [525/718] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:01:35.187 [526/718] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:01:35.187 [527/718] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:01:35.187 [528/718] Linking static target lib/acl/libavx2_tmp.a 00:01:35.187 [529/718] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:01:35.187 [530/718] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:01:35.187 [531/718] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:01:35.187 [532/718] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:01:35.187 [533/718] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:35.187 [534/718] Linking static target lib/librte_hash.a 00:01:35.187 [535/718] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:01:35.187 [536/718] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:01:35.187 [537/718] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:01:35.187 [538/718] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:01:35.187 [539/718] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.187 [540/718] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:01:35.187 [541/718] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:01:35.187 [542/718] Linking static target lib/librte_port.a 00:01:35.187 [543/718] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:01:35.187 [544/718] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:01:35.187 [545/718] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:01:35.187 [546/718] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:01:35.448 [547/718] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:01:35.448 [548/718] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:01:35.448 [549/718] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:01:35.448 [550/718] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:01:35.448 [551/718] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.448 [552/718] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:01:35.448 [553/718] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.448 [554/718] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:01:35.448 [555/718] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 
00:01:35.448 [556/718] Linking static target lib/librte_eventdev.a 00:01:35.448 [557/718] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:01:35.448 [558/718] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:01:35.448 [559/718] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:01:35.448 [560/718] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:01:35.448 [561/718] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:01:35.448 [562/718] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.448 [563/718] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:01:35.448 [564/718] Linking static target drivers/net/i40e/base/libi40e_base.a 00:01:35.448 [565/718] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:01:35.448 [566/718] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:01:35.448 [567/718] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:01:35.448 [568/718] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.448 [569/718] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:01:35.448 [570/718] Linking static target lib/librte_acl.a 00:01:35.448 [571/718] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:01:35.710 [572/718] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:01:35.710 [573/718] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.710 [574/718] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:01:35.710 [575/718] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.710 [576/718] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:01:35.710 [577/718] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:01:35.710 [578/718] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:01:35.972 [579/718] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:01:35.972 [580/718] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:01:35.972 [581/718] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.233 [582/718] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.233 [583/718] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.233 [584/718] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:01:36.233 [585/718] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:01:36.496 [586/718] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:36.496 [587/718] Linking static target lib/librte_ethdev.a 00:01:36.496 [588/718] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:01:36.758 [589/718] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:01:36.758 [590/718] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:01:37.018 [591/718] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.279 [592/718] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:01:37.279 [593/718] Compiling C object 
app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:01:37.279 [594/718] Linking static target drivers/libtmp_rte_net_i40e.a 00:01:37.540 [595/718] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:01:37.540 [596/718] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:37.540 [597/718] Compiling C object drivers/librte_net_i40e.so.25.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:37.540 [598/718] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:37.540 [599/718] Linking static target drivers/librte_net_i40e.a 00:01:38.924 [600/718] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:01:38.924 [601/718] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.924 [602/718] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:01:38.924 [603/718] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.127 [604/718] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:01:43.127 [605/718] Linking static target lib/librte_pipeline.a 00:01:44.519 [606/718] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.519 [607/718] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:44.519 [608/718] Linking target lib/librte_eal.so.25.0 00:01:44.779 [609/718] Linking static target lib/librte_vhost.a 00:01:44.779 [610/718] Generating symbol file lib/librte_eal.so.25.0.p/librte_eal.so.25.0.symbols 00:01:44.779 [611/718] Linking target lib/librte_ring.so.25.0 00:01:44.780 [612/718] Linking target lib/librte_meter.so.25.0 00:01:44.780 [613/718] Linking target lib/librte_timer.so.25.0 00:01:44.780 [614/718] Linking target lib/librte_pci.so.25.0 00:01:44.780 [615/718] Linking target lib/librte_jobstats.so.25.0 00:01:44.780 [616/718] Linking target lib/librte_dmadev.so.25.0 00:01:44.780 [617/718] Linking target lib/librte_cfgfile.so.25.0 00:01:44.780 [618/718] Linking target lib/librte_rawdev.so.25.0 00:01:44.780 [619/718] Linking target lib/librte_stack.so.25.0 00:01:44.780 [620/718] Linking target drivers/librte_bus_vdev.so.25.0 00:01:44.780 [621/718] Linking target lib/librte_acl.so.25.0 00:01:45.039 [622/718] Generating symbol file lib/librte_acl.so.25.0.p/librte_acl.so.25.0.symbols 00:01:45.039 [623/718] Linking target app/dpdk-test-fib 00:01:45.039 [624/718] Linking target app/dpdk-graph 00:01:45.039 [625/718] Generating symbol file lib/librte_timer.so.25.0.p/librte_timer.so.25.0.symbols 00:01:45.039 [626/718] Generating symbol file lib/librte_dmadev.so.25.0.p/librte_dmadev.so.25.0.symbols 00:01:45.039 [627/718] Generating symbol file lib/librte_meter.so.25.0.p/librte_meter.so.25.0.symbols 00:01:45.039 [628/718] Linking target app/dpdk-dumpcap 00:01:45.039 [629/718] Linking target app/dpdk-test-gpudev 00:01:45.039 [630/718] Linking target app/dpdk-test-regex 00:01:45.039 [631/718] Linking target app/dpdk-test-flow-perf 00:01:45.039 [632/718] Linking target app/dpdk-test-mldev 00:01:45.039 [633/718] Generating symbol file lib/librte_pci.so.25.0.p/librte_pci.so.25.0.symbols 00:01:45.039 [634/718] Generating symbol file lib/librte_ring.so.25.0.p/librte_ring.so.25.0.symbols 00:01:45.039 [635/718] Generating symbol file drivers/librte_bus_vdev.so.25.0.p/librte_bus_vdev.so.25.0.symbols 00:01:45.039 [636/718] Linking target drivers/librte_bus_pci.so.25.0 00:01:45.039 [637/718] Linking target app/dpdk-proc-info 00:01:45.039 
[638/718] Linking target app/dpdk-test-compress-perf 00:01:45.039 [639/718] Linking target lib/librte_rcu.so.25.0 00:01:45.039 [640/718] Linking target app/dpdk-test-sad 00:01:45.039 [641/718] Linking target lib/librte_mempool.so.25.0 00:01:45.039 [642/718] Linking target app/dpdk-pdump 00:01:45.039 [643/718] Linking target app/dpdk-test-cmdline 00:01:45.039 [644/718] Linking target app/dpdk-test-acl 00:01:45.039 [645/718] Linking target app/dpdk-test-dma-perf 00:01:45.039 [646/718] Linking target app/dpdk-test-pipeline 00:01:45.039 [647/718] Linking target app/dpdk-test-security-perf 00:01:45.039 [648/718] Linking target app/dpdk-test-crypto-perf 00:01:45.039 [649/718] Linking target app/dpdk-test-bbdev 00:01:45.039 [650/718] Linking target app/dpdk-test-eventdev 00:01:45.039 [651/718] Linking target app/dpdk-testpmd 00:01:45.039 [652/718] Generating symbol file drivers/librte_bus_pci.so.25.0.p/librte_bus_pci.so.25.0.symbols 00:01:45.336 [653/718] Generating symbol file lib/librte_rcu.so.25.0.p/librte_rcu.so.25.0.symbols 00:01:45.336 [654/718] Generating symbol file lib/librte_mempool.so.25.0.p/librte_mempool.so.25.0.symbols 00:01:45.336 [655/718] Linking target drivers/librte_mempool_ring.so.25.0 00:01:45.336 [656/718] Linking target lib/librte_mbuf.so.25.0 00:01:45.336 [657/718] Linking target lib/librte_rib.so.25.0 00:01:45.336 [658/718] Generating symbol file lib/librte_mbuf.so.25.0.p/librte_mbuf.so.25.0.symbols 00:01:45.336 [659/718] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.336 [660/718] Generating symbol file lib/librte_rib.so.25.0.p/librte_rib.so.25.0.symbols 00:01:45.336 [661/718] Linking target lib/librte_sched.so.25.0 00:01:45.623 [662/718] Linking target lib/librte_bbdev.so.25.0 00:01:45.623 [663/718] Linking target lib/librte_compressdev.so.25.0 00:01:45.623 [664/718] Linking target lib/librte_gpudev.so.25.0 00:01:45.623 [665/718] Linking target lib/librte_net.so.25.0 00:01:45.623 [666/718] Linking target lib/librte_distributor.so.25.0 00:01:45.623 [667/718] Linking target lib/librte_reorder.so.25.0 00:01:45.623 [668/718] Linking target lib/librte_regexdev.so.25.0 00:01:45.623 [669/718] Linking target lib/librte_mldev.so.25.0 00:01:45.623 [670/718] Linking target lib/librte_cryptodev.so.25.0 00:01:45.623 [671/718] Linking target lib/librte_fib.so.25.0 00:01:45.623 [672/718] Generating symbol file lib/librte_reorder.so.25.0.p/librte_reorder.so.25.0.symbols 00:01:45.623 [673/718] Generating symbol file lib/librte_sched.so.25.0.p/librte_sched.so.25.0.symbols 00:01:45.623 [674/718] Generating symbol file lib/librte_net.so.25.0.p/librte_net.so.25.0.symbols 00:01:45.623 [675/718] Generating symbol file lib/librte_cryptodev.so.25.0.p/librte_cryptodev.so.25.0.symbols 00:01:45.623 [676/718] Linking target lib/librte_security.so.25.0 00:01:45.623 [677/718] Linking target lib/librte_cmdline.so.25.0 00:01:45.623 [678/718] Linking target lib/librte_hash.so.25.0 00:01:45.623 [679/718] Linking target lib/librte_ethdev.so.25.0 00:01:45.926 [680/718] Generating symbol file lib/librte_security.so.25.0.p/librte_security.so.25.0.symbols 00:01:45.926 [681/718] Generating symbol file lib/librte_hash.so.25.0.p/librte_hash.so.25.0.symbols 00:01:45.926 [682/718] Generating symbol file lib/librte_ethdev.so.25.0.p/librte_ethdev.so.25.0.symbols 00:01:45.926 [683/718] Linking target lib/librte_pdcp.so.25.0 00:01:45.926 [684/718] Linking target lib/librte_lpm.so.25.0 00:01:45.926 [685/718] Linking target lib/librte_efd.so.25.0 00:01:45.926 [686/718] 
Linking target lib/librte_gro.so.25.0 00:01:45.926 [687/718] Linking target lib/librte_metrics.so.25.0 00:01:45.926 [688/718] Linking target lib/librte_member.so.25.0 00:01:45.926 [689/718] Linking target lib/librte_ipsec.so.25.0 00:01:45.926 [690/718] Linking target lib/librte_gso.so.25.0 00:01:45.926 [691/718] Linking target lib/librte_pcapng.so.25.0 00:01:45.926 [692/718] Linking target lib/librte_bpf.so.25.0 00:01:45.926 [693/718] Linking target lib/librte_power.so.25.0 00:01:45.927 [694/718] Linking target lib/librte_ip_frag.so.25.0 00:01:45.927 [695/718] Linking target lib/librte_eventdev.so.25.0 00:01:45.927 [696/718] Linking target drivers/librte_net_i40e.so.25.0 00:01:45.927 [697/718] Generating symbol file lib/librte_metrics.so.25.0.p/librte_metrics.so.25.0.symbols 00:01:45.927 [698/718] Generating symbol file lib/librte_eventdev.so.25.0.p/librte_eventdev.so.25.0.symbols 00:01:45.927 [699/718] Generating symbol file lib/librte_ipsec.so.25.0.p/librte_ipsec.so.25.0.symbols 00:01:45.927 [700/718] Generating symbol file lib/librte_ip_frag.so.25.0.p/librte_ip_frag.so.25.0.symbols 00:01:45.927 [701/718] Generating symbol file lib/librte_pcapng.so.25.0.p/librte_pcapng.so.25.0.symbols 00:01:45.927 [702/718] Generating symbol file lib/librte_lpm.so.25.0.p/librte_lpm.so.25.0.symbols 00:01:45.927 [703/718] Generating symbol file lib/librte_bpf.so.25.0.p/librte_bpf.so.25.0.symbols 00:01:45.927 [704/718] Linking target lib/librte_bitratestats.so.25.0 00:01:45.927 [705/718] Linking target lib/librte_latencystats.so.25.0 00:01:45.927 [706/718] Linking target lib/librte_dispatcher.so.25.0 00:01:46.191 [707/718] Linking target lib/librte_port.so.25.0 00:01:46.191 [708/718] Linking target lib/librte_pdump.so.25.0 00:01:46.191 [709/718] Linking target lib/librte_graph.so.25.0 00:01:46.191 [710/718] Generating symbol file lib/librte_port.so.25.0.p/librte_port.so.25.0.symbols 00:01:46.191 [711/718] Generating symbol file lib/librte_graph.so.25.0.p/librte_graph.so.25.0.symbols 00:01:46.191 [712/718] Linking target lib/librte_table.so.25.0 00:01:46.191 [713/718] Linking target lib/librte_node.so.25.0 00:01:46.452 [714/718] Generating symbol file lib/librte_table.so.25.0.p/librte_table.so.25.0.symbols 00:01:46.713 [715/718] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.713 [716/718] Linking target lib/librte_vhost.so.25.0 00:01:48.630 [717/718] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.630 [718/718] Linking target lib/librte_pipeline.so.25.0 00:01:48.630 10:42:08 build_native_dpdk -- common/autobuild_common.sh@194 -- $ uname -s 00:01:48.630 10:42:08 build_native_dpdk -- common/autobuild_common.sh@194 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:01:48.630 10:42:08 build_native_dpdk -- common/autobuild_common.sh@207 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j144 install 00:01:48.630 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:01:48.630 [0/1] Installing files. 
00:01:48.895 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/telemetry-endpoints to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/telemetry-endpoints 00:01:48.895 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/telemetry-endpoints/memory.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/telemetry-endpoints 00:01:48.895 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/telemetry-endpoints/cpu.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/telemetry-endpoints 00:01:48.895 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/telemetry-endpoints/counters.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/telemetry-endpoints 00:01:48.895 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:01:48.895 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:01:48.895 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:01:48.895 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:01:48.895 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:01:48.895 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:01:48.895 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:01:48.895 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:01:48.895 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:01:48.895 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:48.895 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:48.895 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:48.895 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:48.895 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:48.895 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:48.895 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:48.895 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:48.895 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:48.895 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:48.895 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipv6_addr_swap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:48.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:48.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:48.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:48.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:48.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:48.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:48.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:48.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:48.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:48.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:48.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:48.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:48.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipv6_addr_swap.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:48.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:48.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:48.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:48.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec_sa.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:48.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:48.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:48.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:48.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:48.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:48.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:48.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:48.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:48.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:48.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:48.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:48.896 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:48.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:48.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:48.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:48.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:48.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:48.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:48.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:48.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:48.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:48.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:48.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:48.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:48.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:48.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:48.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:48.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:48.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:48.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:48.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:48.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:48.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:48.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:48.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:48.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:48.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:48.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:48.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:48.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:48.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:48.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:48.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:01:48.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:01:48.896 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:01:48.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:01:48.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:01:48.896 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:01:48.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:01:48.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:01:48.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:01:48.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:01:48.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:01:48.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:01:48.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:01:48.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:01:48.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:01:48.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:01:48.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:01:48.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:01:48.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:01:48.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:01:48.897 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:01:48.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:01:48.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:01:48.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:48.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:48.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:48.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:48.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:48.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:48.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:48.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:48.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:48.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:48.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:48.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:48.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:48.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:48.897 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:48.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:48.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:48.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:48.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:48.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:48.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:48.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:48.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:48.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:48.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:48.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:48.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:01:48.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:01:48.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:01:48.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:01:48.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:01:48.897 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:01:48.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:01:48.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:01:48.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:01:48.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:48.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:48.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:48.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:48.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:48.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:48.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:48.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:48.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:48.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:48.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:48.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:48.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:48.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:48.897 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:48.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:48.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:48.897 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:48.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:48.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:48.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:48.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:01:48.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:48.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:48.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:48.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:48.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:48.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:48.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:01:48.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:01:48.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:01:48.898 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:01:48.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:01:48.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:01:48.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:01:48.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:48.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:48.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:48.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:48.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:48.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:48.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:01:48.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:01:48.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:01:48.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:01:48.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:01:48.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:01:48.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:48.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:48.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:48.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:48.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:48.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:01:48.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:01:48.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:01:48.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:01:48.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:01:48.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:01:48.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:48.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:48.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:48.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:48.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:48.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:48.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:48.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:48.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:48.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:48.898 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:48.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:48.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:48.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:48.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:01:48.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:01:48.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:48.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:48.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:48.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:48.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:48.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:48.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:48.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:48.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:48.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:48.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:48.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:48.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:48.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:48.898 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:48.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:48.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:48.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:48.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:48.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:48.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:48.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:48.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:48.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:48.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:48.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:48.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:48.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:48.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:48.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:48.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:48.899 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:48.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:48.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:48.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:48.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:48.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:48.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:48.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:48.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:48.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:48.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:48.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:48.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:48.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:48.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:48.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:48.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:48.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:48.899 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:48.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:48.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:48.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:48.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:48.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:48.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:48.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:48.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:48.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:48.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:48.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:48.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:48.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:48.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:48.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:48.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:48.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:48.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:48.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:48.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:48.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:48.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:48.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:48.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:48.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:48.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:48.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:48.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:48.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:48.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:48.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:48.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:48.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:48.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:48.899 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:48.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:48.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:48.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:01:48.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:01:48.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:01:48.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:01:48.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:01:48.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:01:48.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:48.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:48.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:48.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:48.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:48.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:48.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:48.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:48.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:48.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:48.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:48.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:48.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:48.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:48.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:48.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:48.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:48.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:48.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:48.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:48.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:48.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:48.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:48.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:48.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:48.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:48.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:48.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:48.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:48.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:48.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:48.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:48.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:48.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:48.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:48.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:01:48.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:01:48.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:01:48.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:01:48.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:01:48.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:01:48.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:01:48.900 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:01:48.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:01:48.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:01:48.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:01:48.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:01:48.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:01:48.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:01:48.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:48.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:48.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:48.900 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:48.901 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:48.901 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:01:48.901 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:01:48.901 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:01:48.901 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:01:48.901 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:01:48.901 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:01:48.901 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:01:48.901 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:48.901 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:48.901 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:48.901 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:48.901 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:48.901 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:48.901 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:01:48.901 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:01:48.901 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:01:48.901 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:48.901 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:48.901 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:48.901 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:48.901 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:48.901 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:01:48.901 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:01:48.901 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:01:48.901 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:01:48.901 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:01:48.901 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:01:48.901 Installing lib/librte_log.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:48.901 Installing lib/librte_log.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:48.901 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:48.901 Installing lib/librte_kvargs.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:48.901 Installing lib/librte_argparse.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:48.901 Installing lib/librte_argparse.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:48.901 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:48.901 Installing lib/librte_telemetry.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:48.901 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:48.901 Installing lib/librte_eal.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:48.901 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:48.901 Installing lib/librte_ring.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:48.901 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:48.901 Installing lib/librte_rcu.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:48.901 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:48.901 Installing lib/librte_mempool.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:48.901 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:48.901 Installing lib/librte_mbuf.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:48.901 
Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:48.901 Installing lib/librte_net.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:48.901 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:48.901 Installing lib/librte_meter.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:48.901 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:48.901 Installing lib/librte_ethdev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:48.901 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:48.901 Installing lib/librte_pci.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:48.901 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:48.901 Installing lib/librte_cmdline.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:48.901 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:48.901 Installing lib/librte_metrics.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:48.901 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:48.901 Installing lib/librte_hash.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:48.901 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:48.901 Installing lib/librte_timer.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:48.901 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:48.901 Installing lib/librte_acl.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:48.901 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:48.901 Installing lib/librte_bbdev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:48.901 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:48.901 Installing lib/librte_bitratestats.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:48.901 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:48.901 Installing lib/librte_bpf.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:48.901 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:48.901 Installing lib/librte_cfgfile.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:48.901 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:48.901 Installing lib/librte_compressdev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:48.901 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:48.901 Installing lib/librte_cryptodev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:48.901 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:48.901 Installing lib/librte_distributor.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:48.901 Installing 
lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:48.901 Installing lib/librte_dmadev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:48.901 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:48.901 Installing lib/librte_efd.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:48.901 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:48.901 Installing lib/librte_eventdev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:48.901 Installing lib/librte_dispatcher.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:48.901 Installing lib/librte_dispatcher.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:48.901 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:48.901 Installing lib/librte_gpudev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:48.901 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:48.901 Installing lib/librte_gro.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:48.901 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:48.901 Installing lib/librte_gso.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:48.901 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:48.902 Installing lib/librte_ip_frag.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:48.902 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:48.902 Installing lib/librte_jobstats.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:48.902 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:48.902 Installing lib/librte_latencystats.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:48.902 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:49.166 Installing lib/librte_lpm.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:49.166 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:49.166 Installing lib/librte_member.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:49.166 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:49.167 Installing lib/librte_pcapng.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:49.167 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:49.167 Installing lib/librte_power.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:49.167 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:49.167 Installing lib/librte_rawdev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:49.167 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:49.167 Installing lib/librte_regexdev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:49.167 Installing lib/librte_mldev.a to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:49.167 Installing lib/librte_mldev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:49.167 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:49.167 Installing lib/librte_rib.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:49.167 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:49.167 Installing lib/librte_reorder.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:49.167 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:49.167 Installing lib/librte_sched.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:49.167 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:49.167 Installing lib/librte_security.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:49.167 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:49.167 Installing lib/librte_stack.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:49.167 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:49.167 Installing lib/librte_vhost.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:49.167 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:49.167 Installing lib/librte_ipsec.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:49.167 Installing lib/librte_pdcp.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:49.167 Installing lib/librte_pdcp.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:49.167 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:49.167 Installing lib/librte_fib.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:49.167 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:49.167 Installing lib/librte_port.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:49.167 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:49.167 Installing lib/librte_pdump.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:49.167 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:49.167 Installing lib/librte_table.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:49.167 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:49.167 Installing lib/librte_pipeline.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:49.167 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:49.167 Installing lib/librte_graph.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:49.167 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:49.167 Installing lib/librte_node.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:49.167 Installing drivers/librte_bus_pci.a to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:49.167 Installing drivers/librte_bus_pci.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0 00:01:49.167 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:49.167 Installing drivers/librte_bus_vdev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0 00:01:49.167 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:49.167 Installing drivers/librte_mempool_ring.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0 00:01:49.167 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:49.167 Installing drivers/librte_net_i40e.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0 00:01:49.167 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:49.167 Installing app/dpdk-graph to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:49.167 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:49.167 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:49.167 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:49.167 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:49.167 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:49.167 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:49.167 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:49.167 Installing app/dpdk-test-dma-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:49.167 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:49.167 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:49.167 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:49.167 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:49.167 Installing app/dpdk-test-mldev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:49.167 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:49.167 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:49.167 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:49.167 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:49.167 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:49.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/log/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/argparse/rte_argparse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:49.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:49.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:49.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:49.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:49.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:49.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:49.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:49.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:49.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:49.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:49.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:49.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.167 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.168 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lock_annotations.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_stdatomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.168 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ptr_compress/rte_ptr_compress.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.169 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_dtls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.169 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_pdcp_hdr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.170 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.170 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.170 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.170 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.170 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.170 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.170 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.170 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_dma_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.170 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.170 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.170 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.170 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.170 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.170 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.170 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.170 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dispatcher/rte_dispatcher.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.170 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.170 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.170 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.170 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.170 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.170 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.170 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.170 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.170 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.170 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.170 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.170 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.170 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.170 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.170 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.170 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.170 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.170 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.170 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.170 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.170 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.170 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.170 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.170 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.170 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.170 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.170 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.170 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.170 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.170 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.170 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.170 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.170 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.170 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.170 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.170 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.170 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.170 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.170 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.170 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.170 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.170 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.170 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.170 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.170 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.170 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.170 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.170 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.170 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.170 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.170 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.170 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.170 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.170 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.170 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.170 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.170 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.170 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.170 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.170 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.170 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.170 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.170 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.170 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.171 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.171 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.171 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.171 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.171 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.171 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.171 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.171 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.171 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.171 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.171 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.171 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.171 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.171 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.171 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.171 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.171 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.171 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.171 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.171 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.171 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.171 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.171 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.171 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.171 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.171 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.171 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
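The run above stages DPDK's public headers under the build/include prefix. A minimal sketch of compiling one translation unit directly against that staged tree (hello.c and the bare cc invocation are illustrative, not part of this job; real consumers normally take these flags from pkg-config once libdpdk.pc is installed, as happens further down):

    # hello.c is hypothetical; the -I path is the staging prefix from this log.
    cc -c hello.c \
       -I/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include \
       -o hello.o
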
00:01:49.171 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.171 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.171 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.171 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.171 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.171 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.171 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.171 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.171 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_rtc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.171 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.171 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.171 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.171 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip6_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.171 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_udp4_input_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.171 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.171 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.171 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.171 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/dpdk-cmdline-gen.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:49.171 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:49.171 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:49.171 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:49.171 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:49.171 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-rss-flows.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:49.171 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry-exporter.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:49.171 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.171 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:01:49.171 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:01:49.171 Installing symlink pointing to librte_log.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so.25 00:01:49.171 Installing symlink pointing to librte_log.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so 00:01:49.171 Installing symlink pointing to librte_kvargs.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.25 00:01:49.171 Installing symlink pointing to librte_kvargs.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:01:49.171 Installing symlink pointing to librte_argparse.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_argparse.so.25 00:01:49.171 Installing symlink pointing to librte_argparse.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_argparse.so 00:01:49.171 Installing symlink pointing to librte_telemetry.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.25 00:01:49.171 Installing symlink pointing to librte_telemetry.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:01:49.171 Installing symlink pointing to librte_eal.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.25 00:01:49.171 Installing symlink pointing to librte_eal.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:01:49.171 Installing symlink pointing to librte_ring.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.25 00:01:49.171 Installing symlink pointing to librte_ring.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:01:49.171 Installing symlink pointing to librte_rcu.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.25 00:01:49.171 Installing symlink pointing to librte_rcu.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:01:49.171 Installing symlink pointing to librte_mempool.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.25 00:01:49.171 Installing symlink pointing to librte_mempool.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:01:49.171 Installing symlink pointing to librte_mbuf.so.25.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.25 00:01:49.172 Installing symlink pointing to librte_mbuf.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:01:49.172 Installing symlink pointing to librte_net.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.25 00:01:49.172 Installing symlink pointing to librte_net.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:01:49.172 Installing symlink pointing to librte_meter.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.25 00:01:49.172 Installing symlink pointing to librte_meter.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:01:49.172 Installing symlink pointing to librte_ethdev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.25 00:01:49.172 Installing symlink pointing to librte_ethdev.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:01:49.172 Installing symlink pointing to librte_pci.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.25 00:01:49.172 Installing symlink pointing to librte_pci.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:01:49.172 Installing symlink pointing to librte_cmdline.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.25 00:01:49.172 Installing symlink pointing to librte_cmdline.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:01:49.172 Installing symlink pointing to librte_metrics.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.25 00:01:49.172 Installing symlink pointing to librte_metrics.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:01:49.172 Installing symlink pointing to librte_hash.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.25 00:01:49.172 Installing symlink pointing to librte_hash.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:01:49.172 Installing symlink pointing to librte_timer.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.25 00:01:49.172 Installing symlink pointing to librte_timer.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:01:49.172 Installing symlink pointing to librte_acl.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.25 00:01:49.172 Installing symlink pointing to librte_acl.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:01:49.172 Installing symlink pointing to librte_bbdev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.25 00:01:49.172 Installing symlink pointing to librte_bbdev.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:01:49.172 Installing symlink pointing to librte_bitratestats.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.25 00:01:49.172 Installing symlink pointing to librte_bitratestats.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:01:49.172 Installing symlink pointing to librte_bpf.so.25.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.25 00:01:49.172 Installing symlink pointing to librte_bpf.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:01:49.172 Installing symlink pointing to librte_cfgfile.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.25 00:01:49.172 Installing symlink pointing to librte_cfgfile.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:01:49.172 Installing symlink pointing to librte_compressdev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.25 00:01:49.172 Installing symlink pointing to librte_compressdev.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:01:49.172 Installing symlink pointing to librte_cryptodev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.25 00:01:49.172 Installing symlink pointing to librte_cryptodev.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:01:49.172 Installing symlink pointing to librte_distributor.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.25 00:01:49.172 Installing symlink pointing to librte_distributor.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:01:49.172 Installing symlink pointing to librte_dmadev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.25 00:01:49.172 Installing symlink pointing to librte_dmadev.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:01:49.172 Installing symlink pointing to librte_efd.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.25 00:01:49.172 Installing symlink pointing to librte_efd.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:01:49.172 Installing symlink pointing to librte_eventdev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.25 00:01:49.172 Installing symlink pointing to librte_eventdev.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:01:49.172 Installing symlink pointing to librte_dispatcher.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so.25 00:01:49.172 Installing symlink pointing to librte_dispatcher.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so 00:01:49.172 Installing symlink pointing to librte_gpudev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.25 00:01:49.172 Installing symlink pointing to librte_gpudev.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:01:49.172 Installing symlink pointing to librte_gro.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.25 00:01:49.172 Installing symlink pointing to librte_gro.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:01:49.172 Installing symlink pointing to librte_gso.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.25 00:01:49.172 Installing symlink pointing to librte_gso.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:01:49.172 Installing symlink pointing to librte_ip_frag.so.25.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.25 00:01:49.172 Installing symlink pointing to librte_ip_frag.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:01:49.172 Installing symlink pointing to librte_jobstats.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.25 00:01:49.172 Installing symlink pointing to librte_jobstats.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:01:49.172 Installing symlink pointing to librte_latencystats.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.25 00:01:49.172 Installing symlink pointing to librte_latencystats.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:01:49.172 Installing symlink pointing to librte_lpm.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.25 00:01:49.172 Installing symlink pointing to librte_lpm.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:01:49.172 Installing symlink pointing to librte_member.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.25 00:01:49.172 Installing symlink pointing to librte_member.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:01:49.172 Installing symlink pointing to librte_pcapng.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.25 00:01:49.172 Installing symlink pointing to librte_pcapng.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:01:49.172 Installing symlink pointing to librte_power.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.25 00:01:49.172 Installing symlink pointing to librte_power.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:01:49.172 Installing symlink pointing to librte_rawdev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.25 00:01:49.172 Installing symlink pointing to librte_rawdev.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:01:49.172 Installing symlink pointing to librte_regexdev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.25 00:01:49.172 Installing symlink pointing to librte_regexdev.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:01:49.172 Installing symlink pointing to librte_mldev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so.25 00:01:49.172 Installing symlink pointing to librte_mldev.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so 00:01:49.172 Installing symlink pointing to librte_rib.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.25 00:01:49.172 Installing symlink pointing to librte_rib.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:01:49.172 Installing symlink pointing to librte_reorder.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.25 00:01:49.172 Installing symlink pointing to librte_reorder.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:01:49.172 Installing symlink pointing to librte_sched.so.25.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.25 00:01:49.172 Installing symlink pointing to librte_sched.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:01:49.172 Installing symlink pointing to librte_security.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.25 00:01:49.172 Installing symlink pointing to librte_security.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:01:49.172 Installing symlink pointing to librte_stack.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.25 00:01:49.172 Installing symlink pointing to librte_stack.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:01:49.172 Installing symlink pointing to librte_vhost.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.25 00:01:49.172 Installing symlink pointing to librte_vhost.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:01:49.172 Installing symlink pointing to librte_ipsec.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.25 00:01:49.172 Installing symlink pointing to librte_ipsec.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:01:49.172 Installing symlink pointing to librte_pdcp.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so.25 00:01:49.172 Installing symlink pointing to librte_pdcp.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so 00:01:49.172 Installing symlink pointing to librte_fib.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.25 00:01:49.172 Installing symlink pointing to librte_fib.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:01:49.172 Installing symlink pointing to librte_port.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.25 00:01:49.172 Installing symlink pointing to librte_port.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:01:49.173 Installing symlink pointing to librte_pdump.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.25 00:01:49.173 Installing symlink pointing to librte_pdump.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:01:49.173 Installing symlink pointing to librte_table.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.25 00:01:49.173 Installing symlink pointing to librte_table.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:01:49.173 Installing symlink pointing to librte_pipeline.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.25 00:01:49.173 Installing symlink pointing to librte_pipeline.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:01:49.173 Installing symlink pointing to librte_graph.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.25 00:01:49.173 Installing symlink pointing to librte_graph.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:01:49.173 Installing symlink pointing to librte_node.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.25 
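Each "Installing symlink" pair above builds the conventional three-level shared-library chain: a development link (librte_eal.so, used when linking with -lrte_eal) points at the soname link (librte_eal.so.25), which points at the real versioned file (librte_eal.so.25.0). A sketch of inspecting that layout with standard tools (readlink and readelf are not part of this job, only illustration):

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
    readlink librte_eal.so                            # -> librte_eal.so.25
    readlink librte_eal.so.25                         # -> librte_eal.so.25.0
    readelf -d librte_ethdev.so.25.0 | grep NEEDED    # lists librte_eal.so.25 among deps
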
00:01:49.173 Installing symlink pointing to librte_node.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:01:49.173 Installing symlink pointing to librte_bus_pci.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_pci.so.25 00:01:49.173 Installing symlink pointing to librte_bus_pci.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_pci.so 00:01:49.173 Installing symlink pointing to librte_bus_vdev.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_vdev.so.25 00:01:49.173 Installing symlink pointing to librte_bus_vdev.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_vdev.so 00:01:49.173 Installing symlink pointing to librte_mempool_ring.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0/librte_mempool_ring.so.25 00:01:49.173 './librte_bus_pci.so' -> 'dpdk/pmds-25.0/librte_bus_pci.so' 00:01:49.173 './librte_bus_pci.so.25' -> 'dpdk/pmds-25.0/librte_bus_pci.so.25' 00:01:49.173 './librte_bus_pci.so.25.0' -> 'dpdk/pmds-25.0/librte_bus_pci.so.25.0' 00:01:49.173 './librte_bus_vdev.so' -> 'dpdk/pmds-25.0/librte_bus_vdev.so' 00:01:49.173 './librte_bus_vdev.so.25' -> 'dpdk/pmds-25.0/librte_bus_vdev.so.25' 00:01:49.173 './librte_bus_vdev.so.25.0' -> 'dpdk/pmds-25.0/librte_bus_vdev.so.25.0' 00:01:49.173 './librte_mempool_ring.so' -> 'dpdk/pmds-25.0/librte_mempool_ring.so' 00:01:49.173 './librte_mempool_ring.so.25' -> 'dpdk/pmds-25.0/librte_mempool_ring.so.25' 00:01:49.173 './librte_mempool_ring.so.25.0' -> 'dpdk/pmds-25.0/librte_mempool_ring.so.25.0' 00:01:49.173 './librte_net_i40e.so' -> 'dpdk/pmds-25.0/librte_net_i40e.so' 00:01:49.173 './librte_net_i40e.so.25' -> 'dpdk/pmds-25.0/librte_net_i40e.so.25' 00:01:49.173 './librte_net_i40e.so.25.0' -> 'dpdk/pmds-25.0/librte_net_i40e.so.25.0' 00:01:49.173 Installing symlink pointing to librte_mempool_ring.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0/librte_mempool_ring.so 00:01:49.173 Installing symlink pointing to librte_net_i40e.so.25.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0/librte_net_i40e.so.25 00:01:49.173 Installing symlink pointing to librte_net_i40e.so.25 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0/librte_net_i40e.so 00:01:49.173 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-25.0' 00:01:49.173 10:42:09 build_native_dpdk -- common/autobuild_common.sh@213 -- $ cat 00:01:49.173 10:42:09 build_native_dpdk -- common/autobuild_common.sh@218 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:49.173 00:01:49.173 real 0m24.781s 00:01:49.173 user 7m19.981s 00:01:49.173 sys 2m48.334s 00:01:49.173 10:42:09 build_native_dpdk -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:01:49.173 10:42:09 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:01:49.173 ************************************ 00:01:49.173 END TEST build_native_dpdk 00:01:49.173 ************************************ 00:01:49.435 10:42:09 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:49.435 10:42:09 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:49.435 10:42:09 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:49.435 10:42:09 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:49.435 10:42:09 
-- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:49.435 10:42:09 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:49.435 10:42:09 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:49.435 10:42:09 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:01:49.435 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 00:01:49.697 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:49.697 DPDK includes: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:49.697 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:49.958 Using 'verbs' RDMA provider 00:02:05.815 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:18.062 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:18.062 Creating mk/config.mk...done. 00:02:18.062 Creating mk/cc.flags.mk...done. 00:02:18.062 Type 'make' to build. 00:02:18.062 10:42:37 -- spdk/autobuild.sh@70 -- $ run_test make make -j144 00:02:18.062 10:42:37 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:18.062 10:42:37 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:18.062 10:42:37 -- common/autotest_common.sh@10 -- $ set +x 00:02:18.062 ************************************ 00:02:18.062 START TEST make 00:02:18.062 ************************************ 00:02:18.062 10:42:37 make -- common/autotest_common.sh@1125 -- $ make -j144 00:02:18.322 make[1]: Nothing to be done for 'all'. 00:02:19.702 The Meson build system 00:02:19.702 Version: 1.5.0 00:02:19.702 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:02:19.702 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:19.702 Build type: native build 00:02:19.702 Project name: libvfio-user 00:02:19.702 Project version: 0.0.1 00:02:19.702 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:19.702 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:19.702 Host machine cpu family: x86_64 00:02:19.702 Host machine cpu: x86_64 00:02:19.702 Run-time dependency threads found: YES 00:02:19.702 Library dl found: YES 00:02:19.702 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:19.702 Run-time dependency json-c found: YES 0.17 00:02:19.702 Run-time dependency cmocka found: YES 1.1.7 00:02:19.702 Program pytest-3 found: NO 00:02:19.702 Program flake8 found: NO 00:02:19.702 Program misspell-fixer found: NO 00:02:19.702 Program restructuredtext-lint found: NO 00:02:19.702 Program valgrind found: YES (/usr/bin/valgrind) 00:02:19.702 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:19.702 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:19.702 Compiler for C supports arguments -Wwrite-strings: YES 00:02:19.702 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:02:19.703 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:02:19.703 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:02:19.703 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:19.703 Build targets in project: 8 00:02:19.703 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:19.703 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:19.703 00:02:19.703 libvfio-user 0.0.1 00:02:19.703 00:02:19.703 User defined options 00:02:19.703 buildtype : debug 00:02:19.703 default_library: shared 00:02:19.703 libdir : /usr/local/lib 00:02:19.703 00:02:19.703 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:19.703 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:19.962 [1/37] Compiling C object samples/lspci.p/lspci.c.o 00:02:19.962 [2/37] Compiling C object samples/null.p/null.c.o 00:02:19.962 [3/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:19.962 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:02:19.962 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:02:19.962 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:02:19.962 [7/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:19.962 [8/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:19.962 [9/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:19.962 [10/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:19.962 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:02:19.962 [12/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:19.962 [13/37] Compiling C object test/unit_tests.p/mocks.c.o 00:02:19.962 [14/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:19.962 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:02:19.962 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:02:19.962 [17/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:19.962 [18/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:19.962 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:02:19.962 [20/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:19.962 [21/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:19.962 [22/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:19.962 [23/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:19.962 [24/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:19.962 [25/37] Compiling C object samples/server.p/server.c.o 00:02:19.962 [26/37] Compiling C object samples/client.p/client.c.o 00:02:19.962 [27/37] Linking target samples/client 00:02:19.962 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:02:19.962 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:19.962 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:02:19.962 [31/37] Linking target test/unit_tests 00:02:20.224 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:02:20.224 [33/37] Linking target samples/server 00:02:20.224 
[34/37] Linking target samples/shadow_ioeventfd_server 00:02:20.224 [35/37] Linking target samples/null 00:02:20.224 [36/37] Linking target samples/gpio-pci-idio-16 00:02:20.224 [37/37] Linking target samples/lspci 00:02:20.224 INFO: autodetecting backend as ninja 00:02:20.224 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:20.224 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:20.797 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:20.797 ninja: no work to do. 00:02:47.394 CC lib/ut/ut.o 00:02:47.394 CC lib/ut_mock/mock.o 00:02:47.394 CC lib/log/log.o 00:02:47.394 CC lib/log/log_flags.o 00:02:47.394 CC lib/log/log_deprecated.o 00:02:47.394 LIB libspdk_ut.a 00:02:47.394 SO libspdk_ut.so.2.0 00:02:47.394 LIB libspdk_ut_mock.a 00:02:47.394 LIB libspdk_log.a 00:02:47.394 SYMLINK libspdk_ut.so 00:02:47.394 SO libspdk_ut_mock.so.6.0 00:02:47.394 SO libspdk_log.so.7.0 00:02:47.394 SYMLINK libspdk_ut_mock.so 00:02:47.394 SYMLINK libspdk_log.so 00:02:47.394 CC lib/util/base64.o 00:02:47.394 CC lib/util/bit_array.o 00:02:47.394 CC lib/util/cpuset.o 00:02:47.394 CC lib/util/crc16.o 00:02:47.394 CC lib/util/crc32.o 00:02:47.394 CC lib/util/crc32c.o 00:02:47.394 CC lib/util/crc32_ieee.o 00:02:47.394 CXX lib/trace_parser/trace.o 00:02:47.394 CC lib/util/crc64.o 00:02:47.394 CC lib/util/dif.o 00:02:47.394 CC lib/util/fd.o 00:02:47.394 CC lib/util/fd_group.o 00:02:47.394 CC lib/util/file.o 00:02:47.394 CC lib/util/hexlify.o 00:02:47.394 CC lib/util/iov.o 00:02:47.394 CC lib/util/math.o 00:02:47.394 CC lib/util/net.o 00:02:47.394 CC lib/util/pipe.o 00:02:47.394 CC lib/util/strerror_tls.o 00:02:47.394 CC lib/util/string.o 00:02:47.394 CC lib/util/uuid.o 00:02:47.394 CC lib/ioat/ioat.o 00:02:47.394 CC lib/util/xor.o 00:02:47.394 CC lib/util/zipf.o 00:02:47.394 CC lib/util/md5.o 00:02:47.394 CC lib/dma/dma.o 00:02:47.394 CC lib/vfio_user/host/vfio_user_pci.o 00:02:47.394 CC lib/vfio_user/host/vfio_user.o 00:02:47.394 LIB libspdk_dma.a 00:02:47.394 SO libspdk_dma.so.5.0 00:02:47.394 LIB libspdk_ioat.a 00:02:47.394 SO libspdk_ioat.so.7.0 00:02:47.394 SYMLINK libspdk_dma.so 00:02:47.394 SYMLINK libspdk_ioat.so 00:02:47.394 LIB libspdk_vfio_user.a 00:02:47.394 LIB libspdk_util.a 00:02:47.394 SO libspdk_vfio_user.so.5.0 00:02:47.394 SO libspdk_util.so.10.0 00:02:47.394 SYMLINK libspdk_vfio_user.so 00:02:47.394 SYMLINK libspdk_util.so 00:02:47.394 LIB libspdk_trace_parser.a 00:02:47.394 SO libspdk_trace_parser.so.6.0 00:02:47.394 SYMLINK libspdk_trace_parser.so 00:02:47.394 CC lib/idxd/idxd.o 00:02:47.394 CC lib/idxd/idxd_kernel.o 00:02:47.394 CC lib/idxd/idxd_user.o 00:02:47.394 CC lib/rdma_utils/rdma_utils.o 00:02:47.394 CC lib/conf/conf.o 00:02:47.394 CC lib/json/json_parse.o 00:02:47.394 CC lib/json/json_util.o 00:02:47.394 CC lib/vmd/vmd.o 00:02:47.394 CC lib/rdma_provider/common.o 00:02:47.394 CC lib/json/json_write.o 00:02:47.394 CC lib/vmd/led.o 00:02:47.395 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:47.395 CC lib/env_dpdk/env.o 00:02:47.395 CC lib/env_dpdk/memory.o 00:02:47.395 CC lib/env_dpdk/pci.o 00:02:47.395 CC lib/env_dpdk/init.o 00:02:47.395 CC lib/env_dpdk/threads.o 00:02:47.395 CC lib/env_dpdk/pci_ioat.o 00:02:47.395 CC lib/env_dpdk/pci_virtio.o 00:02:47.395 CC lib/env_dpdk/pci_vmd.o 
00:02:47.395 CC lib/env_dpdk/pci_idxd.o 00:02:47.395 CC lib/env_dpdk/pci_event.o 00:02:47.395 CC lib/env_dpdk/sigbus_handler.o 00:02:47.395 CC lib/env_dpdk/pci_dpdk.o 00:02:47.395 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:47.395 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:47.395 LIB libspdk_rdma_provider.a 00:02:47.395 LIB libspdk_conf.a 00:02:47.395 SO libspdk_rdma_provider.so.6.0 00:02:47.395 SO libspdk_conf.so.6.0 00:02:47.395 LIB libspdk_rdma_utils.a 00:02:47.395 SO libspdk_rdma_utils.so.1.0 00:02:47.395 LIB libspdk_json.a 00:02:47.395 SYMLINK libspdk_rdma_provider.so 00:02:47.395 SYMLINK libspdk_conf.so 00:02:47.395 SO libspdk_json.so.6.0 00:02:47.395 SYMLINK libspdk_rdma_utils.so 00:02:47.395 SYMLINK libspdk_json.so 00:02:47.395 LIB libspdk_idxd.a 00:02:47.395 SO libspdk_idxd.so.12.1 00:02:47.395 LIB libspdk_vmd.a 00:02:47.395 SYMLINK libspdk_idxd.so 00:02:47.395 SO libspdk_vmd.so.6.0 00:02:47.395 SYMLINK libspdk_vmd.so 00:02:47.395 CC lib/jsonrpc/jsonrpc_server.o 00:02:47.395 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:47.395 CC lib/jsonrpc/jsonrpc_client.o 00:02:47.395 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:47.395 LIB libspdk_jsonrpc.a 00:02:47.395 SO libspdk_jsonrpc.so.6.0 00:02:47.395 SYMLINK libspdk_jsonrpc.so 00:02:47.395 LIB libspdk_env_dpdk.a 00:02:47.395 SO libspdk_env_dpdk.so.15.0 00:02:47.395 SYMLINK libspdk_env_dpdk.so 00:02:47.395 CC lib/rpc/rpc.o 00:02:47.395 LIB libspdk_rpc.a 00:02:47.395 SO libspdk_rpc.so.6.0 00:02:47.395 SYMLINK libspdk_rpc.so 00:02:47.395 CC lib/notify/notify.o 00:02:47.395 CC lib/notify/notify_rpc.o 00:02:47.395 CC lib/trace/trace.o 00:02:47.395 CC lib/trace/trace_flags.o 00:02:47.395 CC lib/trace/trace_rpc.o 00:02:47.395 CC lib/keyring/keyring.o 00:02:47.395 CC lib/keyring/keyring_rpc.o 00:02:47.395 LIB libspdk_notify.a 00:02:47.395 SO libspdk_notify.so.6.0 00:02:47.395 LIB libspdk_keyring.a 00:02:47.395 LIB libspdk_trace.a 00:02:47.395 SYMLINK libspdk_notify.so 00:02:47.395 SO libspdk_keyring.so.2.0 00:02:47.395 SO libspdk_trace.so.11.0 00:02:47.395 SYMLINK libspdk_keyring.so 00:02:47.395 SYMLINK libspdk_trace.so 00:02:47.656 CC lib/sock/sock.o 00:02:47.656 CC lib/sock/sock_rpc.o 00:02:47.656 CC lib/thread/thread.o 00:02:47.656 CC lib/thread/iobuf.o 00:02:47.918 LIB libspdk_sock.a 00:02:47.918 SO libspdk_sock.so.10.0 00:02:48.179 SYMLINK libspdk_sock.so 00:02:48.441 CC lib/nvme/nvme_ctrlr.o 00:02:48.441 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:48.441 CC lib/nvme/nvme_ns_cmd.o 00:02:48.441 CC lib/nvme/nvme_fabric.o 00:02:48.441 CC lib/nvme/nvme_ns.o 00:02:48.441 CC lib/nvme/nvme_qpair.o 00:02:48.441 CC lib/nvme/nvme_pcie_common.o 00:02:48.441 CC lib/nvme/nvme_pcie.o 00:02:48.441 CC lib/nvme/nvme.o 00:02:48.441 CC lib/nvme/nvme_quirks.o 00:02:48.441 CC lib/nvme/nvme_transport.o 00:02:48.441 CC lib/nvme/nvme_discovery.o 00:02:48.441 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:48.441 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:48.441 CC lib/nvme/nvme_tcp.o 00:02:48.441 CC lib/nvme/nvme_opal.o 00:02:48.441 CC lib/nvme/nvme_io_msg.o 00:02:48.441 CC lib/nvme/nvme_poll_group.o 00:02:48.441 CC lib/nvme/nvme_zns.o 00:02:48.441 CC lib/nvme/nvme_stubs.o 00:02:48.441 CC lib/nvme/nvme_auth.o 00:02:48.441 CC lib/nvme/nvme_cuse.o 00:02:48.441 CC lib/nvme/nvme_vfio_user.o 00:02:48.441 CC lib/nvme/nvme_rdma.o 00:02:49.014 LIB libspdk_thread.a 00:02:49.014 SO libspdk_thread.so.10.2 00:02:49.014 SYMLINK libspdk_thread.so 00:02:49.274 CC lib/blob/blobstore.o 00:02:49.274 CC lib/blob/request.o 00:02:49.274 CC lib/init/json_config.o 00:02:49.274 CC lib/blob/blob_bs_dev.o 00:02:49.274 CC 
lib/init/subsystem.o 00:02:49.274 CC lib/blob/zeroes.o 00:02:49.274 CC lib/init/subsystem_rpc.o 00:02:49.274 CC lib/init/rpc.o 00:02:49.274 CC lib/accel/accel_rpc.o 00:02:49.274 CC lib/accel/accel.o 00:02:49.274 CC lib/accel/accel_sw.o 00:02:49.274 CC lib/vfu_tgt/tgt_endpoint.o 00:02:49.274 CC lib/vfu_tgt/tgt_rpc.o 00:02:49.274 CC lib/virtio/virtio.o 00:02:49.274 CC lib/fsdev/fsdev.o 00:02:49.274 CC lib/virtio/virtio_vhost_user.o 00:02:49.274 CC lib/virtio/virtio_vfio_user.o 00:02:49.274 CC lib/fsdev/fsdev_io.o 00:02:49.274 CC lib/fsdev/fsdev_rpc.o 00:02:49.274 CC lib/virtio/virtio_pci.o 00:02:49.533 LIB libspdk_init.a 00:02:49.533 SO libspdk_init.so.6.0 00:02:49.794 LIB libspdk_vfu_tgt.a 00:02:49.794 LIB libspdk_virtio.a 00:02:49.794 SO libspdk_vfu_tgt.so.3.0 00:02:49.794 SYMLINK libspdk_init.so 00:02:49.794 SO libspdk_virtio.so.7.0 00:02:49.794 SYMLINK libspdk_vfu_tgt.so 00:02:49.794 SYMLINK libspdk_virtio.so 00:02:50.056 LIB libspdk_fsdev.a 00:02:50.056 SO libspdk_fsdev.so.1.0 00:02:50.056 CC lib/event/app.o 00:02:50.056 CC lib/event/reactor.o 00:02:50.056 CC lib/event/log_rpc.o 00:02:50.056 CC lib/event/app_rpc.o 00:02:50.056 CC lib/event/scheduler_static.o 00:02:50.056 SYMLINK libspdk_fsdev.so 00:02:50.317 LIB libspdk_nvme.a 00:02:50.317 LIB libspdk_accel.a 00:02:50.317 SO libspdk_nvme.so.14.0 00:02:50.317 SO libspdk_accel.so.16.0 00:02:50.317 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:50.578 SYMLINK libspdk_accel.so 00:02:50.578 LIB libspdk_event.a 00:02:50.578 SO libspdk_event.so.15.0 00:02:50.578 SYMLINK libspdk_nvme.so 00:02:50.578 SYMLINK libspdk_event.so 00:02:50.840 CC lib/bdev/bdev.o 00:02:50.840 CC lib/bdev/bdev_rpc.o 00:02:50.840 CC lib/bdev/bdev_zone.o 00:02:50.840 CC lib/bdev/part.o 00:02:50.840 CC lib/bdev/scsi_nvme.o 00:02:51.102 LIB libspdk_fuse_dispatcher.a 00:02:51.102 SO libspdk_fuse_dispatcher.so.1.0 00:02:51.102 SYMLINK libspdk_fuse_dispatcher.so 00:02:52.046 LIB libspdk_blob.a 00:02:52.046 SO libspdk_blob.so.11.0 00:02:52.046 SYMLINK libspdk_blob.so 00:02:52.307 CC lib/lvol/lvol.o 00:02:52.307 CC lib/blobfs/blobfs.o 00:02:52.307 CC lib/blobfs/tree.o 00:02:53.251 LIB libspdk_bdev.a 00:02:53.251 LIB libspdk_blobfs.a 00:02:53.251 SO libspdk_bdev.so.17.0 00:02:53.251 SO libspdk_blobfs.so.10.0 00:02:53.251 SYMLINK libspdk_bdev.so 00:02:53.251 SYMLINK libspdk_blobfs.so 00:02:53.251 LIB libspdk_lvol.a 00:02:53.251 SO libspdk_lvol.so.10.0 00:02:53.251 SYMLINK libspdk_lvol.so 00:02:53.513 CC lib/scsi/dev.o 00:02:53.513 CC lib/scsi/port.o 00:02:53.513 CC lib/scsi/lun.o 00:02:53.513 CC lib/scsi/scsi_bdev.o 00:02:53.513 CC lib/scsi/scsi.o 00:02:53.513 CC lib/scsi/scsi_pr.o 00:02:53.513 CC lib/scsi/task.o 00:02:53.513 CC lib/scsi/scsi_rpc.o 00:02:53.513 CC lib/nvmf/ctrlr.o 00:02:53.513 CC lib/nvmf/ctrlr_discovery.o 00:02:53.513 CC lib/nvmf/ctrlr_bdev.o 00:02:53.513 CC lib/nvmf/subsystem.o 00:02:53.513 CC lib/nvmf/nvmf.o 00:02:53.513 CC lib/nvmf/nvmf_rpc.o 00:02:53.513 CC lib/nvmf/transport.o 00:02:53.513 CC lib/nvmf/tcp.o 00:02:53.513 CC lib/nvmf/stubs.o 00:02:53.513 CC lib/nvmf/mdns_server.o 00:02:53.513 CC lib/nvmf/vfio_user.o 00:02:53.513 CC lib/ftl/ftl_core.o 00:02:53.513 CC lib/nvmf/rdma.o 00:02:53.513 CC lib/ublk/ublk.o 00:02:53.513 CC lib/ftl/ftl_init.o 00:02:53.513 CC lib/ublk/ublk_rpc.o 00:02:53.513 CC lib/nvmf/auth.o 00:02:53.513 CC lib/ftl/ftl_layout.o 00:02:53.513 CC lib/ftl/ftl_debug.o 00:02:53.513 CC lib/ftl/ftl_io.o 00:02:53.513 CC lib/ftl/ftl_sb.o 00:02:53.513 CC lib/nbd/nbd.o 00:02:53.513 CC lib/ftl/ftl_l2p.o 00:02:53.513 CC lib/nbd/nbd_rpc.o 00:02:53.513 
CC lib/ftl/ftl_l2p_flat.o 00:02:53.513 CC lib/ftl/ftl_nv_cache.o 00:02:53.513 CC lib/ftl/ftl_band.o 00:02:53.513 CC lib/ftl/ftl_band_ops.o 00:02:53.513 CC lib/ftl/ftl_writer.o 00:02:53.513 CC lib/ftl/ftl_rq.o 00:02:53.513 CC lib/ftl/ftl_reloc.o 00:02:53.513 CC lib/ftl/ftl_l2p_cache.o 00:02:53.513 CC lib/ftl/ftl_p2l.o 00:02:53.513 CC lib/ftl/ftl_p2l_log.o 00:02:53.513 CC lib/ftl/mngt/ftl_mngt.o 00:02:53.513 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:53.513 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:53.513 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:53.513 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:53.513 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:53.513 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:53.513 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:53.513 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:53.513 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:53.513 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:53.513 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:53.513 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:53.513 CC lib/ftl/utils/ftl_conf.o 00:02:53.513 CC lib/ftl/utils/ftl_mempool.o 00:02:53.513 CC lib/ftl/utils/ftl_md.o 00:02:53.772 CC lib/ftl/utils/ftl_bitmap.o 00:02:53.772 CC lib/ftl/utils/ftl_property.o 00:02:53.772 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:53.772 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:53.772 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:53.772 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:53.772 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:53.772 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:53.772 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:53.772 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:53.772 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:53.772 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:53.772 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:53.772 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:53.772 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:53.772 CC lib/ftl/base/ftl_base_dev.o 00:02:53.772 CC lib/ftl/base/ftl_base_bdev.o 00:02:53.772 CC lib/ftl/ftl_trace.o 00:02:54.032 LIB libspdk_nbd.a 00:02:54.032 SO libspdk_nbd.so.7.0 00:02:54.292 SYMLINK libspdk_nbd.so 00:02:54.292 LIB libspdk_scsi.a 00:02:54.292 LIB libspdk_ublk.a 00:02:54.292 SO libspdk_ublk.so.3.0 00:02:54.292 SO libspdk_scsi.so.9.0 00:02:54.552 SYMLINK libspdk_ublk.so 00:02:54.552 SYMLINK libspdk_scsi.so 00:02:54.552 LIB libspdk_ftl.a 00:02:54.813 SO libspdk_ftl.so.9.0 00:02:54.813 CC lib/iscsi/conn.o 00:02:54.813 CC lib/iscsi/init_grp.o 00:02:54.813 CC lib/iscsi/iscsi.o 00:02:54.813 CC lib/vhost/vhost.o 00:02:54.813 CC lib/iscsi/param.o 00:02:54.813 CC lib/vhost/vhost_rpc.o 00:02:54.813 CC lib/vhost/vhost_scsi.o 00:02:54.813 CC lib/iscsi/portal_grp.o 00:02:54.813 CC lib/iscsi/tgt_node.o 00:02:54.813 CC lib/vhost/vhost_blk.o 00:02:54.813 CC lib/iscsi/iscsi_rpc.o 00:02:54.813 CC lib/vhost/rte_vhost_user.o 00:02:54.813 CC lib/iscsi/iscsi_subsystem.o 00:02:54.813 CC lib/iscsi/task.o 00:02:55.074 SYMLINK libspdk_ftl.so 00:02:55.647 LIB libspdk_nvmf.a 00:02:55.647 SO libspdk_nvmf.so.19.0 00:02:55.647 SYMLINK libspdk_nvmf.so 00:02:55.908 LIB libspdk_vhost.a 00:02:55.908 SO libspdk_vhost.so.8.0 00:02:55.908 SYMLINK libspdk_vhost.so 00:02:55.908 LIB libspdk_iscsi.a 00:02:56.169 SO libspdk_iscsi.so.8.0 00:02:56.169 SYMLINK libspdk_iscsi.so 00:02:56.743 CC module/vfu_device/vfu_virtio.o 00:02:56.743 CC module/vfu_device/vfu_virtio_blk.o 00:02:56.743 CC module/vfu_device/vfu_virtio_scsi.o 00:02:56.743 CC module/vfu_device/vfu_virtio_rpc.o 00:02:56.743 CC module/vfu_device/vfu_virtio_fs.o 00:02:56.743 CC module/env_dpdk/env_dpdk_rpc.o 00:02:57.005 CC module/sock/posix/posix.o 00:02:57.005 LIB 
libspdk_env_dpdk_rpc.a 00:02:57.005 CC module/fsdev/aio/fsdev_aio.o 00:02:57.005 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:57.005 CC module/fsdev/aio/linux_aio_mgr.o 00:02:57.005 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:57.005 CC module/blob/bdev/blob_bdev.o 00:02:57.005 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:57.005 CC module/accel/ioat/accel_ioat_rpc.o 00:02:57.005 CC module/accel/iaa/accel_iaa.o 00:02:57.005 CC module/accel/ioat/accel_ioat.o 00:02:57.005 CC module/keyring/linux/keyring.o 00:02:57.005 CC module/keyring/linux/keyring_rpc.o 00:02:57.005 CC module/accel/iaa/accel_iaa_rpc.o 00:02:57.005 CC module/accel/dsa/accel_dsa.o 00:02:57.005 CC module/accel/dsa/accel_dsa_rpc.o 00:02:57.005 CC module/keyring/file/keyring.o 00:02:57.005 CC module/accel/error/accel_error.o 00:02:57.005 CC module/keyring/file/keyring_rpc.o 00:02:57.005 CC module/accel/error/accel_error_rpc.o 00:02:57.005 CC module/scheduler/gscheduler/gscheduler.o 00:02:57.005 SO libspdk_env_dpdk_rpc.so.6.0 00:02:57.005 SYMLINK libspdk_env_dpdk_rpc.so 00:02:57.267 LIB libspdk_scheduler_dpdk_governor.a 00:02:57.267 LIB libspdk_keyring_file.a 00:02:57.267 LIB libspdk_keyring_linux.a 00:02:57.267 LIB libspdk_scheduler_gscheduler.a 00:02:57.267 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:57.267 LIB libspdk_accel_ioat.a 00:02:57.267 LIB libspdk_scheduler_dynamic.a 00:02:57.267 SO libspdk_keyring_linux.so.1.0 00:02:57.267 SO libspdk_keyring_file.so.2.0 00:02:57.267 LIB libspdk_accel_iaa.a 00:02:57.267 SO libspdk_scheduler_gscheduler.so.4.0 00:02:57.267 LIB libspdk_accel_error.a 00:02:57.267 SO libspdk_accel_ioat.so.6.0 00:02:57.267 SO libspdk_scheduler_dynamic.so.4.0 00:02:57.267 SO libspdk_accel_iaa.so.3.0 00:02:57.267 SO libspdk_accel_error.so.2.0 00:02:57.267 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:57.267 SYMLINK libspdk_keyring_file.so 00:02:57.267 SYMLINK libspdk_keyring_linux.so 00:02:57.267 LIB libspdk_blob_bdev.a 00:02:57.267 SYMLINK libspdk_scheduler_gscheduler.so 00:02:57.267 LIB libspdk_accel_dsa.a 00:02:57.267 SYMLINK libspdk_accel_iaa.so 00:02:57.267 SYMLINK libspdk_scheduler_dynamic.so 00:02:57.267 SO libspdk_blob_bdev.so.11.0 00:02:57.267 SYMLINK libspdk_accel_ioat.so 00:02:57.267 SYMLINK libspdk_accel_error.so 00:02:57.267 SO libspdk_accel_dsa.so.5.0 00:02:57.527 SYMLINK libspdk_blob_bdev.so 00:02:57.527 LIB libspdk_vfu_device.a 00:02:57.527 SO libspdk_vfu_device.so.3.0 00:02:57.527 SYMLINK libspdk_accel_dsa.so 00:02:57.527 SYMLINK libspdk_vfu_device.so 00:02:57.527 LIB libspdk_fsdev_aio.a 00:02:57.527 SO libspdk_fsdev_aio.so.1.0 00:02:57.527 LIB libspdk_sock_posix.a 00:02:57.789 SYMLINK libspdk_fsdev_aio.so 00:02:57.789 SO libspdk_sock_posix.so.6.0 00:02:57.789 SYMLINK libspdk_sock_posix.so 00:02:58.049 CC module/bdev/error/vbdev_error.o 00:02:58.049 CC module/bdev/error/vbdev_error_rpc.o 00:02:58.049 CC module/bdev/gpt/vbdev_gpt.o 00:02:58.049 CC module/bdev/gpt/gpt.o 00:02:58.049 CC module/bdev/null/bdev_null.o 00:02:58.049 CC module/bdev/null/bdev_null_rpc.o 00:02:58.049 CC module/blobfs/bdev/blobfs_bdev.o 00:02:58.049 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:58.049 CC module/bdev/delay/vbdev_delay.o 00:02:58.049 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:58.049 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:58.049 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:58.049 CC module/bdev/raid/bdev_raid.o 00:02:58.049 CC module/bdev/nvme/bdev_nvme.o 00:02:58.049 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:58.049 CC module/bdev/passthru/vbdev_passthru.o 
00:02:58.049 CC module/bdev/nvme/nvme_rpc.o 00:02:58.049 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:58.049 CC module/bdev/raid/bdev_raid_rpc.o 00:02:58.049 CC module/bdev/nvme/bdev_mdns_client.o 00:02:58.049 CC module/bdev/raid/bdev_raid_sb.o 00:02:58.049 CC module/bdev/ftl/bdev_ftl.o 00:02:58.049 CC module/bdev/nvme/vbdev_opal.o 00:02:58.049 CC module/bdev/raid/raid0.o 00:02:58.049 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:58.049 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:58.049 CC module/bdev/aio/bdev_aio.o 00:02:58.049 CC module/bdev/raid/raid1.o 00:02:58.049 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:58.049 CC module/bdev/iscsi/bdev_iscsi.o 00:02:58.049 CC module/bdev/aio/bdev_aio_rpc.o 00:02:58.049 CC module/bdev/raid/concat.o 00:02:58.049 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:58.049 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:58.049 CC module/bdev/split/vbdev_split.o 00:02:58.049 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:58.049 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:58.049 CC module/bdev/malloc/bdev_malloc.o 00:02:58.049 CC module/bdev/split/vbdev_split_rpc.o 00:02:58.049 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:58.049 CC module/bdev/lvol/vbdev_lvol.o 00:02:58.049 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:58.049 LIB libspdk_blobfs_bdev.a 00:02:58.308 SO libspdk_blobfs_bdev.so.6.0 00:02:58.308 LIB libspdk_bdev_null.a 00:02:58.308 SO libspdk_bdev_null.so.6.0 00:02:58.308 SYMLINK libspdk_blobfs_bdev.so 00:02:58.308 LIB libspdk_bdev_zone_block.a 00:02:58.308 LIB libspdk_bdev_error.a 00:02:58.308 LIB libspdk_bdev_split.a 00:02:58.308 LIB libspdk_bdev_gpt.a 00:02:58.308 SYMLINK libspdk_bdev_null.so 00:02:58.308 LIB libspdk_bdev_passthru.a 00:02:58.308 SO libspdk_bdev_zone_block.so.6.0 00:02:58.308 SO libspdk_bdev_gpt.so.6.0 00:02:58.308 SO libspdk_bdev_error.so.6.0 00:02:58.308 SO libspdk_bdev_passthru.so.6.0 00:02:58.308 SO libspdk_bdev_split.so.6.0 00:02:58.308 LIB libspdk_bdev_ftl.a 00:02:58.308 SO libspdk_bdev_ftl.so.6.0 00:02:58.308 SYMLINK libspdk_bdev_zone_block.so 00:02:58.308 SYMLINK libspdk_bdev_split.so 00:02:58.308 LIB libspdk_bdev_delay.a 00:02:58.308 LIB libspdk_bdev_aio.a 00:02:58.308 SYMLINK libspdk_bdev_error.so 00:02:58.308 SYMLINK libspdk_bdev_gpt.so 00:02:58.308 SYMLINK libspdk_bdev_passthru.so 00:02:58.308 LIB libspdk_bdev_iscsi.a 00:02:58.308 LIB libspdk_bdev_malloc.a 00:02:58.308 SO libspdk_bdev_aio.so.6.0 00:02:58.308 SO libspdk_bdev_delay.so.6.0 00:02:58.308 SO libspdk_bdev_iscsi.so.6.0 00:02:58.308 SYMLINK libspdk_bdev_ftl.so 00:02:58.308 SO libspdk_bdev_malloc.so.6.0 00:02:58.568 SYMLINK libspdk_bdev_aio.so 00:02:58.568 SYMLINK libspdk_bdev_delay.so 00:02:58.568 LIB libspdk_bdev_virtio.a 00:02:58.568 SYMLINK libspdk_bdev_iscsi.so 00:02:58.568 LIB libspdk_bdev_lvol.a 00:02:58.568 SYMLINK libspdk_bdev_malloc.so 00:02:58.568 SO libspdk_bdev_virtio.so.6.0 00:02:58.568 SO libspdk_bdev_lvol.so.6.0 00:02:58.568 SYMLINK libspdk_bdev_lvol.so 00:02:58.568 SYMLINK libspdk_bdev_virtio.so 00:02:58.829 LIB libspdk_bdev_raid.a 00:02:58.829 SO libspdk_bdev_raid.so.6.0 00:02:59.090 SYMLINK libspdk_bdev_raid.so 00:03:00.032 LIB libspdk_bdev_nvme.a 00:03:00.032 SO libspdk_bdev_nvme.so.7.0 00:03:00.032 SYMLINK libspdk_bdev_nvme.so 00:03:00.975 CC module/event/subsystems/iobuf/iobuf.o 00:03:00.975 CC module/event/subsystems/vmd/vmd.o 00:03:00.975 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:00.975 CC module/event/subsystems/keyring/keyring.o 00:03:00.975 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:00.975 CC 
module/event/subsystems/vhost_blk/vhost_blk.o 00:03:00.975 CC module/event/subsystems/fsdev/fsdev.o 00:03:00.975 CC module/event/subsystems/sock/sock.o 00:03:00.975 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:00.975 CC module/event/subsystems/scheduler/scheduler.o 00:03:00.975 LIB libspdk_event_vhost_blk.a 00:03:00.975 LIB libspdk_event_keyring.a 00:03:00.975 LIB libspdk_event_fsdev.a 00:03:00.975 LIB libspdk_event_sock.a 00:03:00.975 LIB libspdk_event_scheduler.a 00:03:00.975 LIB libspdk_event_vmd.a 00:03:00.975 LIB libspdk_event_vfu_tgt.a 00:03:00.975 LIB libspdk_event_iobuf.a 00:03:00.975 SO libspdk_event_keyring.so.1.0 00:03:00.975 SO libspdk_event_vhost_blk.so.3.0 00:03:00.975 SO libspdk_event_sock.so.5.0 00:03:00.975 SO libspdk_event_scheduler.so.4.0 00:03:00.975 SO libspdk_event_vfu_tgt.so.3.0 00:03:00.975 SO libspdk_event_fsdev.so.1.0 00:03:00.975 SO libspdk_event_vmd.so.6.0 00:03:00.975 SO libspdk_event_iobuf.so.3.0 00:03:01.237 SYMLINK libspdk_event_keyring.so 00:03:01.237 SYMLINK libspdk_event_vhost_blk.so 00:03:01.237 SYMLINK libspdk_event_sock.so 00:03:01.237 SYMLINK libspdk_event_scheduler.so 00:03:01.237 SYMLINK libspdk_event_vfu_tgt.so 00:03:01.237 SYMLINK libspdk_event_fsdev.so 00:03:01.237 SYMLINK libspdk_event_vmd.so 00:03:01.237 SYMLINK libspdk_event_iobuf.so 00:03:01.498 CC module/event/subsystems/accel/accel.o 00:03:01.758 LIB libspdk_event_accel.a 00:03:01.759 SO libspdk_event_accel.so.6.0 00:03:01.759 SYMLINK libspdk_event_accel.so 00:03:02.019 CC module/event/subsystems/bdev/bdev.o 00:03:02.280 LIB libspdk_event_bdev.a 00:03:02.280 SO libspdk_event_bdev.so.6.0 00:03:02.280 SYMLINK libspdk_event_bdev.so 00:03:02.854 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:02.854 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:02.854 CC module/event/subsystems/nbd/nbd.o 00:03:02.854 CC module/event/subsystems/ublk/ublk.o 00:03:02.854 CC module/event/subsystems/scsi/scsi.o 00:03:02.854 LIB libspdk_event_ublk.a 00:03:02.854 LIB libspdk_event_nbd.a 00:03:02.854 LIB libspdk_event_scsi.a 00:03:02.854 SO libspdk_event_ublk.so.3.0 00:03:02.854 SO libspdk_event_nbd.so.6.0 00:03:03.115 SO libspdk_event_scsi.so.6.0 00:03:03.115 LIB libspdk_event_nvmf.a 00:03:03.115 SYMLINK libspdk_event_ublk.so 00:03:03.115 SYMLINK libspdk_event_nbd.so 00:03:03.115 SO libspdk_event_nvmf.so.6.0 00:03:03.115 SYMLINK libspdk_event_scsi.so 00:03:03.115 SYMLINK libspdk_event_nvmf.so 00:03:03.376 CC module/event/subsystems/iscsi/iscsi.o 00:03:03.376 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:03.638 LIB libspdk_event_vhost_scsi.a 00:03:03.638 LIB libspdk_event_iscsi.a 00:03:03.638 SO libspdk_event_vhost_scsi.so.3.0 00:03:03.638 SO libspdk_event_iscsi.so.6.0 00:03:03.638 SYMLINK libspdk_event_vhost_scsi.so 00:03:03.638 SYMLINK libspdk_event_iscsi.so 00:03:03.899 SO libspdk.so.6.0 00:03:03.899 SYMLINK libspdk.so 00:03:04.160 CC test/rpc_client/rpc_client_test.o 00:03:04.160 CXX app/trace/trace.o 00:03:04.423 TEST_HEADER include/spdk/accel_module.h 00:03:04.423 TEST_HEADER include/spdk/accel.h 00:03:04.423 TEST_HEADER include/spdk/assert.h 00:03:04.423 TEST_HEADER include/spdk/barrier.h 00:03:04.423 TEST_HEADER include/spdk/base64.h 00:03:04.423 TEST_HEADER include/spdk/bdev.h 00:03:04.423 TEST_HEADER include/spdk/bdev_module.h 00:03:04.423 CC app/trace_record/trace_record.o 00:03:04.423 TEST_HEADER include/spdk/bdev_zone.h 00:03:04.423 TEST_HEADER include/spdk/bit_array.h 00:03:04.423 TEST_HEADER include/spdk/bit_pool.h 00:03:04.423 TEST_HEADER include/spdk/blob_bdev.h 00:03:04.423 CC 
app/spdk_top/spdk_top.o 00:03:04.423 CC app/spdk_nvme_discover/discovery_aer.o 00:03:04.423 TEST_HEADER include/spdk/blobfs.h 00:03:04.423 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:04.423 CC app/spdk_lspci/spdk_lspci.o 00:03:04.423 TEST_HEADER include/spdk/conf.h 00:03:04.423 TEST_HEADER include/spdk/blob.h 00:03:04.423 TEST_HEADER include/spdk/config.h 00:03:04.423 TEST_HEADER include/spdk/cpuset.h 00:03:04.423 CC app/spdk_nvme_perf/perf.o 00:03:04.423 TEST_HEADER include/spdk/crc16.h 00:03:04.423 TEST_HEADER include/spdk/crc32.h 00:03:04.423 TEST_HEADER include/spdk/crc64.h 00:03:04.423 CC app/spdk_nvme_identify/identify.o 00:03:04.423 TEST_HEADER include/spdk/dif.h 00:03:04.423 TEST_HEADER include/spdk/dma.h 00:03:04.423 TEST_HEADER include/spdk/endian.h 00:03:04.423 TEST_HEADER include/spdk/env_dpdk.h 00:03:04.423 TEST_HEADER include/spdk/env.h 00:03:04.423 TEST_HEADER include/spdk/event.h 00:03:04.423 TEST_HEADER include/spdk/fd_group.h 00:03:04.423 TEST_HEADER include/spdk/fd.h 00:03:04.423 TEST_HEADER include/spdk/file.h 00:03:04.423 TEST_HEADER include/spdk/fsdev.h 00:03:04.423 TEST_HEADER include/spdk/fsdev_module.h 00:03:04.423 TEST_HEADER include/spdk/ftl.h 00:03:04.423 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:04.423 TEST_HEADER include/spdk/gpt_spec.h 00:03:04.423 TEST_HEADER include/spdk/hexlify.h 00:03:04.423 TEST_HEADER include/spdk/histogram_data.h 00:03:04.423 TEST_HEADER include/spdk/idxd.h 00:03:04.423 TEST_HEADER include/spdk/init.h 00:03:04.423 TEST_HEADER include/spdk/idxd_spec.h 00:03:04.423 TEST_HEADER include/spdk/ioat.h 00:03:04.423 TEST_HEADER include/spdk/ioat_spec.h 00:03:04.423 TEST_HEADER include/spdk/iscsi_spec.h 00:03:04.423 TEST_HEADER include/spdk/json.h 00:03:04.423 TEST_HEADER include/spdk/jsonrpc.h 00:03:04.423 TEST_HEADER include/spdk/keyring.h 00:03:04.423 CC app/nvmf_tgt/nvmf_main.o 00:03:04.423 TEST_HEADER include/spdk/keyring_module.h 00:03:04.423 CC app/iscsi_tgt/iscsi_tgt.o 00:03:04.423 TEST_HEADER include/spdk/likely.h 00:03:04.423 TEST_HEADER include/spdk/log.h 00:03:04.423 TEST_HEADER include/spdk/lvol.h 00:03:04.423 TEST_HEADER include/spdk/md5.h 00:03:04.423 TEST_HEADER include/spdk/mmio.h 00:03:04.423 TEST_HEADER include/spdk/memory.h 00:03:04.423 TEST_HEADER include/spdk/nbd.h 00:03:04.423 TEST_HEADER include/spdk/net.h 00:03:04.423 CC app/spdk_dd/spdk_dd.o 00:03:04.423 TEST_HEADER include/spdk/nvme.h 00:03:04.423 TEST_HEADER include/spdk/notify.h 00:03:04.423 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:04.423 TEST_HEADER include/spdk/nvme_intel.h 00:03:04.423 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:04.423 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:04.423 TEST_HEADER include/spdk/nvme_zns.h 00:03:04.423 TEST_HEADER include/spdk/nvme_spec.h 00:03:04.423 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:04.423 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:04.423 TEST_HEADER include/spdk/nvmf.h 00:03:04.423 TEST_HEADER include/spdk/nvmf_spec.h 00:03:04.423 TEST_HEADER include/spdk/nvmf_transport.h 00:03:04.423 TEST_HEADER include/spdk/pci_ids.h 00:03:04.423 TEST_HEADER include/spdk/opal.h 00:03:04.423 TEST_HEADER include/spdk/opal_spec.h 00:03:04.423 TEST_HEADER include/spdk/reduce.h 00:03:04.423 TEST_HEADER include/spdk/pipe.h 00:03:04.423 TEST_HEADER include/spdk/queue.h 00:03:04.423 TEST_HEADER include/spdk/rpc.h 00:03:04.423 CC app/spdk_tgt/spdk_tgt.o 00:03:04.423 TEST_HEADER include/spdk/scheduler.h 00:03:04.423 TEST_HEADER include/spdk/scsi_spec.h 00:03:04.423 TEST_HEADER include/spdk/scsi.h 00:03:04.423 TEST_HEADER 
include/spdk/stdinc.h 00:03:04.423 TEST_HEADER include/spdk/sock.h 00:03:04.423 TEST_HEADER include/spdk/string.h 00:03:04.423 TEST_HEADER include/spdk/thread.h 00:03:04.423 TEST_HEADER include/spdk/trace.h 00:03:04.423 TEST_HEADER include/spdk/trace_parser.h 00:03:04.423 TEST_HEADER include/spdk/tree.h 00:03:04.423 TEST_HEADER include/spdk/ublk.h 00:03:04.423 TEST_HEADER include/spdk/util.h 00:03:04.423 TEST_HEADER include/spdk/uuid.h 00:03:04.423 TEST_HEADER include/spdk/version.h 00:03:04.423 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:04.423 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:04.423 TEST_HEADER include/spdk/vmd.h 00:03:04.423 TEST_HEADER include/spdk/vhost.h 00:03:04.423 TEST_HEADER include/spdk/xor.h 00:03:04.423 TEST_HEADER include/spdk/zipf.h 00:03:04.423 CXX test/cpp_headers/accel.o 00:03:04.423 CXX test/cpp_headers/accel_module.o 00:03:04.423 CXX test/cpp_headers/assert.o 00:03:04.423 CXX test/cpp_headers/barrier.o 00:03:04.423 CXX test/cpp_headers/base64.o 00:03:04.423 CXX test/cpp_headers/bdev.o 00:03:04.423 CXX test/cpp_headers/bdev_module.o 00:03:04.423 CXX test/cpp_headers/bdev_zone.o 00:03:04.423 CXX test/cpp_headers/bit_array.o 00:03:04.423 CXX test/cpp_headers/bit_pool.o 00:03:04.423 CXX test/cpp_headers/blob_bdev.o 00:03:04.423 CXX test/cpp_headers/blobfs.o 00:03:04.423 CXX test/cpp_headers/blobfs_bdev.o 00:03:04.423 CXX test/cpp_headers/blob.o 00:03:04.423 CXX test/cpp_headers/conf.o 00:03:04.423 CXX test/cpp_headers/config.o 00:03:04.423 CXX test/cpp_headers/crc16.o 00:03:04.423 CXX test/cpp_headers/cpuset.o 00:03:04.423 CXX test/cpp_headers/dif.o 00:03:04.423 CXX test/cpp_headers/crc64.o 00:03:04.423 CXX test/cpp_headers/crc32.o 00:03:04.423 CXX test/cpp_headers/dma.o 00:03:04.423 CXX test/cpp_headers/endian.o 00:03:04.423 CXX test/cpp_headers/env_dpdk.o 00:03:04.423 CXX test/cpp_headers/env.o 00:03:04.423 CXX test/cpp_headers/event.o 00:03:04.423 CXX test/cpp_headers/fd_group.o 00:03:04.423 CXX test/cpp_headers/fd.o 00:03:04.423 CXX test/cpp_headers/fsdev_module.o 00:03:04.423 CXX test/cpp_headers/file.o 00:03:04.423 CXX test/cpp_headers/fsdev.o 00:03:04.423 CXX test/cpp_headers/fuse_dispatcher.o 00:03:04.423 CXX test/cpp_headers/ftl.o 00:03:04.423 CXX test/cpp_headers/gpt_spec.o 00:03:04.423 CXX test/cpp_headers/hexlify.o 00:03:04.423 CXX test/cpp_headers/histogram_data.o 00:03:04.423 CXX test/cpp_headers/idxd.o 00:03:04.423 CXX test/cpp_headers/ioat.o 00:03:04.423 CXX test/cpp_headers/idxd_spec.o 00:03:04.423 CXX test/cpp_headers/ioat_spec.o 00:03:04.423 CXX test/cpp_headers/init.o 00:03:04.423 CXX test/cpp_headers/jsonrpc.o 00:03:04.423 CXX test/cpp_headers/iscsi_spec.o 00:03:04.423 CXX test/cpp_headers/json.o 00:03:04.423 CXX test/cpp_headers/keyring.o 00:03:04.423 CXX test/cpp_headers/likely.o 00:03:04.423 CXX test/cpp_headers/keyring_module.o 00:03:04.423 CXX test/cpp_headers/lvol.o 00:03:04.423 CXX test/cpp_headers/log.o 00:03:04.423 CXX test/cpp_headers/memory.o 00:03:04.423 CXX test/cpp_headers/md5.o 00:03:04.423 CXX test/cpp_headers/mmio.o 00:03:04.423 CC test/thread/poller_perf/poller_perf.o 00:03:04.423 CXX test/cpp_headers/net.o 00:03:04.423 CXX test/cpp_headers/nbd.o 00:03:04.423 CXX test/cpp_headers/notify.o 00:03:04.423 CXX test/cpp_headers/nvme.o 00:03:04.423 CXX test/cpp_headers/nvme_spec.o 00:03:04.423 CXX test/cpp_headers/nvme_ocssd.o 00:03:04.423 CXX test/cpp_headers/nvme_intel.o 00:03:04.423 CXX test/cpp_headers/nvmf_cmd.o 00:03:04.423 CXX test/cpp_headers/nvme_zns.o 00:03:04.423 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:04.423 
CXX test/cpp_headers/nvmf_fc_spec.o 00:03:04.423 CXX test/cpp_headers/nvmf.o 00:03:04.423 CXX test/cpp_headers/nvmf_spec.o 00:03:04.423 CXX test/cpp_headers/nvmf_transport.o 00:03:04.423 CXX test/cpp_headers/opal_spec.o 00:03:04.423 CXX test/cpp_headers/opal.o 00:03:04.423 CC examples/ioat/perf/perf.o 00:03:04.423 CXX test/cpp_headers/pci_ids.o 00:03:04.423 CXX test/cpp_headers/queue.o 00:03:04.423 CC test/app/jsoncat/jsoncat.o 00:03:04.423 CXX test/cpp_headers/pipe.o 00:03:04.423 CC test/app/stub/stub.o 00:03:04.423 CC examples/ioat/verify/verify.o 00:03:04.423 CXX test/cpp_headers/rpc.o 00:03:04.423 CXX test/cpp_headers/scsi_spec.o 00:03:04.423 CXX test/cpp_headers/reduce.o 00:03:04.423 CXX test/cpp_headers/scheduler.o 00:03:04.423 CXX test/cpp_headers/scsi.o 00:03:04.423 CXX test/cpp_headers/string.o 00:03:04.423 CXX test/cpp_headers/sock.o 00:03:04.423 CXX test/cpp_headers/stdinc.o 00:03:04.687 CXX test/cpp_headers/thread.o 00:03:04.687 CXX test/cpp_headers/trace.o 00:03:04.687 CC test/app/histogram_perf/histogram_perf.o 00:03:04.687 CXX test/cpp_headers/trace_parser.o 00:03:04.687 CXX test/cpp_headers/tree.o 00:03:04.687 CXX test/cpp_headers/ublk.o 00:03:04.687 CXX test/cpp_headers/uuid.o 00:03:04.687 CXX test/cpp_headers/util.o 00:03:04.687 CXX test/cpp_headers/version.o 00:03:04.687 CC test/env/pci/pci_ut.o 00:03:04.687 CXX test/cpp_headers/vhost.o 00:03:04.687 CC test/env/vtophys/vtophys.o 00:03:04.687 CXX test/cpp_headers/vfio_user_pci.o 00:03:04.687 CXX test/cpp_headers/vfio_user_spec.o 00:03:04.687 CXX test/cpp_headers/vmd.o 00:03:04.687 CXX test/cpp_headers/zipf.o 00:03:04.687 CXX test/cpp_headers/xor.o 00:03:04.687 CC test/env/memory/memory_ut.o 00:03:04.687 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:04.687 LINK spdk_lspci 00:03:04.687 CC test/dma/test_dma/test_dma.o 00:03:04.687 CC examples/util/zipf/zipf.o 00:03:04.687 CC app/fio/nvme/fio_plugin.o 00:03:04.687 CC test/app/bdev_svc/bdev_svc.o 00:03:04.687 LINK rpc_client_test 00:03:04.687 CC app/fio/bdev/fio_plugin.o 00:03:04.687 LINK spdk_nvme_discover 00:03:04.687 LINK nvmf_tgt 00:03:04.947 LINK interrupt_tgt 00:03:04.947 LINK spdk_trace_record 00:03:04.947 LINK spdk_tgt 00:03:04.947 LINK iscsi_tgt 00:03:04.947 CC test/env/mem_callbacks/mem_callbacks.o 00:03:04.947 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:04.947 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:04.947 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:04.947 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:04.947 LINK spdk_trace 00:03:04.947 LINK stub 00:03:05.206 LINK poller_perf 00:03:05.206 LINK jsoncat 00:03:05.206 LINK vtophys 00:03:05.206 LINK histogram_perf 00:03:05.206 LINK spdk_dd 00:03:05.206 LINK bdev_svc 00:03:05.206 LINK zipf 00:03:05.206 LINK env_dpdk_post_init 00:03:05.206 LINK verify 00:03:05.206 LINK ioat_perf 00:03:05.465 CC app/vhost/vhost.o 00:03:05.465 LINK vhost_fuzz 00:03:05.465 LINK pci_ut 00:03:05.465 LINK spdk_nvme_identify 00:03:05.465 CC test/event/reactor/reactor.o 00:03:05.465 CC test/event/event_perf/event_perf.o 00:03:05.465 CC test/event/reactor_perf/reactor_perf.o 00:03:05.465 LINK test_dma 00:03:05.465 LINK spdk_bdev 00:03:05.465 LINK nvme_fuzz 00:03:05.465 CC test/event/app_repeat/app_repeat.o 00:03:05.465 LINK spdk_nvme 00:03:05.465 CC test/event/scheduler/scheduler.o 00:03:05.725 LINK vhost 00:03:05.725 LINK spdk_nvme_perf 00:03:05.725 LINK spdk_top 00:03:05.725 CC examples/idxd/perf/perf.o 00:03:05.725 CC examples/vmd/led/led.o 00:03:05.725 CC examples/vmd/lsvmd/lsvmd.o 00:03:05.725 CC 
examples/sock/hello_world/hello_sock.o 00:03:05.725 LINK reactor_perf 00:03:05.725 LINK event_perf 00:03:05.725 LINK mem_callbacks 00:03:05.725 LINK reactor 00:03:05.725 CC examples/thread/thread/thread_ex.o 00:03:05.725 LINK app_repeat 00:03:05.725 LINK scheduler 00:03:05.984 LINK led 00:03:05.984 LINK lsvmd 00:03:05.984 LINK hello_sock 00:03:05.984 LINK idxd_perf 00:03:05.984 LINK thread 00:03:05.984 CC test/nvme/reserve/reserve.o 00:03:05.984 LINK memory_ut 00:03:05.984 CC test/nvme/compliance/nvme_compliance.o 00:03:05.984 CC test/nvme/sgl/sgl.o 00:03:05.984 CC test/nvme/e2edp/nvme_dp.o 00:03:05.984 CC test/nvme/overhead/overhead.o 00:03:05.984 CC test/nvme/aer/aer.o 00:03:05.984 CC test/nvme/startup/startup.o 00:03:05.984 CC test/nvme/reset/reset.o 00:03:05.984 CC test/nvme/err_injection/err_injection.o 00:03:06.243 CC test/accel/dif/dif.o 00:03:06.243 CC test/nvme/simple_copy/simple_copy.o 00:03:06.243 CC test/nvme/fdp/fdp.o 00:03:06.243 CC test/nvme/boot_partition/boot_partition.o 00:03:06.243 CC test/nvme/fused_ordering/fused_ordering.o 00:03:06.243 CC test/nvme/cuse/cuse.o 00:03:06.243 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:06.243 CC test/nvme/connect_stress/connect_stress.o 00:03:06.243 CC test/blobfs/mkfs/mkfs.o 00:03:06.243 CC test/lvol/esnap/esnap.o 00:03:06.243 LINK boot_partition 00:03:06.243 LINK startup 00:03:06.243 LINK err_injection 00:03:06.243 LINK simple_copy 00:03:06.243 LINK reserve 00:03:06.243 LINK doorbell_aers 00:03:06.243 LINK connect_stress 00:03:06.243 LINK sgl 00:03:06.243 LINK fused_ordering 00:03:06.503 LINK mkfs 00:03:06.503 LINK aer 00:03:06.503 LINK nvme_compliance 00:03:06.503 LINK nvme_dp 00:03:06.503 CC examples/nvme/hello_world/hello_world.o 00:03:06.503 LINK reset 00:03:06.503 LINK overhead 00:03:06.503 CC examples/nvme/abort/abort.o 00:03:06.503 CC examples/nvme/arbitration/arbitration.o 00:03:06.503 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:06.503 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:06.503 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:06.503 CC examples/nvme/reconnect/reconnect.o 00:03:06.503 CC examples/nvme/hotplug/hotplug.o 00:03:06.503 LINK fdp 00:03:06.503 LINK iscsi_fuzz 00:03:06.503 CC examples/accel/perf/accel_perf.o 00:03:06.503 LINK pmr_persistence 00:03:06.503 CC examples/blob/cli/blobcli.o 00:03:06.503 CC examples/blob/hello_world/hello_blob.o 00:03:06.503 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:06.503 LINK cmb_copy 00:03:06.764 LINK hello_world 00:03:06.764 LINK hotplug 00:03:06.764 LINK dif 00:03:06.764 LINK abort 00:03:06.764 LINK arbitration 00:03:06.764 LINK reconnect 00:03:06.764 LINK nvme_manage 00:03:06.764 LINK hello_blob 00:03:07.026 LINK hello_fsdev 00:03:07.026 LINK accel_perf 00:03:07.026 LINK blobcli 00:03:07.286 LINK cuse 00:03:07.286 CC test/bdev/bdevio/bdevio.o 00:03:07.577 CC examples/bdev/hello_world/hello_bdev.o 00:03:07.577 CC examples/bdev/bdevperf/bdevperf.o 00:03:07.888 LINK bdevio 00:03:07.888 LINK hello_bdev 00:03:08.475 LINK bdevperf 00:03:09.047 CC examples/nvmf/nvmf/nvmf.o 00:03:09.309 LINK nvmf 00:03:10.739 LINK esnap 00:03:11.000 00:03:11.000 real 0m53.143s 00:03:11.000 user 6m17.761s 00:03:11.000 sys 3m5.077s 00:03:11.000 10:43:30 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:11.000 10:43:30 make -- common/autotest_common.sh@10 -- $ set +x 00:03:11.000 ************************************ 00:03:11.000 END TEST make 00:03:11.000 ************************************ 00:03:11.000 10:43:30 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 
00:03:11.000 10:43:30 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:11.000 10:43:30 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:11.000 10:43:30 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:11.000 10:43:30 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:11.000 10:43:30 -- pm/common@44 -- $ pid=1490865 00:03:11.000 10:43:30 -- pm/common@50 -- $ kill -TERM 1490865 00:03:11.000 10:43:30 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:11.000 10:43:30 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:11.000 10:43:30 -- pm/common@44 -- $ pid=1490866 00:03:11.000 10:43:30 -- pm/common@50 -- $ kill -TERM 1490866 00:03:11.000 10:43:30 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:11.000 10:43:30 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:11.000 10:43:30 -- pm/common@44 -- $ pid=1490868 00:03:11.000 10:43:30 -- pm/common@50 -- $ kill -TERM 1490868 00:03:11.000 10:43:30 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:11.000 10:43:30 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:11.000 10:43:30 -- pm/common@44 -- $ pid=1490891 00:03:11.000 10:43:30 -- pm/common@50 -- $ sudo -E kill -TERM 1490891 00:03:11.000 10:43:30 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:11.000 10:43:30 -- common/autotest_common.sh@1691 -- # lcov --version 00:03:11.000 10:43:30 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:11.263 10:43:31 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:11.263 10:43:31 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:11.264 10:43:31 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:11.264 10:43:31 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:11.264 10:43:31 -- scripts/common.sh@336 -- # IFS=.-: 00:03:11.264 10:43:31 -- scripts/common.sh@336 -- # read -ra ver1 00:03:11.264 10:43:31 -- scripts/common.sh@337 -- # IFS=.-: 00:03:11.264 10:43:31 -- scripts/common.sh@337 -- # read -ra ver2 00:03:11.264 10:43:31 -- scripts/common.sh@338 -- # local 'op=<' 00:03:11.264 10:43:31 -- scripts/common.sh@340 -- # ver1_l=2 00:03:11.264 10:43:31 -- scripts/common.sh@341 -- # ver2_l=1 00:03:11.264 10:43:31 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:11.264 10:43:31 -- scripts/common.sh@344 -- # case "$op" in 00:03:11.264 10:43:31 -- scripts/common.sh@345 -- # : 1 00:03:11.264 10:43:31 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:11.264 10:43:31 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:11.264 10:43:31 -- scripts/common.sh@365 -- # decimal 1 00:03:11.264 10:43:31 -- scripts/common.sh@353 -- # local d=1 00:03:11.264 10:43:31 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:11.264 10:43:31 -- scripts/common.sh@355 -- # echo 1 00:03:11.264 10:43:31 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:11.264 10:43:31 -- scripts/common.sh@366 -- # decimal 2 00:03:11.264 10:43:31 -- scripts/common.sh@353 -- # local d=2 00:03:11.264 10:43:31 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:11.264 10:43:31 -- scripts/common.sh@355 -- # echo 2 00:03:11.264 10:43:31 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:11.264 10:43:31 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:11.264 10:43:31 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:11.264 10:43:31 -- scripts/common.sh@368 -- # return 0 00:03:11.264 10:43:31 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:11.264 10:43:31 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:11.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:11.264 --rc genhtml_branch_coverage=1 00:03:11.264 --rc genhtml_function_coverage=1 00:03:11.264 --rc genhtml_legend=1 00:03:11.264 --rc geninfo_all_blocks=1 00:03:11.264 --rc geninfo_unexecuted_blocks=1 00:03:11.264 00:03:11.264 ' 00:03:11.264 10:43:31 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:11.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:11.264 --rc genhtml_branch_coverage=1 00:03:11.264 --rc genhtml_function_coverage=1 00:03:11.264 --rc genhtml_legend=1 00:03:11.264 --rc geninfo_all_blocks=1 00:03:11.264 --rc geninfo_unexecuted_blocks=1 00:03:11.264 00:03:11.264 ' 00:03:11.264 10:43:31 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:11.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:11.264 --rc genhtml_branch_coverage=1 00:03:11.264 --rc genhtml_function_coverage=1 00:03:11.264 --rc genhtml_legend=1 00:03:11.264 --rc geninfo_all_blocks=1 00:03:11.264 --rc geninfo_unexecuted_blocks=1 00:03:11.264 00:03:11.264 ' 00:03:11.264 10:43:31 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:11.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:11.264 --rc genhtml_branch_coverage=1 00:03:11.264 --rc genhtml_function_coverage=1 00:03:11.264 --rc genhtml_legend=1 00:03:11.264 --rc geninfo_all_blocks=1 00:03:11.264 --rc geninfo_unexecuted_blocks=1 00:03:11.264 00:03:11.264 ' 00:03:11.264 10:43:31 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:11.264 10:43:31 -- nvmf/common.sh@7 -- # uname -s 00:03:11.264 10:43:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:11.264 10:43:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:11.264 10:43:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:11.264 10:43:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:11.264 10:43:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:11.264 10:43:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:11.264 10:43:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:11.264 10:43:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:11.264 10:43:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:11.264 10:43:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:11.264 10:43:31 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:03:11.264 10:43:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:03:11.264 10:43:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:11.264 10:43:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:11.264 10:43:31 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:11.264 10:43:31 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:11.264 10:43:31 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:11.264 10:43:31 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:11.264 10:43:31 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:11.264 10:43:31 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:11.264 10:43:31 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:11.264 10:43:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:11.264 10:43:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:11.264 10:43:31 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:11.264 10:43:31 -- paths/export.sh@5 -- # export PATH 00:03:11.264 10:43:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:11.264 10:43:31 -- nvmf/common.sh@51 -- # : 0 00:03:11.264 10:43:31 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:11.264 10:43:31 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:11.264 10:43:31 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:11.264 10:43:31 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:11.264 10:43:31 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:11.264 10:43:31 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:11.264 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:11.264 10:43:31 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:11.264 10:43:31 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:11.264 10:43:31 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:11.264 10:43:31 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:11.264 10:43:31 -- spdk/autotest.sh@32 -- # uname -s 00:03:11.264 10:43:31 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:11.264 10:43:31 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:11.264 10:43:31 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
00:03:11.264 10:43:31 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:11.264 10:43:31 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:11.264 10:43:31 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:11.264 10:43:31 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:11.264 10:43:31 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:11.264 10:43:31 -- spdk/autotest.sh@48 -- # udevadm_pid=1573531 00:03:11.264 10:43:31 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:11.264 10:43:31 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:11.264 10:43:31 -- pm/common@17 -- # local monitor 00:03:11.264 10:43:31 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:11.264 10:43:31 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:11.264 10:43:31 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:11.264 10:43:31 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:11.264 10:43:31 -- pm/common@21 -- # date +%s 00:03:11.264 10:43:31 -- pm/common@21 -- # date +%s 00:03:11.264 10:43:31 -- pm/common@25 -- # sleep 1 00:03:11.264 10:43:31 -- pm/common@21 -- # date +%s 00:03:11.264 10:43:31 -- pm/common@21 -- # date +%s 00:03:11.264 10:43:31 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728463411 00:03:11.264 10:43:31 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728463411 00:03:11.264 10:43:31 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728463411 00:03:11.265 10:43:31 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728463411 00:03:11.265 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728463411_collect-cpu-load.pm.log 00:03:11.265 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728463411_collect-vmstat.pm.log 00:03:11.265 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728463411_collect-cpu-temp.pm.log 00:03:11.265 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728463411_collect-bmc-pm.bmc.pm.log 00:03:12.207 10:43:32 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:12.207 10:43:32 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:12.207 10:43:32 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:12.207 10:43:32 -- common/autotest_common.sh@10 -- # set +x 00:03:12.207 10:43:32 -- spdk/autotest.sh@59 -- # create_test_list 00:03:12.207 10:43:32 -- common/autotest_common.sh@748 -- # xtrace_disable 00:03:12.207 10:43:32 -- common/autotest_common.sh@10 -- # set +x 00:03:12.207 10:43:32 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:12.207 10:43:32 
-- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:12.207 10:43:32 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:12.207 10:43:32 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:12.207 10:43:32 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:12.207 10:43:32 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:12.207 10:43:32 -- common/autotest_common.sh@1455 -- # uname 00:03:12.207 10:43:32 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:12.207 10:43:32 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:12.207 10:43:32 -- common/autotest_common.sh@1475 -- # uname 00:03:12.207 10:43:32 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:12.207 10:43:32 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:12.207 10:43:32 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:12.469 lcov: LCOV version 1.15 00:03:12.469 10:43:32 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:27.383 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:27.383 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:45.511 10:44:02 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:45.511 10:44:02 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:45.511 10:44:02 -- common/autotest_common.sh@10 -- # set +x 00:03:45.511 10:44:02 -- spdk/autotest.sh@78 -- # rm -f 00:03:45.511 10:44:02 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:46.463 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:03:46.463 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:03:46.463 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:03:46.463 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:03:46.463 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:03:46.725 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:03:46.725 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:03:46.725 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:03:46.725 0000:65:00.0 (144d a80a): Already using the nvme driver 00:03:46.725 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:03:46.725 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:03:46.725 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:03:46.725 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:03:46.725 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:03:46.725 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:03:46.725 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:03:46.986 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:03:47.246 10:44:07 -- 
spdk/autotest.sh@83 -- # get_zoned_devs 00:03:47.246 10:44:07 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:03:47.246 10:44:07 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:03:47.246 10:44:07 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:03:47.246 10:44:07 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:47.246 10:44:07 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:03:47.246 10:44:07 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:03:47.246 10:44:07 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:47.246 10:44:07 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:47.246 10:44:07 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:47.246 10:44:07 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:47.246 10:44:07 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:47.246 10:44:07 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:47.246 10:44:07 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:47.246 10:44:07 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:47.246 No valid GPT data, bailing 00:03:47.246 10:44:07 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:47.246 10:44:07 -- scripts/common.sh@394 -- # pt= 00:03:47.246 10:44:07 -- scripts/common.sh@395 -- # return 1 00:03:47.246 10:44:07 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:47.246 1+0 records in 00:03:47.246 1+0 records out 00:03:47.246 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00441491 s, 238 MB/s 00:03:47.246 10:44:07 -- spdk/autotest.sh@105 -- # sync 00:03:47.246 10:44:07 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:47.246 10:44:07 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:47.246 10:44:07 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:57.255 10:44:15 -- spdk/autotest.sh@111 -- # uname -s 00:03:57.255 10:44:15 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:57.255 10:44:15 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:57.255 10:44:15 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:59.165 Hugepages 00:03:59.165 node hugesize free / total 00:03:59.165 node0 1048576kB 0 / 0 00:03:59.165 node0 2048kB 0 / 0 00:03:59.165 node1 1048576kB 0 / 0 00:03:59.165 node1 2048kB 0 / 0 00:03:59.165 00:03:59.165 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:59.165 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:03:59.165 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:03:59.165 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:03:59.165 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:03:59.165 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:03:59.165 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:03:59.165 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:03:59.165 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:03:59.165 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:03:59.165 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:03:59.165 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:03:59.165 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:03:59.165 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:03:59.165 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:03:59.426 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:03:59.426 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:03:59.426 I/OAT 0000:80:01.7 8086 
0b00 1 ioatdma - - 00:03:59.426 10:44:19 -- spdk/autotest.sh@117 -- # uname -s 00:03:59.426 10:44:19 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:59.426 10:44:19 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:59.426 10:44:19 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:02.727 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:02.727 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:02.727 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:02.727 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:02.727 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:02.727 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:02.727 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:02.727 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:02.727 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:02.727 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:02.988 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:02.988 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:02.988 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:02.988 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:02.988 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:02.988 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:04.899 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:04.899 10:44:24 -- common/autotest_common.sh@1515 -- # sleep 1 00:04:06.281 10:44:25 -- common/autotest_common.sh@1516 -- # bdfs=() 00:04:06.281 10:44:25 -- common/autotest_common.sh@1516 -- # local bdfs 00:04:06.281 10:44:25 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:04:06.281 10:44:25 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:04:06.281 10:44:25 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:06.281 10:44:25 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:06.281 10:44:25 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:06.282 10:44:25 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:06.282 10:44:25 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:06.282 10:44:25 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:04:06.282 10:44:25 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:04:06.282 10:44:25 -- common/autotest_common.sh@1520 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:09.701 Waiting for block devices as requested 00:04:09.701 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:04:09.701 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:04:09.701 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:04:09.701 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:04:09.701 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:04:09.701 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:04:09.701 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:04:09.701 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:04:09.701 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:04:09.961 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:04:09.961 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:04:10.222 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:04:10.222 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:04:10.222 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:04:10.222 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:04:10.482 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:04:10.482 0000:00:01.1 (8086 0b00): 
vfio-pci -> ioatdma 00:04:10.743 10:44:30 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:10.743 10:44:30 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:04:10.743 10:44:30 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 00:04:10.743 10:44:30 -- common/autotest_common.sh@1485 -- # grep 0000:65:00.0/nvme/nvme 00:04:10.743 10:44:30 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:04:10.743 10:44:30 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:04:10.743 10:44:30 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:04:10.743 10:44:30 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:04:10.743 10:44:30 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:04:10.743 10:44:30 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:04:10.743 10:44:30 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:04:10.743 10:44:30 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:10.743 10:44:30 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:10.743 10:44:30 -- common/autotest_common.sh@1529 -- # oacs=' 0x5f' 00:04:10.743 10:44:30 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:10.743 10:44:30 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:10.743 10:44:30 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:10.743 10:44:30 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:04:10.743 10:44:30 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:10.743 10:44:30 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:10.743 10:44:30 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:10.743 10:44:30 -- common/autotest_common.sh@1541 -- # continue 00:04:10.743 10:44:30 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:10.743 10:44:30 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:10.743 10:44:30 -- common/autotest_common.sh@10 -- # set +x 00:04:10.743 10:44:30 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:10.743 10:44:30 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:10.743 10:44:30 -- common/autotest_common.sh@10 -- # set +x 00:04:10.743 10:44:30 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:14.948 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:14.948 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:14.948 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:14.948 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:14.948 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:14.948 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:14.948 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:14.948 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:14.948 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:14.948 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:14.948 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:14.948 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:14.948 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:14.948 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:14.948 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:14.948 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:14.948 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:14.948 10:44:34 -- spdk/autotest.sh@127 -- # timing_exit 
afterboot 00:04:14.948 10:44:34 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:14.948 10:44:34 -- common/autotest_common.sh@10 -- # set +x 00:04:14.948 10:44:34 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:14.948 10:44:34 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:04:14.948 10:44:34 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:04:14.948 10:44:34 -- common/autotest_common.sh@1561 -- # bdfs=() 00:04:14.948 10:44:34 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:04:14.948 10:44:34 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:04:14.948 10:44:34 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:04:14.948 10:44:34 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:04:14.948 10:44:34 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:14.948 10:44:34 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:14.948 10:44:34 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:14.948 10:44:34 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:14.948 10:44:34 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:14.948 10:44:34 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:04:14.948 10:44:34 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:04:14.948 10:44:34 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:14.948 10:44:34 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:04:14.948 10:44:34 -- common/autotest_common.sh@1564 -- # device=0xa80a 00:04:14.948 10:44:34 -- common/autotest_common.sh@1565 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:04:14.949 10:44:34 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:04:14.949 10:44:34 -- common/autotest_common.sh@1570 -- # return 0 00:04:14.949 10:44:34 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:04:14.949 10:44:34 -- common/autotest_common.sh@1578 -- # return 0 00:04:14.949 10:44:34 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:14.949 10:44:34 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:14.949 10:44:34 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:14.949 10:44:34 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:14.949 10:44:34 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:14.949 10:44:34 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:14.949 10:44:34 -- common/autotest_common.sh@10 -- # set +x 00:04:14.949 10:44:34 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:14.949 10:44:34 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:14.949 10:44:34 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:14.949 10:44:34 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:14.949 10:44:34 -- common/autotest_common.sh@10 -- # set +x 00:04:15.210 ************************************ 00:04:15.210 START TEST env 00:04:15.210 ************************************ 00:04:15.210 10:44:34 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:15.210 * Looking for test storage... 
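For context, the opal_revert_cleanup step traced above enumerates the NVMe controllers with gen_nvme.sh, pulls their PCI addresses out of the JSON with jq, and keeps only controllers whose PCI device id reads 0x0a54. A minimal sketch of that filter, assuming the same repo layout this job uses:

    bdfs=($(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh \
            | jq -r '.config[].params.traddr'))           # e.g. 0000:65:00.0
    for bdf in "${bdfs[@]}"; do
        device=$(cat "/sys/bus/pci/devices/$bdf/device")  # PCI device id from sysfs
        [[ $device == 0x0a54 ]] && echo "$bdf"            # only 0x0a54 parts get an Opal revert
    done

Here the only controller is 144d:a80a (a Samsung part), so the comparison fails, the list stays empty, and the revert is skipped.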
00:04:15.210 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:15.210 10:44:35 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:15.210 10:44:35 env -- common/autotest_common.sh@1691 -- # lcov --version 00:04:15.210 10:44:35 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:15.210 10:44:35 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:15.210 10:44:35 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:15.210 10:44:35 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:15.210 10:44:35 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:15.210 10:44:35 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:15.210 10:44:35 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:15.210 10:44:35 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:15.210 10:44:35 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:15.210 10:44:35 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:15.210 10:44:35 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:15.210 10:44:35 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:15.210 10:44:35 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:15.210 10:44:35 env -- scripts/common.sh@344 -- # case "$op" in 00:04:15.210 10:44:35 env -- scripts/common.sh@345 -- # : 1 00:04:15.210 10:44:35 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:15.210 10:44:35 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:15.210 10:44:35 env -- scripts/common.sh@365 -- # decimal 1 00:04:15.210 10:44:35 env -- scripts/common.sh@353 -- # local d=1 00:04:15.210 10:44:35 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:15.210 10:44:35 env -- scripts/common.sh@355 -- # echo 1 00:04:15.210 10:44:35 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:15.210 10:44:35 env -- scripts/common.sh@366 -- # decimal 2 00:04:15.210 10:44:35 env -- scripts/common.sh@353 -- # local d=2 00:04:15.210 10:44:35 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:15.210 10:44:35 env -- scripts/common.sh@355 -- # echo 2 00:04:15.210 10:44:35 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:15.210 10:44:35 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:15.210 10:44:35 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:15.210 10:44:35 env -- scripts/common.sh@368 -- # return 0 00:04:15.210 10:44:35 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:15.210 10:44:35 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:15.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.210 --rc genhtml_branch_coverage=1 00:04:15.210 --rc genhtml_function_coverage=1 00:04:15.210 --rc genhtml_legend=1 00:04:15.210 --rc geninfo_all_blocks=1 00:04:15.210 --rc geninfo_unexecuted_blocks=1 00:04:15.210 00:04:15.210 ' 00:04:15.210 10:44:35 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:15.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.210 --rc genhtml_branch_coverage=1 00:04:15.210 --rc genhtml_function_coverage=1 00:04:15.210 --rc genhtml_legend=1 00:04:15.210 --rc geninfo_all_blocks=1 00:04:15.210 --rc geninfo_unexecuted_blocks=1 00:04:15.210 00:04:15.210 ' 00:04:15.210 10:44:35 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:15.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.210 --rc genhtml_branch_coverage=1 00:04:15.210 --rc genhtml_function_coverage=1 
00:04:15.210 --rc genhtml_legend=1 00:04:15.210 --rc geninfo_all_blocks=1 00:04:15.210 --rc geninfo_unexecuted_blocks=1 00:04:15.210 00:04:15.210 ' 00:04:15.210 10:44:35 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:15.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.210 --rc genhtml_branch_coverage=1 00:04:15.210 --rc genhtml_function_coverage=1 00:04:15.210 --rc genhtml_legend=1 00:04:15.210 --rc geninfo_all_blocks=1 00:04:15.210 --rc geninfo_unexecuted_blocks=1 00:04:15.210 00:04:15.210 ' 00:04:15.210 10:44:35 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:15.210 10:44:35 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:15.210 10:44:35 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:15.210 10:44:35 env -- common/autotest_common.sh@10 -- # set +x 00:04:15.210 ************************************ 00:04:15.210 START TEST env_memory 00:04:15.210 ************************************ 00:04:15.210 10:44:35 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:15.471 00:04:15.471 00:04:15.471 CUnit - A unit testing framework for C - Version 2.1-3 00:04:15.471 http://cunit.sourceforge.net/ 00:04:15.471 00:04:15.471 00:04:15.471 Suite: memory 00:04:15.471 Test: alloc and free memory map ...[2024-10-09 10:44:35.253013] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:15.471 passed 00:04:15.471 Test: mem map translation ...[2024-10-09 10:44:35.278791] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:15.471 [2024-10-09 10:44:35.278818] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:15.471 [2024-10-09 10:44:35.278865] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:15.471 [2024-10-09 10:44:35.278872] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:15.471 passed 00:04:15.471 Test: mem map registration ...[2024-10-09 10:44:35.334271] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:15.471 [2024-10-09 10:44:35.334292] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:15.471 passed 00:04:15.471 Test: mem map adjacent registrations ...passed 00:04:15.471 00:04:15.471 Run Summary: Type Total Ran Passed Failed Inactive 00:04:15.471 suites 1 1 n/a 0 0 00:04:15.471 tests 4 4 4 0 0 00:04:15.471 asserts 152 152 152 0 n/a 00:04:15.471 00:04:15.471 Elapsed time = 0.196 seconds 00:04:15.471 00:04:15.471 real 0m0.211s 00:04:15.471 user 0m0.197s 00:04:15.471 sys 0m0.013s 00:04:15.471 10:44:35 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:15.471 10:44:35 env.env_memory -- common/autotest_common.sh@10 -- # set +x 
00:04:15.471 ************************************ 00:04:15.471 END TEST env_memory 00:04:15.471 ************************************ 00:04:15.471 10:44:35 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:15.471 10:44:35 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:15.471 10:44:35 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:15.471 10:44:35 env -- common/autotest_common.sh@10 -- # set +x 00:04:15.732 ************************************ 00:04:15.732 START TEST env_vtophys 00:04:15.732 ************************************ 00:04:15.732 10:44:35 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:15.732 EAL: lib.eal log level changed from notice to debug 00:04:15.732 EAL: Detected lcore 0 as core 0 on socket 0 00:04:15.732 EAL: Detected lcore 1 as core 1 on socket 0 00:04:15.732 EAL: Detected lcore 2 as core 2 on socket 0 00:04:15.732 EAL: Detected lcore 3 as core 3 on socket 0 00:04:15.732 EAL: Detected lcore 4 as core 4 on socket 0 00:04:15.732 EAL: Detected lcore 5 as core 5 on socket 0 00:04:15.732 EAL: Detected lcore 6 as core 6 on socket 0 00:04:15.732 EAL: Detected lcore 7 as core 7 on socket 0 00:04:15.732 EAL: Detected lcore 8 as core 8 on socket 0 00:04:15.732 EAL: Detected lcore 9 as core 9 on socket 0 00:04:15.732 EAL: Detected lcore 10 as core 10 on socket 0 00:04:15.732 EAL: Detected lcore 11 as core 11 on socket 0 00:04:15.732 EAL: Detected lcore 12 as core 12 on socket 0 00:04:15.732 EAL: Detected lcore 13 as core 13 on socket 0 00:04:15.732 EAL: Detected lcore 14 as core 14 on socket 0 00:04:15.732 EAL: Detected lcore 15 as core 15 on socket 0 00:04:15.732 EAL: Detected lcore 16 as core 16 on socket 0 00:04:15.732 EAL: Detected lcore 17 as core 17 on socket 0 00:04:15.732 EAL: Detected lcore 18 as core 18 on socket 0 00:04:15.732 EAL: Detected lcore 19 as core 19 on socket 0 00:04:15.732 EAL: Detected lcore 20 as core 20 on socket 0 00:04:15.732 EAL: Detected lcore 21 as core 21 on socket 0 00:04:15.732 EAL: Detected lcore 22 as core 22 on socket 0 00:04:15.732 EAL: Detected lcore 23 as core 23 on socket 0 00:04:15.732 EAL: Detected lcore 24 as core 24 on socket 0 00:04:15.732 EAL: Detected lcore 25 as core 25 on socket 0 00:04:15.732 EAL: Detected lcore 26 as core 26 on socket 0 00:04:15.732 EAL: Detected lcore 27 as core 27 on socket 0 00:04:15.732 EAL: Detected lcore 28 as core 28 on socket 0 00:04:15.732 EAL: Detected lcore 29 as core 29 on socket 0 00:04:15.732 EAL: Detected lcore 30 as core 30 on socket 0 00:04:15.732 EAL: Detected lcore 31 as core 31 on socket 0 00:04:15.732 EAL: Detected lcore 32 as core 32 on socket 0 00:04:15.732 EAL: Detected lcore 33 as core 33 on socket 0 00:04:15.732 EAL: Detected lcore 34 as core 34 on socket 0 00:04:15.732 EAL: Detected lcore 35 as core 35 on socket 0 00:04:15.732 EAL: Detected lcore 36 as core 0 on socket 1 00:04:15.732 EAL: Detected lcore 37 as core 1 on socket 1 00:04:15.732 EAL: Detected lcore 38 as core 2 on socket 1 00:04:15.732 EAL: Detected lcore 39 as core 3 on socket 1 00:04:15.732 EAL: Detected lcore 40 as core 4 on socket 1 00:04:15.732 EAL: Detected lcore 41 as core 5 on socket 1 00:04:15.732 EAL: Detected lcore 42 as core 6 on socket 1 00:04:15.732 EAL: Detected lcore 43 as core 7 on socket 1 00:04:15.732 EAL: Detected lcore 44 as core 8 on socket 1 00:04:15.732 EAL: Detected lcore 45 as core 9 on socket 1 
00:04:15.732 EAL: Detected lcore 46 as core 10 on socket 1 00:04:15.732 EAL: Detected lcore 47 as core 11 on socket 1 00:04:15.732 EAL: Detected lcore 48 as core 12 on socket 1 00:04:15.732 EAL: Detected lcore 49 as core 13 on socket 1 00:04:15.732 EAL: Detected lcore 50 as core 14 on socket 1 00:04:15.732 EAL: Detected lcore 51 as core 15 on socket 1 00:04:15.732 EAL: Detected lcore 52 as core 16 on socket 1 00:04:15.732 EAL: Detected lcore 53 as core 17 on socket 1 00:04:15.732 EAL: Detected lcore 54 as core 18 on socket 1 00:04:15.732 EAL: Detected lcore 55 as core 19 on socket 1 00:04:15.732 EAL: Detected lcore 56 as core 20 on socket 1 00:04:15.732 EAL: Detected lcore 57 as core 21 on socket 1 00:04:15.732 EAL: Detected lcore 58 as core 22 on socket 1 00:04:15.732 EAL: Detected lcore 59 as core 23 on socket 1 00:04:15.732 EAL: Detected lcore 60 as core 24 on socket 1 00:04:15.732 EAL: Detected lcore 61 as core 25 on socket 1 00:04:15.732 EAL: Detected lcore 62 as core 26 on socket 1 00:04:15.732 EAL: Detected lcore 63 as core 27 on socket 1 00:04:15.732 EAL: Detected lcore 64 as core 28 on socket 1 00:04:15.732 EAL: Detected lcore 65 as core 29 on socket 1 00:04:15.732 EAL: Detected lcore 66 as core 30 on socket 1 00:04:15.732 EAL: Detected lcore 67 as core 31 on socket 1 00:04:15.733 EAL: Detected lcore 68 as core 32 on socket 1 00:04:15.733 EAL: Detected lcore 69 as core 33 on socket 1 00:04:15.733 EAL: Detected lcore 70 as core 34 on socket 1 00:04:15.733 EAL: Detected lcore 71 as core 35 on socket 1 00:04:15.733 EAL: Detected lcore 72 as core 0 on socket 0 00:04:15.733 EAL: Detected lcore 73 as core 1 on socket 0 00:04:15.733 EAL: Detected lcore 74 as core 2 on socket 0 00:04:15.733 EAL: Detected lcore 75 as core 3 on socket 0 00:04:15.733 EAL: Detected lcore 76 as core 4 on socket 0 00:04:15.733 EAL: Detected lcore 77 as core 5 on socket 0 00:04:15.733 EAL: Detected lcore 78 as core 6 on socket 0 00:04:15.733 EAL: Detected lcore 79 as core 7 on socket 0 00:04:15.733 EAL: Detected lcore 80 as core 8 on socket 0 00:04:15.733 EAL: Detected lcore 81 as core 9 on socket 0 00:04:15.733 EAL: Detected lcore 82 as core 10 on socket 0 00:04:15.733 EAL: Detected lcore 83 as core 11 on socket 0 00:04:15.733 EAL: Detected lcore 84 as core 12 on socket 0 00:04:15.733 EAL: Detected lcore 85 as core 13 on socket 0 00:04:15.733 EAL: Detected lcore 86 as core 14 on socket 0 00:04:15.733 EAL: Detected lcore 87 as core 15 on socket 0 00:04:15.733 EAL: Detected lcore 88 as core 16 on socket 0 00:04:15.733 EAL: Detected lcore 89 as core 17 on socket 0 00:04:15.733 EAL: Detected lcore 90 as core 18 on socket 0 00:04:15.733 EAL: Detected lcore 91 as core 19 on socket 0 00:04:15.733 EAL: Detected lcore 92 as core 20 on socket 0 00:04:15.733 EAL: Detected lcore 93 as core 21 on socket 0 00:04:15.733 EAL: Detected lcore 94 as core 22 on socket 0 00:04:15.733 EAL: Detected lcore 95 as core 23 on socket 0 00:04:15.733 EAL: Detected lcore 96 as core 24 on socket 0 00:04:15.733 EAL: Detected lcore 97 as core 25 on socket 0 00:04:15.733 EAL: Detected lcore 98 as core 26 on socket 0 00:04:15.733 EAL: Detected lcore 99 as core 27 on socket 0 00:04:15.733 EAL: Detected lcore 100 as core 28 on socket 0 00:04:15.733 EAL: Detected lcore 101 as core 29 on socket 0 00:04:15.733 EAL: Detected lcore 102 as core 30 on socket 0 00:04:15.733 EAL: Detected lcore 103 as core 31 on socket 0 00:04:15.733 EAL: Detected lcore 104 as core 32 on socket 0 00:04:15.733 EAL: Detected lcore 105 as core 33 on socket 0 00:04:15.733 EAL: 
Detected lcore 106 as core 34 on socket 0 00:04:15.733 EAL: Detected lcore 107 as core 35 on socket 0 00:04:15.733 EAL: Detected lcore 108 as core 0 on socket 1 00:04:15.733 EAL: Detected lcore 109 as core 1 on socket 1 00:04:15.733 EAL: Detected lcore 110 as core 2 on socket 1 00:04:15.733 EAL: Detected lcore 111 as core 3 on socket 1 00:04:15.733 EAL: Detected lcore 112 as core 4 on socket 1 00:04:15.733 EAL: Detected lcore 113 as core 5 on socket 1 00:04:15.733 EAL: Detected lcore 114 as core 6 on socket 1 00:04:15.733 EAL: Detected lcore 115 as core 7 on socket 1 00:04:15.733 EAL: Detected lcore 116 as core 8 on socket 1 00:04:15.733 EAL: Detected lcore 117 as core 9 on socket 1 00:04:15.733 EAL: Detected lcore 118 as core 10 on socket 1 00:04:15.733 EAL: Detected lcore 119 as core 11 on socket 1 00:04:15.733 EAL: Detected lcore 120 as core 12 on socket 1 00:04:15.733 EAL: Detected lcore 121 as core 13 on socket 1 00:04:15.733 EAL: Detected lcore 122 as core 14 on socket 1 00:04:15.733 EAL: Detected lcore 123 as core 15 on socket 1 00:04:15.733 EAL: Detected lcore 124 as core 16 on socket 1 00:04:15.733 EAL: Detected lcore 125 as core 17 on socket 1 00:04:15.733 EAL: Detected lcore 126 as core 18 on socket 1 00:04:15.733 EAL: Detected lcore 127 as core 19 on socket 1 00:04:15.733 EAL: Skipped lcore 128 as core 20 on socket 1 00:04:15.733 EAL: Skipped lcore 129 as core 21 on socket 1 00:04:15.733 EAL: Skipped lcore 130 as core 22 on socket 1 00:04:15.733 EAL: Skipped lcore 131 as core 23 on socket 1 00:04:15.733 EAL: Skipped lcore 132 as core 24 on socket 1 00:04:15.733 EAL: Skipped lcore 133 as core 25 on socket 1 00:04:15.733 EAL: Skipped lcore 134 as core 26 on socket 1 00:04:15.733 EAL: Skipped lcore 135 as core 27 on socket 1 00:04:15.733 EAL: Skipped lcore 136 as core 28 on socket 1 00:04:15.733 EAL: Skipped lcore 137 as core 29 on socket 1 00:04:15.733 EAL: Skipped lcore 138 as core 30 on socket 1 00:04:15.733 EAL: Skipped lcore 139 as core 31 on socket 1 00:04:15.733 EAL: Skipped lcore 140 as core 32 on socket 1 00:04:15.733 EAL: Skipped lcore 141 as core 33 on socket 1 00:04:15.733 EAL: Skipped lcore 142 as core 34 on socket 1 00:04:15.733 EAL: Skipped lcore 143 as core 35 on socket 1 00:04:15.733 EAL: Maximum logical cores by configuration: 128 00:04:15.733 EAL: Detected CPU lcores: 128 00:04:15.733 EAL: Detected NUMA nodes: 2 00:04:15.733 EAL: Checking presence of .so 'librte_eal.so.25.0' 00:04:15.733 EAL: Detected shared linkage of DPDK 00:04:15.733 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_pci.so.25.0 00:04:15.733 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_vdev.so.25.0 00:04:15.733 EAL: Registered [vdev] bus. 
00:04:15.733 EAL: bus.vdev log level changed from disabled to notice 00:04:15.733 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0/librte_mempool_ring.so.25.0 00:04:15.733 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0/librte_net_i40e.so.25.0 00:04:15.733 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:04:15.733 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:04:15.733 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_pci.so 00:04:15.733 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_vdev.so 00:04:15.733 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0/librte_mempool_ring.so 00:04:15.733 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-25.0/librte_net_i40e.so 00:04:15.733 EAL: No shared files mode enabled, IPC will be disabled 00:04:15.733 EAL: No shared files mode enabled, IPC is disabled 00:04:15.733 EAL: Bus pci wants IOVA as 'DC' 00:04:15.733 EAL: Bus vdev wants IOVA as 'DC' 00:04:15.733 EAL: Buses did not request a specific IOVA mode. 00:04:15.733 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:15.733 EAL: Selected IOVA mode 'VA' 00:04:15.733 EAL: Probing VFIO support... 00:04:15.733 EAL: IOMMU type 1 (Type 1) is supported 00:04:15.733 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:15.733 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:15.733 EAL: VFIO support initialized 00:04:15.733 EAL: Ask a virtual area of 0x2e000 bytes 00:04:15.733 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:15.733 EAL: Setting up physically contiguous memory... 
00:04:15.733 EAL: Setting maximum number of open files to 524288 00:04:15.733 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:15.733 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:15.733 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:15.733 EAL: Ask a virtual area of 0x61000 bytes 00:04:15.733 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:15.733 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:15.733 EAL: Ask a virtual area of 0x400000000 bytes 00:04:15.733 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:15.733 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:15.733 EAL: Ask a virtual area of 0x61000 bytes 00:04:15.733 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:15.733 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:15.733 EAL: Ask a virtual area of 0x400000000 bytes 00:04:15.733 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:15.733 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:15.733 EAL: Ask a virtual area of 0x61000 bytes 00:04:15.733 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:15.733 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:15.733 EAL: Ask a virtual area of 0x400000000 bytes 00:04:15.733 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:15.733 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:15.733 EAL: Ask a virtual area of 0x61000 bytes 00:04:15.733 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:15.733 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:15.733 EAL: Ask a virtual area of 0x400000000 bytes 00:04:15.733 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:15.733 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:15.733 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:15.733 EAL: Ask a virtual area of 0x61000 bytes 00:04:15.733 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:15.733 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:15.733 EAL: Ask a virtual area of 0x400000000 bytes 00:04:15.733 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:15.733 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:15.733 EAL: Ask a virtual area of 0x61000 bytes 00:04:15.733 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:15.733 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:15.733 EAL: Ask a virtual area of 0x400000000 bytes 00:04:15.733 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:15.733 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:15.733 EAL: Ask a virtual area of 0x61000 bytes 00:04:15.733 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:15.733 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:15.733 EAL: Ask a virtual area of 0x400000000 bytes 00:04:15.733 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:15.733 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:15.733 EAL: Ask a virtual area of 0x61000 bytes 00:04:15.733 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:15.733 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:15.733 EAL: Ask a virtual area of 0x400000000 bytes 00:04:15.733 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:04:15.733 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:15.733 EAL: Hugepages will be freed exactly as allocated. 00:04:15.733 EAL: No shared files mode enabled, IPC is disabled 00:04:15.733 EAL: No shared files mode enabled, IPC is disabled 00:04:15.733 EAL: Refined arch frequency 2400000000 to measured frequency 2394369392 00:04:15.733 EAL: TSC frequency is ~2394400 KHz 00:04:15.733 EAL: Main lcore 0 is ready (tid=7fbe498fba00;cpuset=[0]) 00:04:15.733 EAL: Trying to obtain current memory policy. 00:04:15.733 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:15.733 EAL: Restoring previous memory policy: 0 00:04:15.733 EAL: request: mp_malloc_sync 00:04:15.733 EAL: No shared files mode enabled, IPC is disabled 00:04:15.733 EAL: Heap on socket 0 was expanded by 2MB 00:04:15.733 EAL: No shared files mode enabled, IPC is disabled 00:04:15.733 EAL: No shared files mode enabled, IPC is disabled 00:04:15.733 EAL: Mem event callback 'spdk:(nil)' registered 00:04:15.733 00:04:15.733 00:04:15.733 CUnit - A unit testing framework for C - Version 2.1-3 00:04:15.733 http://cunit.sourceforge.net/ 00:04:15.733 00:04:15.733 00:04:15.733 Suite: components_suite 00:04:15.733 Test: vtophys_malloc_test ...passed 00:04:15.733 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:15.733 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:15.733 EAL: Restoring previous memory policy: 4 00:04:15.733 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.733 EAL: request: mp_malloc_sync 00:04:15.733 EAL: No shared files mode enabled, IPC is disabled 00:04:15.734 EAL: Heap on socket 0 was expanded by 4MB 00:04:15.734 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.734 EAL: request: mp_malloc_sync 00:04:15.734 EAL: No shared files mode enabled, IPC is disabled 00:04:15.734 EAL: Heap on socket 0 was shrunk by 4MB 00:04:15.734 EAL: Trying to obtain current memory policy. 00:04:15.734 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:15.734 EAL: Restoring previous memory policy: 4 00:04:15.734 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.734 EAL: request: mp_malloc_sync 00:04:15.734 EAL: No shared files mode enabled, IPC is disabled 00:04:15.734 EAL: Heap on socket 0 was expanded by 6MB 00:04:15.734 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.734 EAL: request: mp_malloc_sync 00:04:15.734 EAL: No shared files mode enabled, IPC is disabled 00:04:15.734 EAL: Heap on socket 0 was shrunk by 6MB 00:04:15.734 EAL: Trying to obtain current memory policy. 00:04:15.734 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:15.734 EAL: Restoring previous memory policy: 4 00:04:15.734 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.734 EAL: request: mp_malloc_sync 00:04:15.734 EAL: No shared files mode enabled, IPC is disabled 00:04:15.734 EAL: Heap on socket 0 was expanded by 10MB 00:04:15.734 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.734 EAL: request: mp_malloc_sync 00:04:15.734 EAL: No shared files mode enabled, IPC is disabled 00:04:15.734 EAL: Heap on socket 0 was shrunk by 10MB 00:04:15.734 EAL: Trying to obtain current memory policy. 
00:04:15.734 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:15.734 EAL: Restoring previous memory policy: 4 00:04:15.734 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.734 EAL: request: mp_malloc_sync 00:04:15.734 EAL: No shared files mode enabled, IPC is disabled 00:04:15.734 EAL: Heap on socket 0 was expanded by 18MB 00:04:15.734 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.734 EAL: request: mp_malloc_sync 00:04:15.734 EAL: No shared files mode enabled, IPC is disabled 00:04:15.734 EAL: Heap on socket 0 was shrunk by 18MB 00:04:15.734 EAL: Trying to obtain current memory policy. 00:04:15.734 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:15.734 EAL: Restoring previous memory policy: 4 00:04:15.734 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.734 EAL: request: mp_malloc_sync 00:04:15.734 EAL: No shared files mode enabled, IPC is disabled 00:04:15.734 EAL: Heap on socket 0 was expanded by 34MB 00:04:15.734 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.734 EAL: request: mp_malloc_sync 00:04:15.734 EAL: No shared files mode enabled, IPC is disabled 00:04:15.734 EAL: Heap on socket 0 was shrunk by 34MB 00:04:15.734 EAL: Trying to obtain current memory policy. 00:04:15.734 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:15.734 EAL: Restoring previous memory policy: 4 00:04:15.734 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.734 EAL: request: mp_malloc_sync 00:04:15.734 EAL: No shared files mode enabled, IPC is disabled 00:04:15.734 EAL: Heap on socket 0 was expanded by 66MB 00:04:15.734 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.734 EAL: request: mp_malloc_sync 00:04:15.734 EAL: No shared files mode enabled, IPC is disabled 00:04:15.734 EAL: Heap on socket 0 was shrunk by 66MB 00:04:15.734 EAL: Trying to obtain current memory policy. 00:04:15.734 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:15.734 EAL: Restoring previous memory policy: 4 00:04:15.734 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.734 EAL: request: mp_malloc_sync 00:04:15.734 EAL: No shared files mode enabled, IPC is disabled 00:04:15.734 EAL: Heap on socket 0 was expanded by 130MB 00:04:15.994 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.994 EAL: request: mp_malloc_sync 00:04:15.994 EAL: No shared files mode enabled, IPC is disabled 00:04:15.994 EAL: Heap on socket 0 was shrunk by 130MB 00:04:15.994 EAL: Trying to obtain current memory policy. 00:04:15.994 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:15.994 EAL: Restoring previous memory policy: 4 00:04:15.995 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.995 EAL: request: mp_malloc_sync 00:04:15.995 EAL: No shared files mode enabled, IPC is disabled 00:04:15.995 EAL: Heap on socket 0 was expanded by 258MB 00:04:15.995 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.995 EAL: request: mp_malloc_sync 00:04:15.995 EAL: No shared files mode enabled, IPC is disabled 00:04:15.995 EAL: Heap on socket 0 was shrunk by 258MB 00:04:15.995 EAL: Trying to obtain current memory policy. 
00:04:15.995 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:15.995 EAL: Restoring previous memory policy: 4 00:04:15.995 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.995 EAL: request: mp_malloc_sync 00:04:15.995 EAL: No shared files mode enabled, IPC is disabled 00:04:15.995 EAL: Heap on socket 0 was expanded by 514MB 00:04:15.995 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.254 EAL: request: mp_malloc_sync 00:04:16.254 EAL: No shared files mode enabled, IPC is disabled 00:04:16.254 EAL: Heap on socket 0 was shrunk by 514MB 00:04:16.254 EAL: Trying to obtain current memory policy. 00:04:16.255 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:16.255 EAL: Restoring previous memory policy: 4 00:04:16.255 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.255 EAL: request: mp_malloc_sync 00:04:16.255 EAL: No shared files mode enabled, IPC is disabled 00:04:16.255 EAL: Heap on socket 0 was expanded by 1026MB 00:04:16.255 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.514 EAL: request: mp_malloc_sync 00:04:16.514 EAL: No shared files mode enabled, IPC is disabled 00:04:16.514 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:16.514 passed 00:04:16.514 00:04:16.514 Run Summary: Type Total Ran Passed Failed Inactive 00:04:16.514 suites 1 1 n/a 0 0 00:04:16.514 tests 2 2 2 0 0 00:04:16.514 asserts 497 497 497 0 n/a 00:04:16.514 00:04:16.514 Elapsed time = 0.643 seconds 00:04:16.514 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.514 EAL: request: mp_malloc_sync 00:04:16.514 EAL: No shared files mode enabled, IPC is disabled 00:04:16.514 EAL: Heap on socket 0 was shrunk by 2MB 00:04:16.514 EAL: No shared files mode enabled, IPC is disabled 00:04:16.514 EAL: No shared files mode enabled, IPC is disabled 00:04:16.514 EAL: No shared files mode enabled, IPC is disabled 00:04:16.514 00:04:16.514 real 0m0.862s 00:04:16.514 user 0m0.393s 00:04:16.514 sys 0m0.343s 00:04:16.514 10:44:36 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:16.514 10:44:36 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:16.514 ************************************ 00:04:16.514 END TEST env_vtophys 00:04:16.514 ************************************ 00:04:16.514 10:44:36 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:16.514 10:44:36 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:16.514 10:44:36 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:16.514 10:44:36 env -- common/autotest_common.sh@10 -- # set +x 00:04:16.514 ************************************ 00:04:16.514 START TEST env_pci 00:04:16.514 ************************************ 00:04:16.514 10:44:36 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:16.514 00:04:16.514 00:04:16.514 CUnit - A unit testing framework for C - Version 2.1-3 00:04:16.514 http://cunit.sourceforge.net/ 00:04:16.514 00:04:16.514 00:04:16.514 Suite: pci 00:04:16.514 Test: pci_hook ...[2024-10-09 10:44:36.451191] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1592961 has claimed it 00:04:16.514 EAL: Cannot find device (10000:00:01.0) 00:04:16.514 EAL: Failed to attach device on primary process 00:04:16.514 passed 00:04:16.514 00:04:16.514 Run Summary: Type Total Ran Passed Failed Inactive 
00:04:16.515 suites 1 1 n/a 0 0 00:04:16.515 tests 1 1 1 0 0 00:04:16.515 asserts 25 25 25 0 n/a 00:04:16.515 00:04:16.515 Elapsed time = 0.031 seconds 00:04:16.515 00:04:16.515 real 0m0.051s 00:04:16.515 user 0m0.019s 00:04:16.515 sys 0m0.032s 00:04:16.515 10:44:36 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:16.515 10:44:36 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:16.515 ************************************ 00:04:16.515 END TEST env_pci 00:04:16.515 ************************************ 00:04:16.776 10:44:36 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:16.776 10:44:36 env -- env/env.sh@15 -- # uname 00:04:16.776 10:44:36 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:16.776 10:44:36 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:16.776 10:44:36 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:16.776 10:44:36 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:04:16.776 10:44:36 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:16.776 10:44:36 env -- common/autotest_common.sh@10 -- # set +x 00:04:16.776 ************************************ 00:04:16.776 START TEST env_dpdk_post_init 00:04:16.776 ************************************ 00:04:16.776 10:44:36 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:16.776 EAL: Detected CPU lcores: 128 00:04:16.776 EAL: Detected NUMA nodes: 2 00:04:16.776 EAL: Detected shared linkage of DPDK 00:04:16.776 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:16.776 EAL: Selected IOVA mode 'VA' 00:04:16.776 EAL: VFIO support initialized 00:04:17.036 EAL: Using IOMMU type 1 (Type 1) 00:04:21.237 Starting DPDK initialization... 00:04:21.237 Starting SPDK post initialization... 00:04:21.237 SPDK NVMe probe 00:04:21.237 Attaching to 0000:65:00.0 00:04:21.237 Attached to 0000:65:00.0 00:04:21.237 Cleaning up... 
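The env_dpdk_post_init run above brings up EAL first and only then lets SPDK probe and attach to the NVMe controller at 0000:65:00.0. The same attach can be exercised by hand with SPDK's identify example; a sketch, assuming the example binaries were built in this workspace:

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # probe a single PCIe controller by transport id, as the test just did
    sudo ./build/examples/identify -r 'trtype:PCIe traddr:0000:65:00.0'

Either path can only claim the controller while it is bound to vfio-pci (or uio), which is what the earlier setup.sh runs in this log arrange.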
00:04:22.619 00:04:22.619 real 0m5.819s 00:04:22.619 user 0m0.087s 00:04:22.619 sys 0m0.172s 00:04:22.619 10:44:42 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:22.619 10:44:42 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:22.619 ************************************ 00:04:22.619 END TEST env_dpdk_post_init 00:04:22.619 ************************************ 00:04:22.619 10:44:42 env -- env/env.sh@26 -- # uname 00:04:22.619 10:44:42 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:22.619 10:44:42 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:22.619 10:44:42 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:22.619 10:44:42 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:22.619 10:44:42 env -- common/autotest_common.sh@10 -- # set +x 00:04:22.619 ************************************ 00:04:22.619 START TEST env_mem_callbacks 00:04:22.619 ************************************ 00:04:22.619 10:44:42 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:22.619 EAL: Detected CPU lcores: 128 00:04:22.619 EAL: Detected NUMA nodes: 2 00:04:22.619 EAL: Detected shared linkage of DPDK 00:04:22.619 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:22.619 EAL: Selected IOVA mode 'VA' 00:04:22.619 EAL: VFIO support initialized 00:04:22.881 00:04:22.881 00:04:22.881 CUnit - A unit testing framework for C - Version 2.1-3 00:04:22.881 http://cunit.sourceforge.net/ 00:04:22.881 00:04:22.881 00:04:22.881 Suite: memory 00:04:22.881 Test: test ... 00:04:22.881 register 0x200000200000 2097152 00:04:22.881 malloc 3145728 00:04:22.881 register 0x200000400000 4194304 00:04:22.881 buf 0x200000500000 len 3145728 PASSED 00:04:22.881 malloc 64 00:04:22.881 buf 0x2000004fff40 len 64 PASSED 00:04:22.881 malloc 4194304 00:04:22.881 register 0x200000800000 6291456 00:04:22.881 buf 0x200000a00000 len 4194304 PASSED 00:04:22.881 free 0x200000500000 3145728 00:04:22.881 free 0x2000004fff40 64 00:04:22.881 unregister 0x200000400000 4194304 PASSED 00:04:22.881 free 0x200000a00000 4194304 00:04:22.881 unregister 0x200000800000 6291456 PASSED 00:04:22.881 malloc 8388608 00:04:22.881 register 0x200000400000 10485760 00:04:22.881 buf 0x200000600000 len 8388608 PASSED 00:04:22.881 free 0x200000600000 8388608 00:04:22.881 unregister 0x200000400000 10485760 PASSED 00:04:22.881 passed 00:04:22.881 00:04:22.881 Run Summary: Type Total Ran Passed Failed Inactive 00:04:22.881 suites 1 1 n/a 0 0 00:04:22.881 tests 1 1 1 0 0 00:04:22.881 asserts 15 15 15 0 n/a 00:04:22.881 00:04:22.881 Elapsed time = 0.004 seconds 00:04:22.881 00:04:22.881 real 0m0.162s 00:04:22.881 user 0m0.025s 00:04:22.881 sys 0m0.037s 00:04:22.881 10:44:42 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:22.881 10:44:42 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:22.881 ************************************ 00:04:22.881 END TEST env_mem_callbacks 00:04:22.881 ************************************ 00:04:22.881 00:04:22.881 real 0m7.727s 00:04:22.881 user 0m0.998s 00:04:22.881 sys 0m0.976s 00:04:22.881 10:44:42 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:22.881 10:44:42 env -- common/autotest_common.sh@10 -- # set +x 00:04:22.881 ************************************ 00:04:22.881 END TEST env 
00:04:22.881 ************************************ 00:04:22.881 10:44:42 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:22.881 10:44:42 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:22.881 10:44:42 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:22.881 10:44:42 -- common/autotest_common.sh@10 -- # set +x 00:04:22.881 ************************************ 00:04:22.881 START TEST rpc 00:04:22.881 ************************************ 00:04:22.881 10:44:42 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:22.881 * Looking for test storage... 00:04:22.881 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:22.881 10:44:42 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:22.881 10:44:42 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:22.881 10:44:42 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:23.143 10:44:42 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:23.143 10:44:42 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:23.143 10:44:42 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:23.143 10:44:42 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:23.143 10:44:42 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:23.143 10:44:42 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:23.143 10:44:42 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:23.143 10:44:42 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:23.143 10:44:42 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:23.143 10:44:42 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:23.143 10:44:42 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:23.143 10:44:42 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:23.143 10:44:42 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:23.143 10:44:42 rpc -- scripts/common.sh@345 -- # : 1 00:04:23.143 10:44:42 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:23.143 10:44:42 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:23.143 10:44:42 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:23.143 10:44:42 rpc -- scripts/common.sh@353 -- # local d=1 00:04:23.143 10:44:42 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:23.143 10:44:42 rpc -- scripts/common.sh@355 -- # echo 1 00:04:23.143 10:44:42 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:23.143 10:44:42 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:23.143 10:44:42 rpc -- scripts/common.sh@353 -- # local d=2 00:04:23.143 10:44:42 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:23.143 10:44:42 rpc -- scripts/common.sh@355 -- # echo 2 00:04:23.143 10:44:42 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:23.143 10:44:42 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:23.143 10:44:42 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:23.143 10:44:42 rpc -- scripts/common.sh@368 -- # return 0 00:04:23.143 10:44:42 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:23.143 10:44:42 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:23.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.143 --rc genhtml_branch_coverage=1 00:04:23.143 --rc genhtml_function_coverage=1 00:04:23.143 --rc genhtml_legend=1 00:04:23.143 --rc geninfo_all_blocks=1 00:04:23.143 --rc geninfo_unexecuted_blocks=1 00:04:23.143 00:04:23.143 ' 00:04:23.143 10:44:42 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:23.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.143 --rc genhtml_branch_coverage=1 00:04:23.143 --rc genhtml_function_coverage=1 00:04:23.143 --rc genhtml_legend=1 00:04:23.143 --rc geninfo_all_blocks=1 00:04:23.143 --rc geninfo_unexecuted_blocks=1 00:04:23.143 00:04:23.143 ' 00:04:23.143 10:44:42 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:23.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.143 --rc genhtml_branch_coverage=1 00:04:23.143 --rc genhtml_function_coverage=1 00:04:23.143 --rc genhtml_legend=1 00:04:23.143 --rc geninfo_all_blocks=1 00:04:23.143 --rc geninfo_unexecuted_blocks=1 00:04:23.143 00:04:23.143 ' 00:04:23.143 10:44:42 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:23.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.143 --rc genhtml_branch_coverage=1 00:04:23.143 --rc genhtml_function_coverage=1 00:04:23.143 --rc genhtml_legend=1 00:04:23.143 --rc geninfo_all_blocks=1 00:04:23.143 --rc geninfo_unexecuted_blocks=1 00:04:23.143 00:04:23.143 ' 00:04:23.143 10:44:42 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1594425 00:04:23.143 10:44:42 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:23.143 10:44:42 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1594425 00:04:23.143 10:44:42 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:23.143 10:44:42 rpc -- common/autotest_common.sh@831 -- # '[' -z 1594425 ']' 00:04:23.143 10:44:42 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:23.143 10:44:42 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:23.143 10:44:42 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:23.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
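The rpc suite starts spdk_tgt with -e bdev so the bdev tracepoint group is enabled from boot (the spdk_trace snapshot hint in the startup notices depends on it), then waitforlisten blocks until the JSON-RPC socket at /var/tmp/spdk.sock answers. A rough manual equivalent, assuming the build tree shown in the trace:

    # Sketch: start the target with bdev tracepoints and poll its RPC socket
    ./build/bin/spdk_tgt -e bdev &
    until ./scripts/rpc.py spdk_get_version >/dev/null 2>&1; do sleep 0.5; done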
00:04:23.143 10:44:42 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:23.143 10:44:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:23.143 [2024-10-09 10:44:43.025378] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:04:23.143 [2024-10-09 10:44:43.025444] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1594425 ] 00:04:23.404 [2024-10-09 10:44:43.159418] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:04:23.404 [2024-10-09 10:44:43.192067] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:23.404 [2024-10-09 10:44:43.214394] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:23.404 [2024-10-09 10:44:43.214435] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1594425' to capture a snapshot of events at runtime. 00:04:23.404 [2024-10-09 10:44:43.214443] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:23.404 [2024-10-09 10:44:43.214450] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:23.404 [2024-10-09 10:44:43.214456] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1594425 for offline analysis/debug. 00:04:23.404 [2024-10-09 10:44:43.215148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:23.974 10:44:43 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:23.974 10:44:43 rpc -- common/autotest_common.sh@864 -- # return 0 00:04:23.974 10:44:43 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:23.974 10:44:43 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:23.974 10:44:43 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:23.974 10:44:43 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:23.974 10:44:43 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:23.974 10:44:43 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:23.974 10:44:43 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:23.974 ************************************ 00:04:23.974 START TEST rpc_integrity 00:04:23.974 ************************************ 00:04:23.974 10:44:43 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:23.974 10:44:43 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:23.974 10:44:43 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:23.974 10:44:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.974 10:44:43 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:23.974 10:44:43 
rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:23.974 10:44:43 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:23.974 10:44:43 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:23.974 10:44:43 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:23.974 10:44:43 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:23.974 10:44:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.974 10:44:43 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:23.974 10:44:43 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:23.974 10:44:43 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:23.974 10:44:43 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:23.974 10:44:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.974 10:44:43 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:23.974 10:44:43 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:23.974 { 00:04:23.974 "name": "Malloc0", 00:04:23.974 "aliases": [ 00:04:23.974 "e1df37ed-852c-4d32-92b0-5cc5e351bdfc" 00:04:23.974 ], 00:04:23.974 "product_name": "Malloc disk", 00:04:23.974 "block_size": 512, 00:04:23.974 "num_blocks": 16384, 00:04:23.974 "uuid": "e1df37ed-852c-4d32-92b0-5cc5e351bdfc", 00:04:23.974 "assigned_rate_limits": { 00:04:23.974 "rw_ios_per_sec": 0, 00:04:23.974 "rw_mbytes_per_sec": 0, 00:04:23.974 "r_mbytes_per_sec": 0, 00:04:23.974 "w_mbytes_per_sec": 0 00:04:23.974 }, 00:04:23.974 "claimed": false, 00:04:23.974 "zoned": false, 00:04:23.974 "supported_io_types": { 00:04:23.974 "read": true, 00:04:23.974 "write": true, 00:04:23.974 "unmap": true, 00:04:23.974 "flush": true, 00:04:23.974 "reset": true, 00:04:23.974 "nvme_admin": false, 00:04:23.974 "nvme_io": false, 00:04:23.974 "nvme_io_md": false, 00:04:23.974 "write_zeroes": true, 00:04:23.974 "zcopy": true, 00:04:23.974 "get_zone_info": false, 00:04:23.974 "zone_management": false, 00:04:23.974 "zone_append": false, 00:04:23.974 "compare": false, 00:04:23.974 "compare_and_write": false, 00:04:23.974 "abort": true, 00:04:23.974 "seek_hole": false, 00:04:23.974 "seek_data": false, 00:04:23.975 "copy": true, 00:04:23.975 "nvme_iov_md": false 00:04:23.975 }, 00:04:23.975 "memory_domains": [ 00:04:23.975 { 00:04:23.975 "dma_device_id": "system", 00:04:23.975 "dma_device_type": 1 00:04:23.975 }, 00:04:23.975 { 00:04:23.975 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:23.975 "dma_device_type": 2 00:04:23.975 } 00:04:23.975 ], 00:04:23.975 "driver_specific": {} 00:04:23.975 } 00:04:23.975 ]' 00:04:23.975 10:44:43 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:24.235 10:44:43 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:24.235 10:44:43 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:24.235 10:44:43 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:24.235 10:44:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.235 [2024-10-09 10:44:43.992312] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:24.235 [2024-10-09 10:44:43.992345] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:24.235 [2024-10-09 10:44:43.992358] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1372200 00:04:24.235 [2024-10-09 10:44:43.992365] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:24.235 [2024-10-09 10:44:43.993724] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:24.235 [2024-10-09 10:44:43.993745] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:24.235 Passthru0 00:04:24.235 10:44:43 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:24.235 10:44:43 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:24.235 10:44:43 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:24.235 10:44:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.235 10:44:44 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:24.235 10:44:44 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:24.235 { 00:04:24.235 "name": "Malloc0", 00:04:24.235 "aliases": [ 00:04:24.235 "e1df37ed-852c-4d32-92b0-5cc5e351bdfc" 00:04:24.235 ], 00:04:24.235 "product_name": "Malloc disk", 00:04:24.235 "block_size": 512, 00:04:24.235 "num_blocks": 16384, 00:04:24.235 "uuid": "e1df37ed-852c-4d32-92b0-5cc5e351bdfc", 00:04:24.235 "assigned_rate_limits": { 00:04:24.235 "rw_ios_per_sec": 0, 00:04:24.235 "rw_mbytes_per_sec": 0, 00:04:24.235 "r_mbytes_per_sec": 0, 00:04:24.235 "w_mbytes_per_sec": 0 00:04:24.235 }, 00:04:24.235 "claimed": true, 00:04:24.235 "claim_type": "exclusive_write", 00:04:24.235 "zoned": false, 00:04:24.235 "supported_io_types": { 00:04:24.235 "read": true, 00:04:24.235 "write": true, 00:04:24.235 "unmap": true, 00:04:24.235 "flush": true, 00:04:24.235 "reset": true, 00:04:24.235 "nvme_admin": false, 00:04:24.235 "nvme_io": false, 00:04:24.235 "nvme_io_md": false, 00:04:24.235 "write_zeroes": true, 00:04:24.235 "zcopy": true, 00:04:24.235 "get_zone_info": false, 00:04:24.235 "zone_management": false, 00:04:24.235 "zone_append": false, 00:04:24.235 "compare": false, 00:04:24.235 "compare_and_write": false, 00:04:24.235 "abort": true, 00:04:24.235 "seek_hole": false, 00:04:24.235 "seek_data": false, 00:04:24.235 "copy": true, 00:04:24.235 "nvme_iov_md": false 00:04:24.235 }, 00:04:24.235 "memory_domains": [ 00:04:24.235 { 00:04:24.235 "dma_device_id": "system", 00:04:24.235 "dma_device_type": 1 00:04:24.235 }, 00:04:24.235 { 00:04:24.235 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:24.235 "dma_device_type": 2 00:04:24.235 } 00:04:24.235 ], 00:04:24.235 "driver_specific": {} 00:04:24.235 }, 00:04:24.235 { 00:04:24.235 "name": "Passthru0", 00:04:24.235 "aliases": [ 00:04:24.235 "a504544e-5c00-5171-aade-b4a3250e71ec" 00:04:24.235 ], 00:04:24.235 "product_name": "passthru", 00:04:24.235 "block_size": 512, 00:04:24.235 "num_blocks": 16384, 00:04:24.235 "uuid": "a504544e-5c00-5171-aade-b4a3250e71ec", 00:04:24.235 "assigned_rate_limits": { 00:04:24.235 "rw_ios_per_sec": 0, 00:04:24.235 "rw_mbytes_per_sec": 0, 00:04:24.235 "r_mbytes_per_sec": 0, 00:04:24.235 "w_mbytes_per_sec": 0 00:04:24.235 }, 00:04:24.235 "claimed": false, 00:04:24.235 "zoned": false, 00:04:24.235 "supported_io_types": { 00:04:24.235 "read": true, 00:04:24.235 "write": true, 00:04:24.235 "unmap": true, 00:04:24.235 "flush": true, 00:04:24.235 "reset": true, 00:04:24.235 "nvme_admin": false, 00:04:24.235 "nvme_io": false, 00:04:24.235 "nvme_io_md": false, 00:04:24.235 "write_zeroes": true, 00:04:24.235 "zcopy": true, 00:04:24.235 "get_zone_info": false, 00:04:24.235 "zone_management": false, 00:04:24.235 "zone_append": false, 00:04:24.235 "compare": false, 00:04:24.235 "compare_and_write": 
false, 00:04:24.235 "abort": true, 00:04:24.235 "seek_hole": false, 00:04:24.235 "seek_data": false, 00:04:24.235 "copy": true, 00:04:24.235 "nvme_iov_md": false 00:04:24.235 }, 00:04:24.235 "memory_domains": [ 00:04:24.235 { 00:04:24.235 "dma_device_id": "system", 00:04:24.235 "dma_device_type": 1 00:04:24.235 }, 00:04:24.235 { 00:04:24.236 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:24.236 "dma_device_type": 2 00:04:24.236 } 00:04:24.236 ], 00:04:24.236 "driver_specific": { 00:04:24.236 "passthru": { 00:04:24.236 "name": "Passthru0", 00:04:24.236 "base_bdev_name": "Malloc0" 00:04:24.236 } 00:04:24.236 } 00:04:24.236 } 00:04:24.236 ]' 00:04:24.236 10:44:44 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:24.236 10:44:44 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:24.236 10:44:44 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:24.236 10:44:44 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:24.236 10:44:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.236 10:44:44 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:24.236 10:44:44 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:24.236 10:44:44 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:24.236 10:44:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.236 10:44:44 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:24.236 10:44:44 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:24.236 10:44:44 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:24.236 10:44:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.236 10:44:44 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:24.236 10:44:44 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:24.236 10:44:44 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:24.236 10:44:44 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:24.236 00:04:24.236 real 0m0.288s 00:04:24.236 user 0m0.185s 00:04:24.236 sys 0m0.041s 00:04:24.236 10:44:44 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:24.236 10:44:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.236 ************************************ 00:04:24.236 END TEST rpc_integrity 00:04:24.236 ************************************ 00:04:24.236 10:44:44 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:24.236 10:44:44 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:24.236 10:44:44 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:24.236 10:44:44 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:24.236 ************************************ 00:04:24.236 START TEST rpc_plugins 00:04:24.236 ************************************ 00:04:24.236 10:44:44 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:04:24.236 10:44:44 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:24.236 10:44:44 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:24.236 10:44:44 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:24.236 10:44:44 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:24.236 10:44:44 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:24.236 10:44:44 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # 
rpc_cmd bdev_get_bdevs 00:04:24.236 10:44:44 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:24.236 10:44:44 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:24.496 10:44:44 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:24.496 10:44:44 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:24.496 { 00:04:24.496 "name": "Malloc1", 00:04:24.496 "aliases": [ 00:04:24.496 "c6046dc1-93e1-499c-8f57-29422a5a7cdc" 00:04:24.496 ], 00:04:24.496 "product_name": "Malloc disk", 00:04:24.496 "block_size": 4096, 00:04:24.496 "num_blocks": 256, 00:04:24.496 "uuid": "c6046dc1-93e1-499c-8f57-29422a5a7cdc", 00:04:24.496 "assigned_rate_limits": { 00:04:24.496 "rw_ios_per_sec": 0, 00:04:24.496 "rw_mbytes_per_sec": 0, 00:04:24.496 "r_mbytes_per_sec": 0, 00:04:24.496 "w_mbytes_per_sec": 0 00:04:24.496 }, 00:04:24.496 "claimed": false, 00:04:24.496 "zoned": false, 00:04:24.496 "supported_io_types": { 00:04:24.496 "read": true, 00:04:24.496 "write": true, 00:04:24.496 "unmap": true, 00:04:24.496 "flush": true, 00:04:24.496 "reset": true, 00:04:24.496 "nvme_admin": false, 00:04:24.496 "nvme_io": false, 00:04:24.496 "nvme_io_md": false, 00:04:24.496 "write_zeroes": true, 00:04:24.496 "zcopy": true, 00:04:24.496 "get_zone_info": false, 00:04:24.496 "zone_management": false, 00:04:24.496 "zone_append": false, 00:04:24.496 "compare": false, 00:04:24.496 "compare_and_write": false, 00:04:24.496 "abort": true, 00:04:24.496 "seek_hole": false, 00:04:24.496 "seek_data": false, 00:04:24.496 "copy": true, 00:04:24.496 "nvme_iov_md": false 00:04:24.496 }, 00:04:24.496 "memory_domains": [ 00:04:24.496 { 00:04:24.496 "dma_device_id": "system", 00:04:24.496 "dma_device_type": 1 00:04:24.496 }, 00:04:24.496 { 00:04:24.497 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:24.497 "dma_device_type": 2 00:04:24.497 } 00:04:24.497 ], 00:04:24.497 "driver_specific": {} 00:04:24.497 } 00:04:24.497 ]' 00:04:24.497 10:44:44 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:24.497 10:44:44 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:24.497 10:44:44 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:24.497 10:44:44 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:24.497 10:44:44 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:24.497 10:44:44 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:24.497 10:44:44 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:24.497 10:44:44 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:24.497 10:44:44 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:24.497 10:44:44 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:24.497 10:44:44 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:24.497 10:44:44 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:24.497 10:44:44 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:24.497 00:04:24.497 real 0m0.145s 00:04:24.497 user 0m0.095s 00:04:24.497 sys 0m0.017s 00:04:24.497 10:44:44 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:24.497 10:44:44 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:24.497 ************************************ 00:04:24.497 END TEST rpc_plugins 00:04:24.497 ************************************ 00:04:24.497 10:44:44 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:24.497 10:44:44 rpc 
-- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:24.497 10:44:44 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:24.497 10:44:44 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:24.497 ************************************ 00:04:24.497 START TEST rpc_trace_cmd_test 00:04:24.497 ************************************ 00:04:24.497 10:44:44 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:04:24.497 10:44:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:24.497 10:44:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:24.497 10:44:44 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:24.497 10:44:44 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:24.497 10:44:44 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:24.497 10:44:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:24.497 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1594425", 00:04:24.497 "tpoint_group_mask": "0x8", 00:04:24.497 "iscsi_conn": { 00:04:24.497 "mask": "0x2", 00:04:24.497 "tpoint_mask": "0x0" 00:04:24.497 }, 00:04:24.497 "scsi": { 00:04:24.497 "mask": "0x4", 00:04:24.497 "tpoint_mask": "0x0" 00:04:24.497 }, 00:04:24.497 "bdev": { 00:04:24.497 "mask": "0x8", 00:04:24.497 "tpoint_mask": "0xffffffffffffffff" 00:04:24.497 }, 00:04:24.497 "nvmf_rdma": { 00:04:24.497 "mask": "0x10", 00:04:24.497 "tpoint_mask": "0x0" 00:04:24.497 }, 00:04:24.497 "nvmf_tcp": { 00:04:24.497 "mask": "0x20", 00:04:24.497 "tpoint_mask": "0x0" 00:04:24.497 }, 00:04:24.497 "ftl": { 00:04:24.497 "mask": "0x40", 00:04:24.497 "tpoint_mask": "0x0" 00:04:24.497 }, 00:04:24.497 "blobfs": { 00:04:24.497 "mask": "0x80", 00:04:24.497 "tpoint_mask": "0x0" 00:04:24.497 }, 00:04:24.497 "dsa": { 00:04:24.497 "mask": "0x200", 00:04:24.497 "tpoint_mask": "0x0" 00:04:24.497 }, 00:04:24.497 "thread": { 00:04:24.497 "mask": "0x400", 00:04:24.497 "tpoint_mask": "0x0" 00:04:24.497 }, 00:04:24.497 "nvme_pcie": { 00:04:24.497 "mask": "0x800", 00:04:24.497 "tpoint_mask": "0x0" 00:04:24.497 }, 00:04:24.497 "iaa": { 00:04:24.497 "mask": "0x1000", 00:04:24.497 "tpoint_mask": "0x0" 00:04:24.497 }, 00:04:24.497 "nvme_tcp": { 00:04:24.497 "mask": "0x2000", 00:04:24.497 "tpoint_mask": "0x0" 00:04:24.497 }, 00:04:24.497 "bdev_nvme": { 00:04:24.497 "mask": "0x4000", 00:04:24.497 "tpoint_mask": "0x0" 00:04:24.497 }, 00:04:24.497 "sock": { 00:04:24.497 "mask": "0x8000", 00:04:24.497 "tpoint_mask": "0x0" 00:04:24.497 }, 00:04:24.497 "blob": { 00:04:24.497 "mask": "0x10000", 00:04:24.497 "tpoint_mask": "0x0" 00:04:24.497 }, 00:04:24.497 "bdev_raid": { 00:04:24.497 "mask": "0x20000", 00:04:24.497 "tpoint_mask": "0x0" 00:04:24.497 }, 00:04:24.497 "scheduler": { 00:04:24.497 "mask": "0x40000", 00:04:24.497 "tpoint_mask": "0x0" 00:04:24.497 } 00:04:24.497 }' 00:04:24.497 10:44:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:24.759 10:44:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:24.759 10:44:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:24.759 10:44:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:24.759 10:44:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:24.759 10:44:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:24.759 10:44:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:24.759 10:44:44 
rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:24.759 10:44:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:24.759 10:44:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:24.759 00:04:24.759 real 0m0.229s 00:04:24.759 user 0m0.189s 00:04:24.759 sys 0m0.031s 00:04:24.759 10:44:44 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:24.759 10:44:44 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:24.759 ************************************ 00:04:24.759 END TEST rpc_trace_cmd_test 00:04:24.759 ************************************ 00:04:24.759 10:44:44 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:24.759 10:44:44 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:24.759 10:44:44 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:24.759 10:44:44 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:24.759 10:44:44 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:24.759 10:44:44 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:24.759 ************************************ 00:04:24.759 START TEST rpc_daemon_integrity 00:04:24.759 ************************************ 00:04:24.759 10:44:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:24.759 10:44:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:24.759 10:44:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:24.759 10:44:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.759 10:44:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:24.759 10:44:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:24.759 10:44:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:25.020 10:44:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:25.020 10:44:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:25.020 10:44:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:25.020 10:44:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:25.020 10:44:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:25.020 10:44:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:25.020 10:44:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:25.020 10:44:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:25.020 10:44:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:25.020 10:44:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:25.020 10:44:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:25.020 { 00:04:25.020 "name": "Malloc2", 00:04:25.020 "aliases": [ 00:04:25.020 "f25f0f03-91ec-456c-801e-bb8898f1e34d" 00:04:25.020 ], 00:04:25.020 "product_name": "Malloc disk", 00:04:25.020 "block_size": 512, 00:04:25.020 "num_blocks": 16384, 00:04:25.020 "uuid": "f25f0f03-91ec-456c-801e-bb8898f1e34d", 00:04:25.020 "assigned_rate_limits": { 00:04:25.020 "rw_ios_per_sec": 0, 00:04:25.020 "rw_mbytes_per_sec": 0, 00:04:25.020 "r_mbytes_per_sec": 0, 00:04:25.020 "w_mbytes_per_sec": 0 00:04:25.020 }, 00:04:25.020 "claimed": false, 00:04:25.020 "zoned": false, 00:04:25.020 "supported_io_types": { 00:04:25.020 
"read": true, 00:04:25.020 "write": true, 00:04:25.020 "unmap": true, 00:04:25.020 "flush": true, 00:04:25.020 "reset": true, 00:04:25.020 "nvme_admin": false, 00:04:25.020 "nvme_io": false, 00:04:25.020 "nvme_io_md": false, 00:04:25.020 "write_zeroes": true, 00:04:25.020 "zcopy": true, 00:04:25.020 "get_zone_info": false, 00:04:25.020 "zone_management": false, 00:04:25.020 "zone_append": false, 00:04:25.020 "compare": false, 00:04:25.020 "compare_and_write": false, 00:04:25.020 "abort": true, 00:04:25.020 "seek_hole": false, 00:04:25.020 "seek_data": false, 00:04:25.020 "copy": true, 00:04:25.020 "nvme_iov_md": false 00:04:25.020 }, 00:04:25.020 "memory_domains": [ 00:04:25.020 { 00:04:25.020 "dma_device_id": "system", 00:04:25.020 "dma_device_type": 1 00:04:25.020 }, 00:04:25.020 { 00:04:25.020 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:25.020 "dma_device_type": 2 00:04:25.020 } 00:04:25.020 ], 00:04:25.020 "driver_specific": {} 00:04:25.020 } 00:04:25.020 ]' 00:04:25.020 10:44:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:25.020 10:44:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:25.020 10:44:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:25.020 10:44:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:25.020 10:44:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:25.020 [2024-10-09 10:44:44.876656] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:25.020 [2024-10-09 10:44:44.876685] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:25.020 [2024-10-09 10:44:44.876701] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1375710 00:04:25.020 [2024-10-09 10:44:44.876708] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:25.020 [2024-10-09 10:44:44.877944] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:25.020 [2024-10-09 10:44:44.877965] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:25.020 Passthru0 00:04:25.020 10:44:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:25.020 10:44:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:25.020 10:44:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:25.020 10:44:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:25.020 10:44:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:25.020 10:44:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:25.020 { 00:04:25.020 "name": "Malloc2", 00:04:25.020 "aliases": [ 00:04:25.020 "f25f0f03-91ec-456c-801e-bb8898f1e34d" 00:04:25.020 ], 00:04:25.020 "product_name": "Malloc disk", 00:04:25.020 "block_size": 512, 00:04:25.020 "num_blocks": 16384, 00:04:25.020 "uuid": "f25f0f03-91ec-456c-801e-bb8898f1e34d", 00:04:25.020 "assigned_rate_limits": { 00:04:25.020 "rw_ios_per_sec": 0, 00:04:25.020 "rw_mbytes_per_sec": 0, 00:04:25.020 "r_mbytes_per_sec": 0, 00:04:25.020 "w_mbytes_per_sec": 0 00:04:25.020 }, 00:04:25.020 "claimed": true, 00:04:25.020 "claim_type": "exclusive_write", 00:04:25.020 "zoned": false, 00:04:25.020 "supported_io_types": { 00:04:25.020 "read": true, 00:04:25.020 "write": true, 00:04:25.020 "unmap": true, 00:04:25.020 "flush": true, 00:04:25.020 "reset": true, 
00:04:25.020 "nvme_admin": false, 00:04:25.020 "nvme_io": false, 00:04:25.020 "nvme_io_md": false, 00:04:25.020 "write_zeroes": true, 00:04:25.020 "zcopy": true, 00:04:25.020 "get_zone_info": false, 00:04:25.020 "zone_management": false, 00:04:25.020 "zone_append": false, 00:04:25.020 "compare": false, 00:04:25.020 "compare_and_write": false, 00:04:25.020 "abort": true, 00:04:25.020 "seek_hole": false, 00:04:25.020 "seek_data": false, 00:04:25.020 "copy": true, 00:04:25.020 "nvme_iov_md": false 00:04:25.020 }, 00:04:25.020 "memory_domains": [ 00:04:25.020 { 00:04:25.020 "dma_device_id": "system", 00:04:25.020 "dma_device_type": 1 00:04:25.020 }, 00:04:25.020 { 00:04:25.020 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:25.020 "dma_device_type": 2 00:04:25.020 } 00:04:25.020 ], 00:04:25.020 "driver_specific": {} 00:04:25.020 }, 00:04:25.020 { 00:04:25.020 "name": "Passthru0", 00:04:25.020 "aliases": [ 00:04:25.020 "dad3fc26-af78-57bd-95aa-c8794683e0f2" 00:04:25.020 ], 00:04:25.020 "product_name": "passthru", 00:04:25.020 "block_size": 512, 00:04:25.020 "num_blocks": 16384, 00:04:25.020 "uuid": "dad3fc26-af78-57bd-95aa-c8794683e0f2", 00:04:25.020 "assigned_rate_limits": { 00:04:25.020 "rw_ios_per_sec": 0, 00:04:25.020 "rw_mbytes_per_sec": 0, 00:04:25.020 "r_mbytes_per_sec": 0, 00:04:25.020 "w_mbytes_per_sec": 0 00:04:25.020 }, 00:04:25.020 "claimed": false, 00:04:25.020 "zoned": false, 00:04:25.020 "supported_io_types": { 00:04:25.020 "read": true, 00:04:25.020 "write": true, 00:04:25.020 "unmap": true, 00:04:25.020 "flush": true, 00:04:25.020 "reset": true, 00:04:25.020 "nvme_admin": false, 00:04:25.020 "nvme_io": false, 00:04:25.020 "nvme_io_md": false, 00:04:25.020 "write_zeroes": true, 00:04:25.020 "zcopy": true, 00:04:25.020 "get_zone_info": false, 00:04:25.020 "zone_management": false, 00:04:25.020 "zone_append": false, 00:04:25.020 "compare": false, 00:04:25.020 "compare_and_write": false, 00:04:25.020 "abort": true, 00:04:25.020 "seek_hole": false, 00:04:25.020 "seek_data": false, 00:04:25.020 "copy": true, 00:04:25.020 "nvme_iov_md": false 00:04:25.020 }, 00:04:25.020 "memory_domains": [ 00:04:25.020 { 00:04:25.020 "dma_device_id": "system", 00:04:25.020 "dma_device_type": 1 00:04:25.020 }, 00:04:25.020 { 00:04:25.020 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:25.020 "dma_device_type": 2 00:04:25.020 } 00:04:25.020 ], 00:04:25.020 "driver_specific": { 00:04:25.020 "passthru": { 00:04:25.020 "name": "Passthru0", 00:04:25.020 "base_bdev_name": "Malloc2" 00:04:25.020 } 00:04:25.020 } 00:04:25.020 } 00:04:25.020 ]' 00:04:25.020 10:44:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:25.020 10:44:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:25.020 10:44:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:25.020 10:44:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:25.020 10:44:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:25.020 10:44:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:25.020 10:44:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:25.020 10:44:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:25.020 10:44:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:25.020 10:44:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:25.021 10:44:44 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:25.021 10:44:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:25.021 10:44:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:25.021 10:44:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:25.021 10:44:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:25.021 10:44:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:25.282 10:44:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:25.282 00:04:25.282 real 0m0.299s 00:04:25.282 user 0m0.177s 00:04:25.282 sys 0m0.047s 00:04:25.282 10:44:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:25.282 10:44:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:25.282 ************************************ 00:04:25.282 END TEST rpc_daemon_integrity 00:04:25.282 ************************************ 00:04:25.282 10:44:45 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:25.282 10:44:45 rpc -- rpc/rpc.sh@84 -- # killprocess 1594425 00:04:25.282 10:44:45 rpc -- common/autotest_common.sh@950 -- # '[' -z 1594425 ']' 00:04:25.282 10:44:45 rpc -- common/autotest_common.sh@954 -- # kill -0 1594425 00:04:25.282 10:44:45 rpc -- common/autotest_common.sh@955 -- # uname 00:04:25.282 10:44:45 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:25.282 10:44:45 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1594425 00:04:25.282 10:44:45 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:25.282 10:44:45 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:25.282 10:44:45 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1594425' 00:04:25.282 killing process with pid 1594425 00:04:25.282 10:44:45 rpc -- common/autotest_common.sh@969 -- # kill 1594425 00:04:25.282 10:44:45 rpc -- common/autotest_common.sh@974 -- # wait 1594425 00:04:25.543 00:04:25.543 real 0m2.569s 00:04:25.543 user 0m3.224s 00:04:25.543 sys 0m0.733s 00:04:25.543 10:44:45 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:25.543 10:44:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:25.543 ************************************ 00:04:25.543 END TEST rpc 00:04:25.543 ************************************ 00:04:25.543 10:44:45 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:25.543 10:44:45 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:25.543 10:44:45 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:25.543 10:44:45 -- common/autotest_common.sh@10 -- # set +x 00:04:25.543 ************************************ 00:04:25.543 START TEST skip_rpc 00:04:25.543 ************************************ 00:04:25.543 10:44:45 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:25.543 * Looking for test storage... 
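Taken together, the four rpc sub-tests above reduce to a short series of RPCs against that single target. A condensed, hand-driven replay of the rpc_integrity sequence, using the same commands the trace issued (the jq length checks mirror the suite's '[' N == N ']' assertions):

    # Sketch: replay rpc_integrity against a running spdk_tgt
    MALLOC=$(./scripts/rpc.py bdev_malloc_create 8 512)   # 8 MiB malloc bdev, 512-byte blocks; prints its name
    ./scripts/rpc.py bdev_passthru_create -b "$MALLOC" -p Passthru0
    ./scripts/rpc.py bdev_get_bdevs | jq length           # expect 2: base bdev plus passthru
    ./scripts/rpc.py bdev_passthru_delete Passthru0
    ./scripts/rpc.py bdev_malloc_delete "$MALLOC"
    ./scripts/rpc.py bdev_get_bdevs | jq length           # expect 0 again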
00:04:25.543 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:25.543 10:44:45 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:25.543 10:44:45 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:25.543 10:44:45 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:25.804 10:44:45 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:25.804 10:44:45 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:25.804 10:44:45 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:25.804 10:44:45 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:25.804 10:44:45 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:25.804 10:44:45 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:25.804 10:44:45 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:25.804 10:44:45 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:25.804 10:44:45 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:25.804 10:44:45 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:25.804 10:44:45 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:25.804 10:44:45 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:25.804 10:44:45 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:25.804 10:44:45 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:25.804 10:44:45 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:25.804 10:44:45 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:25.804 10:44:45 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:25.804 10:44:45 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:25.804 10:44:45 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:25.804 10:44:45 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:25.804 10:44:45 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:25.804 10:44:45 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:25.804 10:44:45 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:25.804 10:44:45 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:25.804 10:44:45 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:25.804 10:44:45 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:25.804 10:44:45 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:25.804 10:44:45 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:25.804 10:44:45 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:25.804 10:44:45 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:25.804 10:44:45 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:25.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.804 --rc genhtml_branch_coverage=1 00:04:25.804 --rc genhtml_function_coverage=1 00:04:25.804 --rc genhtml_legend=1 00:04:25.804 --rc geninfo_all_blocks=1 00:04:25.804 --rc geninfo_unexecuted_blocks=1 00:04:25.804 00:04:25.804 ' 00:04:25.804 10:44:45 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:25.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.804 --rc genhtml_branch_coverage=1 00:04:25.804 --rc genhtml_function_coverage=1 00:04:25.804 --rc genhtml_legend=1 00:04:25.804 --rc geninfo_all_blocks=1 00:04:25.804 --rc geninfo_unexecuted_blocks=1 00:04:25.804 00:04:25.804 ' 00:04:25.804 10:44:45 skip_rpc -- common/autotest_common.sh@1705 -- # export 
'LCOV=lcov 00:04:25.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.804 --rc genhtml_branch_coverage=1 00:04:25.804 --rc genhtml_function_coverage=1 00:04:25.804 --rc genhtml_legend=1 00:04:25.804 --rc geninfo_all_blocks=1 00:04:25.804 --rc geninfo_unexecuted_blocks=1 00:04:25.804 00:04:25.804 ' 00:04:25.804 10:44:45 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:25.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.804 --rc genhtml_branch_coverage=1 00:04:25.804 --rc genhtml_function_coverage=1 00:04:25.804 --rc genhtml_legend=1 00:04:25.804 --rc geninfo_all_blocks=1 00:04:25.804 --rc geninfo_unexecuted_blocks=1 00:04:25.804 00:04:25.804 ' 00:04:25.804 10:44:45 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:25.804 10:44:45 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:25.804 10:44:45 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:25.804 10:44:45 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:25.804 10:44:45 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:25.804 10:44:45 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:25.804 ************************************ 00:04:25.804 START TEST skip_rpc 00:04:25.804 ************************************ 00:04:25.804 10:44:45 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:04:25.804 10:44:45 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1595267 00:04:25.804 10:44:45 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:25.804 10:44:45 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:25.804 10:44:45 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:25.804 [2024-10-09 10:44:45.708294] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:04:25.804 [2024-10-09 10:44:45.708342] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1595267 ] 00:04:26.071 [2024-10-09 10:44:45.839171] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
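With --no-rpc-server, the target started above (pid 1595267) comes up without ever opening /var/tmp/spdk.sock, so the next step is to prove that an RPC call fails rather than succeeds. The check is essentially the following, assuming the same tree:

    # Sketch: target runs, but RPC must be refused
    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    ./scripts/rpc.py spdk_get_version && echo 'unexpected: RPC answered' || echo 'RPC refused, as expected'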
00:04:26.071 [2024-10-09 10:44:45.870985] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:26.071 [2024-10-09 10:44:45.888890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.352 10:44:50 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:31.352 10:44:50 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:31.352 10:44:50 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:31.352 10:44:50 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:31.352 10:44:50 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:31.352 10:44:50 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:31.352 10:44:50 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:31.352 10:44:50 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:04:31.352 10:44:50 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:31.352 10:44:50 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:31.352 10:44:50 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:31.352 10:44:50 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:31.352 10:44:50 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:31.352 10:44:50 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:31.352 10:44:50 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:31.352 10:44:50 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:31.352 10:44:50 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1595267 00:04:31.352 10:44:50 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 1595267 ']' 00:04:31.352 10:44:50 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 1595267 00:04:31.352 10:44:50 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:04:31.352 10:44:50 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:31.352 10:44:50 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1595267 00:04:31.352 10:44:50 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:31.352 10:44:50 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:31.352 10:44:50 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1595267' 00:04:31.352 killing process with pid 1595267 00:04:31.352 10:44:50 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 1595267 00:04:31.352 10:44:50 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 1595267 00:04:31.352 00:04:31.352 real 0m5.275s 00:04:31.352 user 0m4.993s 00:04:31.352 sys 0m0.235s 00:04:31.352 10:44:50 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:31.352 10:44:50 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:31.352 ************************************ 00:04:31.352 END TEST skip_rpc 00:04:31.352 ************************************ 00:04:31.352 10:44:50 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:31.352 10:44:50 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:31.352 10:44:50 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 
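The NOT rpc_cmd spdk_get_version block above leans on the suite's exit-status inverter: the wrapped command must fail (here es=1) for the step to pass. Its shape is roughly the following sketch; the real helper lives in the suite's autotest_common.sh:

    # Sketch of the NOT helper: succeed only when the wrapped command fails
    NOT() { if "$@"; then return 1; else return 0; fi; }
    NOT ./scripts/rpc.py spdk_get_version   # passes while --no-rpc-server is in effect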
00:04:31.352 10:44:50 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:31.352 ************************************ 00:04:31.352 START TEST skip_rpc_with_json 00:04:31.352 ************************************ 00:04:31.352 10:44:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:04:31.352 10:44:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:31.352 10:44:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1596316 00:04:31.352 10:44:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:31.352 10:44:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1596316 00:04:31.352 10:44:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:31.352 10:44:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 1596316 ']' 00:04:31.352 10:44:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:31.352 10:44:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:31.352 10:44:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:31.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:31.352 10:44:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:31.352 10:44:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:31.352 [2024-10-09 10:44:51.066189] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:04:31.352 [2024-10-09 10:44:51.066237] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1596316 ] 00:04:31.352 [2024-10-09 10:44:51.197725] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
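skip_rpc_with_json then restarts the target (pid 1596316) with the RPC server enabled and drives a create-then-save sequence. nvmf_get_transports is called first precisely so its "No such device" error proves no TCP transport pre-existed; only then is one created and the full configuration dumped. In order, the RPCs the trace below issues:

    # Sketch: the with_json sequence
    ./scripts/rpc.py nvmf_get_transports --trtype tcp    # expected to fail: no TCP transport yet
    ./scripts/rpc.py nvmf_create_transport -t tcp        # logs '*** TCP Transport Init ***'
    ./scripts/rpc.py save_config > test/rpc/config.json  # the subsystem dump shown below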
00:04:31.352 [2024-10-09 10:44:51.228377] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:31.352 [2024-10-09 10:44:51.246061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:31.922 10:44:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:04:31.922 10:44:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0
00:04:31.922 10:44:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp
00:04:31.922 10:44:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:31.922 10:44:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:04:31.922 [2024-10-09 10:44:51.847147] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist
00:04:31.922 request:
00:04:31.922 {
00:04:31.922 "trtype": "tcp",
00:04:31.922 "method": "nvmf_get_transports",
00:04:31.922 "req_id": 1
00:04:31.922 }
00:04:31.922 Got JSON-RPC error response
00:04:31.922 response:
00:04:31.922 {
00:04:31.922 "code": -19,
00:04:31.922 "message": "No such device"
00:04:31.922 }
00:04:31.922 10:44:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:04:31.922 10:44:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp
00:04:31.922 10:44:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:31.922 10:44:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:04:31.922 [2024-10-09 10:44:51.859243] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:04:31.922 10:44:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:31.922 10:44:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config
00:04:31.922 10:44:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:31.922 10:44:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:04:32.183 10:44:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:32.183 10:44:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json
00:04:32.183 {
00:04:32.183 "subsystems": [
00:04:32.183 {
00:04:32.183 "subsystem": "fsdev",
00:04:32.183 "config": [
00:04:32.183 {
00:04:32.183 "method": "fsdev_set_opts",
00:04:32.183 "params": {
00:04:32.183 "fsdev_io_pool_size": 65535,
00:04:32.183 "fsdev_io_cache_size": 256
00:04:32.183 }
00:04:32.183 }
00:04:32.183 ]
00:04:32.183 },
00:04:32.183 {
00:04:32.183 "subsystem": "vfio_user_target",
00:04:32.183 "config": null
00:04:32.183 },
00:04:32.183 {
00:04:32.183 "subsystem": "keyring",
00:04:32.183 "config": []
00:04:32.183 },
00:04:32.183 {
00:04:32.183 "subsystem": "iobuf",
00:04:32.183 "config": [
00:04:32.183 {
00:04:32.183 "method": "iobuf_set_options",
00:04:32.183 "params": {
00:04:32.183 "small_pool_count": 8192,
00:04:32.183 "large_pool_count": 1024,
00:04:32.183 "small_bufsize": 8192,
00:04:32.183 "large_bufsize": 135168
00:04:32.183 }
00:04:32.183 }
00:04:32.183 ]
00:04:32.183 },
00:04:32.183 {
00:04:32.183 "subsystem": "sock",
00:04:32.183 "config": [
00:04:32.183 {
00:04:32.183 "method": "sock_set_default_impl",
00:04:32.183 "params": {
00:04:32.183 "impl_name": "posix"
00:04:32.183 }
00:04:32.183 },
00:04:32.183 {
00:04:32.183 "method": "sock_impl_set_options",
00:04:32.183 "params": {
00:04:32.183 "impl_name": "ssl",
00:04:32.183 "recv_buf_size": 4096,
00:04:32.183 "send_buf_size": 4096,
00:04:32.183 "enable_recv_pipe": true,
00:04:32.183 "enable_quickack": false,
00:04:32.183 "enable_placement_id": 0,
00:04:32.183 "enable_zerocopy_send_server": true,
00:04:32.183 "enable_zerocopy_send_client": false,
00:04:32.183 "zerocopy_threshold": 0,
00:04:32.183 "tls_version": 0,
00:04:32.183 "enable_ktls": false
00:04:32.183 }
00:04:32.183 },
00:04:32.183 {
00:04:32.183 "method": "sock_impl_set_options",
00:04:32.183 "params": {
00:04:32.183 "impl_name": "posix",
00:04:32.183 "recv_buf_size": 2097152,
00:04:32.183 "send_buf_size": 2097152,
00:04:32.183 "enable_recv_pipe": true,
00:04:32.183 "enable_quickack": false,
00:04:32.183 "enable_placement_id": 0,
00:04:32.183 "enable_zerocopy_send_server": true,
00:04:32.183 "enable_zerocopy_send_client": false,
00:04:32.183 "zerocopy_threshold": 0,
00:04:32.183 "tls_version": 0,
00:04:32.183 "enable_ktls": false
00:04:32.183 }
00:04:32.183 }
00:04:32.183 ]
00:04:32.183 },
00:04:32.183 {
00:04:32.183 "subsystem": "vmd",
00:04:32.183 "config": []
00:04:32.183 },
00:04:32.183 {
00:04:32.183 "subsystem": "accel",
00:04:32.183 "config": [
00:04:32.183 {
00:04:32.183 "method": "accel_set_options",
00:04:32.183 "params": {
00:04:32.183 "small_cache_size": 128,
00:04:32.183 "large_cache_size": 16,
00:04:32.183 "task_count": 2048,
00:04:32.183 "sequence_count": 2048,
00:04:32.183 "buf_count": 2048
00:04:32.183 }
00:04:32.183 }
00:04:32.183 ]
00:04:32.183 },
00:04:32.183 {
00:04:32.183 "subsystem": "bdev",
00:04:32.183 "config": [
00:04:32.183 {
00:04:32.183 "method": "bdev_set_options",
00:04:32.183 "params": {
00:04:32.183 "bdev_io_pool_size": 65535,
00:04:32.183 "bdev_io_cache_size": 256,
00:04:32.183 "bdev_auto_examine": true,
00:04:32.183 "iobuf_small_cache_size": 128,
00:04:32.183 "iobuf_large_cache_size": 16
00:04:32.183 }
00:04:32.183 },
00:04:32.183 {
00:04:32.183 "method": "bdev_raid_set_options",
00:04:32.183 "params": {
00:04:32.183 "process_window_size_kb": 1024,
00:04:32.183 "process_max_bandwidth_mb_sec": 0
00:04:32.183 }
00:04:32.183 },
00:04:32.183 {
00:04:32.183 "method": "bdev_iscsi_set_options",
00:04:32.183 "params": {
00:04:32.183 "timeout_sec": 30
00:04:32.183 }
00:04:32.183 },
00:04:32.183 {
00:04:32.183 "method": "bdev_nvme_set_options",
00:04:32.183 "params": {
00:04:32.183 "action_on_timeout": "none",
00:04:32.183 "timeout_us": 0,
00:04:32.183 "timeout_admin_us": 0,
00:04:32.183 "keep_alive_timeout_ms": 10000,
00:04:32.183 "arbitration_burst": 0,
00:04:32.183 "low_priority_weight": 0,
00:04:32.183 "medium_priority_weight": 0,
00:04:32.183 "high_priority_weight": 0,
00:04:32.183 "nvme_adminq_poll_period_us": 10000,
00:04:32.183 "nvme_ioq_poll_period_us": 0,
00:04:32.183 "io_queue_requests": 0,
00:04:32.183 "delay_cmd_submit": true,
00:04:32.183 "transport_retry_count": 4,
00:04:32.183 "bdev_retry_count": 3,
00:04:32.183 "transport_ack_timeout": 0,
00:04:32.183 "ctrlr_loss_timeout_sec": 0,
00:04:32.183 "reconnect_delay_sec": 0,
00:04:32.183 "fast_io_fail_timeout_sec": 0,
00:04:32.183 "disable_auto_failback": false,
00:04:32.183 "generate_uuids": false,
00:04:32.183 "transport_tos": 0,
00:04:32.183 "nvme_error_stat": false,
00:04:32.183 "rdma_srq_size": 0,
00:04:32.183 "io_path_stat": false,
00:04:32.183 "allow_accel_sequence": false,
00:04:32.183 "rdma_max_cq_size": 0,
00:04:32.183 "rdma_cm_event_timeout_ms": 0,
00:04:32.183 "dhchap_digests": [
00:04:32.183 "sha256",
00:04:32.183 "sha384",
00:04:32.183 "sha512"
00:04:32.183 ],
00:04:32.183 "dhchap_dhgroups": [
00:04:32.183 "null",
00:04:32.183 "ffdhe2048",
00:04:32.183 "ffdhe3072",
00:04:32.183 "ffdhe4096",
00:04:32.183 "ffdhe6144",
00:04:32.183 "ffdhe8192"
00:04:32.183 ]
00:04:32.183 }
00:04:32.183 },
00:04:32.183 {
00:04:32.183 "method": "bdev_nvme_set_hotplug",
00:04:32.183 "params": {
00:04:32.183 "period_us": 100000,
00:04:32.183 "enable": false
00:04:32.183 }
00:04:32.183 },
00:04:32.183 {
00:04:32.183 "method": "bdev_wait_for_examine"
00:04:32.183 }
00:04:32.183 ]
00:04:32.183 },
00:04:32.183 {
00:04:32.183 "subsystem": "scsi",
00:04:32.183 "config": null
00:04:32.183 },
00:04:32.183 {
00:04:32.183 "subsystem": "scheduler",
00:04:32.183 "config": [
00:04:32.183 {
00:04:32.183 "method": "framework_set_scheduler",
00:04:32.183 "params": {
00:04:32.183 "name": "static"
00:04:32.183 }
00:04:32.183 }
00:04:32.183 ]
00:04:32.183 },
00:04:32.183 {
00:04:32.183 "subsystem": "vhost_scsi",
00:04:32.183 "config": []
00:04:32.183 },
00:04:32.183 {
00:04:32.184 "subsystem": "vhost_blk",
00:04:32.184 "config": []
00:04:32.184 },
00:04:32.184 {
00:04:32.184 "subsystem": "ublk",
00:04:32.184 "config": []
00:04:32.184 },
00:04:32.184 {
00:04:32.184 "subsystem": "nbd",
00:04:32.184 "config": []
00:04:32.184 },
00:04:32.184 {
00:04:32.184 "subsystem": "nvmf",
00:04:32.184 "config": [
00:04:32.184 {
00:04:32.184 "method": "nvmf_set_config",
00:04:32.184 "params": {
00:04:32.184 "discovery_filter": "match_any",
00:04:32.184 "admin_cmd_passthru": {
00:04:32.184 "identify_ctrlr": false
00:04:32.184 },
00:04:32.184 "dhchap_digests": [
00:04:32.184 "sha256",
00:04:32.184 "sha384",
00:04:32.184 "sha512"
00:04:32.184 ],
00:04:32.184 "dhchap_dhgroups": [
00:04:32.184 "null",
00:04:32.184 "ffdhe2048",
00:04:32.184 "ffdhe3072",
00:04:32.184 "ffdhe4096",
00:04:32.184 "ffdhe6144",
00:04:32.184 "ffdhe8192"
00:04:32.184 ]
00:04:32.184 }
00:04:32.184 },
00:04:32.184 {
00:04:32.184 "method": "nvmf_set_max_subsystems",
00:04:32.184 "params": {
00:04:32.184 "max_subsystems": 1024
00:04:32.184 }
00:04:32.184 },
00:04:32.184 {
00:04:32.184 "method": "nvmf_set_crdt",
00:04:32.184 "params": {
00:04:32.184 "crdt1": 0,
00:04:32.184 "crdt2": 0,
00:04:32.184 "crdt3": 0
00:04:32.184 }
00:04:32.184 },
00:04:32.184 {
00:04:32.184 "method": "nvmf_create_transport",
00:04:32.184 "params": {
00:04:32.184 "trtype": "TCP",
00:04:32.184 "max_queue_depth": 128,
00:04:32.184 "max_io_qpairs_per_ctrlr": 127,
00:04:32.184 "in_capsule_data_size": 4096,
00:04:32.184 "max_io_size": 131072,
00:04:32.184 "io_unit_size": 131072,
00:04:32.184 "max_aq_depth": 128,
00:04:32.184 "num_shared_buffers": 511,
00:04:32.184 "buf_cache_size": 4294967295,
00:04:32.184 "dif_insert_or_strip": false,
00:04:32.184 "zcopy": false,
00:04:32.184 "c2h_success": true,
00:04:32.184 "sock_priority": 0,
00:04:32.184 "abort_timeout_sec": 1,
00:04:32.184 "ack_timeout": 0,
00:04:32.184 "data_wr_pool_size": 0
00:04:32.184 }
00:04:32.184 }
00:04:32.184 ]
00:04:32.184 },
00:04:32.184 {
00:04:32.184 "subsystem": "iscsi",
00:04:32.184 "config": [
00:04:32.184 {
00:04:32.184 "method": "iscsi_set_options",
00:04:32.184 "params": {
00:04:32.184 "node_base": "iqn.2016-06.io.spdk",
00:04:32.184 "max_sessions": 128,
00:04:32.184 "max_connections_per_session": 2,
00:04:32.184 "max_queue_depth": 64,
00:04:32.184 "default_time2wait": 2,
00:04:32.184 "default_time2retain": 20,
00:04:32.184 "first_burst_length": 8192,
00:04:32.184 "immediate_data": true,
00:04:32.184 "allow_duplicated_isid": false,
00:04:32.184 "error_recovery_level": 0,
00:04:32.184 "nop_timeout": 60,
00:04:32.184 "nop_in_interval": 30,
00:04:32.184 "disable_chap": false,
00:04:32.184 "require_chap": false,
00:04:32.184 "mutual_chap": false,
00:04:32.184 "chap_group": 0,
00:04:32.184 "max_large_datain_per_connection": 64,
00:04:32.184 "max_r2t_per_connection": 4,
00:04:32.184 "pdu_pool_size": 36864,
00:04:32.184 "immediate_data_pool_size": 16384,
00:04:32.184 "data_out_pool_size": 2048
00:04:32.184 }
00:04:32.184 }
00:04:32.184 ]
00:04:32.184 }
00:04:32.184 ]
00:04:32.184 }
00:04:32.184 10:44:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT
00:04:32.184 10:44:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1596316
00:04:32.184 10:44:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 1596316 ']'
00:04:32.184 10:44:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 1596316
00:04:32.184 10:44:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname
00:04:32.184 10:44:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:04:32.184 10:44:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1596316
00:04:32.184 10:44:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:04:32.184 10:44:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:04:32.184 10:44:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1596316'
killing process with pid 1596316
00:04:32.184 10:44:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 1596316
00:04:32.184 10:44:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 1596316
00:04:32.445 10:44:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1596652
00:04:32.445 10:44:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5
00:04:32.445 10:44:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json
00:04:37.730 10:44:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1596652
00:04:37.730 10:44:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 1596652 ']'
00:04:37.730 10:44:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 1596652
00:04:37.730 10:44:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname
00:04:37.730 10:44:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:04:37.730 10:44:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1596652
00:04:37.730 10:44:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:04:37.730 10:44:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:04:37.730 10:44:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1596652'
killing process with pid 1596652
00:04:37.730 10:44:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 1596652
00:04:37.730 10:44:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 1596652
00:04:37.730 10:44:57
skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:37.730 10:44:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:37.730 00:04:37.730 real 0m6.557s 00:04:37.730 user 0m6.286s 00:04:37.730 sys 0m0.532s 00:04:37.730 10:44:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:37.730 10:44:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:37.730 ************************************ 00:04:37.730 END TEST skip_rpc_with_json 00:04:37.730 ************************************ 00:04:37.730 10:44:57 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:37.730 10:44:57 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:37.730 10:44:57 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:37.730 10:44:57 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.730 ************************************ 00:04:37.730 START TEST skip_rpc_with_delay 00:04:37.730 ************************************ 00:04:37.730 10:44:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:04:37.730 10:44:57 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:37.730 10:44:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:04:37.730 10:44:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:37.730 10:44:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:37.730 10:44:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:37.730 10:44:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:37.730 10:44:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:37.730 10:44:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:37.730 10:44:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:37.730 10:44:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:37.730 10:44:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:37.730 10:44:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:37.730 [2024-10-09 10:44:57.707373] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
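The skip_rpc_with_delay case above is a negative test: spdk_tgt is launched with '--no-rpc-server' and '--wait-for-rpc' together and must refuse to start. A minimal sketch of that shape (the NOT/valid_exec_arg wrappers in the trace are SPDK test helpers; the bare 'if' below is a simplification):

    # Expect a non-zero exit: --wait-for-rpc is rejected when --no-rpc-server is set.
    if ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo "FAIL: incompatible flags were accepted" >&2
        exit 1
    fi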
00:04:37.730 10:44:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:04:37.730 10:44:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:37.730 10:44:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:37.730 10:44:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:37.730 00:04:37.730 real 0m0.077s 00:04:37.730 user 0m0.052s 00:04:37.730 sys 0m0.025s 00:04:37.730 10:44:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:37.730 10:44:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:37.730 ************************************ 00:04:37.730 END TEST skip_rpc_with_delay 00:04:37.730 ************************************ 00:04:37.990 10:44:57 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:37.990 10:44:57 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:37.990 10:44:57 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:37.990 10:44:57 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:37.990 10:44:57 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:37.990 10:44:57 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.990 ************************************ 00:04:37.990 START TEST exit_on_failed_rpc_init 00:04:37.990 ************************************ 00:04:37.990 10:44:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:04:37.990 10:44:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1597717 00:04:37.990 10:44:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1597717 00:04:37.990 10:44:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:37.990 10:44:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 1597717 ']' 00:04:37.990 10:44:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:37.990 10:44:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:37.990 10:44:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:37.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:37.990 10:44:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:37.990 10:44:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:37.990 [2024-10-09 10:44:57.864178] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:04:37.990 [2024-10-09 10:44:57.864227] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1597717 ] 00:04:38.250 [2024-10-09 10:44:57.996032] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:04:38.250 [2024-10-09 10:44:58.027194] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.250 [2024-10-09 10:44:58.045382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.821 10:44:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:38.821 10:44:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:04:38.821 10:44:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:38.821 10:44:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:38.821 10:44:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:04:38.821 10:44:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:38.821 10:44:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:38.821 10:44:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:38.821 10:44:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:38.821 10:44:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:38.821 10:44:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:38.821 10:44:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:38.821 10:44:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:38.821 10:44:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:38.821 10:44:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:38.821 [2024-10-09 10:44:58.689959] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:04:38.821 [2024-10-09 10:44:58.690012] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1597872 ] 00:04:38.821 [2024-10-09 10:44:58.822198] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:04:39.081 [2024-10-09 10:44:58.869694] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:39.081 [2024-10-09 10:44:58.887705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:39.081 [2024-10-09 10:44:58.887753] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
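The "socket in use" failure that follows is the point of exit_on_failed_rpc_init: a second target cannot bind the RPC socket the first one already owns. A rough sketch of the collision, assuming both instances default to /var/tmp/spdk.sock (the real test uses waitforlisten rather than a fixed sleep):

    ./build/bin/spdk_tgt -m 0x1 &    # first instance binds /var/tmp/spdk.sock
    first_pid=$!
    sleep 1                          # crude settle; see waitforlisten in the trace
    ./build/bin/spdk_tgt -m 0x2 \
        && echo "FAIL: second instance should not have started" >&2
    kill -SIGINT "$first_pid"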
00:04:39.081 [2024-10-09 10:44:58.887762] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:39.081 [2024-10-09 10:44:58.887768] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:39.081 10:44:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:04:39.081 10:44:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:39.081 10:44:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:04:39.081 10:44:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:04:39.081 10:44:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:04:39.081 10:44:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:39.081 10:44:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:39.081 10:44:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1597717 00:04:39.081 10:44:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 1597717 ']' 00:04:39.081 10:44:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 1597717 00:04:39.081 10:44:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:04:39.081 10:44:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:39.081 10:44:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1597717 00:04:39.081 10:44:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:39.081 10:44:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:39.081 10:44:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1597717' 00:04:39.081 killing process with pid 1597717 00:04:39.081 10:44:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 1597717 00:04:39.081 10:44:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 1597717 00:04:39.341 00:04:39.341 real 0m1.372s 00:04:39.341 user 0m1.458s 00:04:39.341 sys 0m0.366s 00:04:39.341 10:44:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:39.341 10:44:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:39.341 ************************************ 00:04:39.341 END TEST exit_on_failed_rpc_init 00:04:39.341 ************************************ 00:04:39.341 10:44:59 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:39.341 00:04:39.341 real 0m13.814s 00:04:39.341 user 0m13.035s 00:04:39.341 sys 0m1.473s 00:04:39.341 10:44:59 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:39.341 10:44:59 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.341 ************************************ 00:04:39.341 END TEST skip_rpc 00:04:39.341 ************************************ 00:04:39.341 10:44:59 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:39.341 10:44:59 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:39.341 10:44:59 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:39.341 10:44:59 -- 
common/autotest_common.sh@10 -- # set +x 00:04:39.341 ************************************ 00:04:39.341 START TEST rpc_client 00:04:39.341 ************************************ 00:04:39.341 10:44:59 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:39.601 * Looking for test storage... 00:04:39.601 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:39.601 10:44:59 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:39.601 10:44:59 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:04:39.601 10:44:59 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:39.601 10:44:59 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:39.602 10:44:59 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:39.602 10:44:59 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:39.602 10:44:59 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:39.602 10:44:59 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:39.602 10:44:59 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:39.602 10:44:59 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:39.602 10:44:59 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:39.602 10:44:59 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:39.602 10:44:59 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:39.602 10:44:59 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:39.602 10:44:59 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:39.602 10:44:59 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:39.602 10:44:59 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:39.602 10:44:59 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:39.602 10:44:59 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:39.602 10:44:59 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:39.602 10:44:59 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:39.602 10:44:59 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:39.602 10:44:59 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:39.602 10:44:59 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:39.602 10:44:59 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:39.602 10:44:59 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:39.602 10:44:59 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:39.602 10:44:59 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:39.602 10:44:59 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:39.602 10:44:59 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:39.602 10:44:59 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:39.602 10:44:59 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:39.602 10:44:59 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:39.602 10:44:59 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:39.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.602 --rc genhtml_branch_coverage=1 00:04:39.602 --rc genhtml_function_coverage=1 00:04:39.602 --rc genhtml_legend=1 00:04:39.602 --rc geninfo_all_blocks=1 00:04:39.602 --rc geninfo_unexecuted_blocks=1 00:04:39.602 00:04:39.602 ' 00:04:39.602 10:44:59 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:39.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.602 --rc genhtml_branch_coverage=1 00:04:39.602 --rc genhtml_function_coverage=1 00:04:39.602 --rc genhtml_legend=1 00:04:39.602 --rc geninfo_all_blocks=1 00:04:39.602 --rc geninfo_unexecuted_blocks=1 00:04:39.602 00:04:39.602 ' 00:04:39.602 10:44:59 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:39.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.602 --rc genhtml_branch_coverage=1 00:04:39.602 --rc genhtml_function_coverage=1 00:04:39.602 --rc genhtml_legend=1 00:04:39.602 --rc geninfo_all_blocks=1 00:04:39.602 --rc geninfo_unexecuted_blocks=1 00:04:39.602 00:04:39.602 ' 00:04:39.602 10:44:59 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:39.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.602 --rc genhtml_branch_coverage=1 00:04:39.602 --rc genhtml_function_coverage=1 00:04:39.602 --rc genhtml_legend=1 00:04:39.602 --rc geninfo_all_blocks=1 00:04:39.602 --rc geninfo_unexecuted_blocks=1 00:04:39.602 00:04:39.602 ' 00:04:39.602 10:44:59 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:39.602 OK 00:04:39.602 10:44:59 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:39.602 00:04:39.602 real 0m0.225s 00:04:39.602 user 0m0.130s 00:04:39.602 sys 0m0.109s 00:04:39.602 10:44:59 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:39.602 10:44:59 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:39.602 ************************************ 00:04:39.602 END TEST rpc_client 00:04:39.602 ************************************ 00:04:39.602 10:44:59 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
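The lcov gate replayed at length above reduces to a field-wise compare of dot-separated version strings. A compact sketch in the spirit of the scripts/common.sh helpers traced here, not a verbatim copy:

    lt() {    # succeed when version $1 sorts strictly before version $2
        local -a a b
        IFS=.- read -ra a <<< "$1"
        IFS=.- read -ra b <<< "$2"
        local i
        for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1    # equal versions are not "less than"
    }
    lt 1.15 2 && echo "lcov is pre-2.x: add the branch/function coverage flags"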
00:04:39.602 10:44:59 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:39.602 10:44:59 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:39.602 10:44:59 -- common/autotest_common.sh@10 -- # set +x 00:04:39.602 ************************************ 00:04:39.602 START TEST json_config 00:04:39.602 ************************************ 00:04:39.602 10:44:59 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:39.863 10:44:59 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:39.863 10:44:59 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:39.863 10:44:59 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:04:39.863 10:44:59 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:39.863 10:44:59 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:39.863 10:44:59 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:39.863 10:44:59 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:39.863 10:44:59 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:39.863 10:44:59 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:39.863 10:44:59 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:39.863 10:44:59 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:39.863 10:44:59 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:39.863 10:44:59 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:39.863 10:44:59 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:39.863 10:44:59 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:39.863 10:44:59 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:39.863 10:44:59 json_config -- scripts/common.sh@345 -- # : 1 00:04:39.863 10:44:59 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:39.863 10:44:59 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:39.863 10:44:59 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:39.863 10:44:59 json_config -- scripts/common.sh@353 -- # local d=1 00:04:39.863 10:44:59 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:39.863 10:44:59 json_config -- scripts/common.sh@355 -- # echo 1 00:04:39.863 10:44:59 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:39.863 10:44:59 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:39.863 10:44:59 json_config -- scripts/common.sh@353 -- # local d=2 00:04:39.863 10:44:59 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:39.863 10:44:59 json_config -- scripts/common.sh@355 -- # echo 2 00:04:39.863 10:44:59 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:39.863 10:44:59 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:39.863 10:44:59 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:39.863 10:44:59 json_config -- scripts/common.sh@368 -- # return 0 00:04:39.863 10:44:59 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:39.863 10:44:59 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:39.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.863 --rc genhtml_branch_coverage=1 00:04:39.863 --rc genhtml_function_coverage=1 00:04:39.863 --rc genhtml_legend=1 00:04:39.863 --rc geninfo_all_blocks=1 00:04:39.863 --rc geninfo_unexecuted_blocks=1 00:04:39.863 00:04:39.863 ' 00:04:39.863 10:44:59 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:39.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.863 --rc genhtml_branch_coverage=1 00:04:39.863 --rc genhtml_function_coverage=1 00:04:39.863 --rc genhtml_legend=1 00:04:39.863 --rc geninfo_all_blocks=1 00:04:39.863 --rc geninfo_unexecuted_blocks=1 00:04:39.863 00:04:39.863 ' 00:04:39.863 10:44:59 json_config -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:39.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.863 --rc genhtml_branch_coverage=1 00:04:39.863 --rc genhtml_function_coverage=1 00:04:39.863 --rc genhtml_legend=1 00:04:39.863 --rc geninfo_all_blocks=1 00:04:39.863 --rc geninfo_unexecuted_blocks=1 00:04:39.863 00:04:39.863 ' 00:04:39.863 10:44:59 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:39.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.863 --rc genhtml_branch_coverage=1 00:04:39.863 --rc genhtml_function_coverage=1 00:04:39.863 --rc genhtml_legend=1 00:04:39.863 --rc geninfo_all_blocks=1 00:04:39.863 --rc geninfo_unexecuted_blocks=1 00:04:39.863 00:04:39.863 ' 00:04:39.863 10:44:59 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:39.863 10:44:59 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:39.863 10:44:59 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:39.863 10:44:59 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:39.863 10:44:59 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:39.863 10:44:59 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:39.863 10:44:59 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:39.863 10:44:59 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:39.863 10:44:59 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:04:39.863 10:44:59 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:39.863 10:44:59 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:39.863 10:44:59 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:39.863 10:44:59 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:04:39.863 10:44:59 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:04:39.863 10:44:59 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:39.863 10:44:59 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:39.863 10:44:59 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:39.863 10:44:59 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:39.863 10:44:59 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:39.863 10:44:59 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:39.863 10:44:59 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:39.863 10:44:59 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:39.863 10:44:59 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:39.863 10:44:59 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:39.863 10:44:59 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:39.863 10:44:59 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:39.863 10:44:59 json_config -- paths/export.sh@5 -- # export PATH 00:04:39.863 10:44:59 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:39.863 10:44:59 json_config -- nvmf/common.sh@51 -- # : 0 00:04:39.863 10:44:59 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:39.863 10:44:59 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
00:04:39.863 10:44:59 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:39.863 10:44:59 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:39.863 10:44:59 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:39.863 10:44:59 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:39.863 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:39.863 10:44:59 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:39.863 10:44:59 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:39.863 10:44:59 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:39.863 10:44:59 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:39.863 10:44:59 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:39.863 10:44:59 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:39.863 10:44:59 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:39.863 10:44:59 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:39.863 10:44:59 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:39.863 10:44:59 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:39.863 10:44:59 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:39.863 10:44:59 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:39.863 10:44:59 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:39.863 10:44:59 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:39.863 10:44:59 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:39.863 10:44:59 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:39.863 10:44:59 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:39.863 10:44:59 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:39.863 10:44:59 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:39.863 INFO: JSON configuration test init 00:04:39.863 10:44:59 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:39.863 10:44:59 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:39.863 10:44:59 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:39.863 10:44:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:39.863 10:44:59 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:39.863 10:44:59 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:39.864 10:44:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:39.864 10:44:59 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:39.864 10:44:59 json_config -- 
json_config/common.sh@9 -- # local app=target 00:04:39.864 10:44:59 json_config -- json_config/common.sh@10 -- # shift 00:04:39.864 10:44:59 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:39.864 10:44:59 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:39.864 10:44:59 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:39.864 10:44:59 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:39.864 10:44:59 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:39.864 10:44:59 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1598194 00:04:39.864 10:44:59 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:39.864 Waiting for target to run... 00:04:39.864 10:44:59 json_config -- json_config/common.sh@25 -- # waitforlisten 1598194 /var/tmp/spdk_tgt.sock 00:04:39.864 10:44:59 json_config -- common/autotest_common.sh@831 -- # '[' -z 1598194 ']' 00:04:39.864 10:44:59 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:39.864 10:44:59 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:39.864 10:44:59 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:39.864 10:44:59 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:39.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:39.864 10:44:59 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:39.864 10:44:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:40.124 [2024-10-09 10:44:59.873905] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:04:40.124 [2024-10-09 10:44:59.873958] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1598194 ] 00:04:40.384 [2024-10-09 10:45:00.245669] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
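Because the target here is launched with --wait-for-rpc, framework initialization pauses until configuration arrives over the RPC socket; the test then streams the saved JSON back in with load_config (visible shortly after this). A sketch of that handshake, with the retry bound and the /tmp/config.json input as assumptions:

    ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
    for (( i = 0; i < 100; i++ )); do    # poll until the Unix socket answers
        ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock spdk_get_version >/dev/null 2>&1 && break
        sleep 0.1
    done
    # load_config reads a JSON config from stdin and applies it; placeholder input here.
    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config < /tmp/config.json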
00:04:40.384 [2024-10-09 10:45:00.280953] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.384 [2024-10-09 10:45:00.297063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.956 10:45:00 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:40.956 10:45:00 json_config -- common/autotest_common.sh@864 -- # return 0 00:04:40.956 10:45:00 json_config -- json_config/common.sh@26 -- # echo '' 00:04:40.956 00:04:40.956 10:45:00 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:40.956 10:45:00 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:40.956 10:45:00 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:40.956 10:45:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:40.956 10:45:00 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:40.956 10:45:00 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:40.956 10:45:00 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:40.956 10:45:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:40.956 10:45:00 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:40.956 10:45:00 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:40.956 10:45:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:41.527 10:45:01 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:04:41.527 10:45:01 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:41.527 10:45:01 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:41.527 10:45:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:41.527 10:45:01 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:41.527 10:45:01 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:41.527 10:45:01 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:41.527 10:45:01 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:41.527 10:45:01 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:41.527 10:45:01 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:41.527 10:45:01 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:41.527 10:45:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:41.527 10:45:01 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:41.527 10:45:01 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:41.527 10:45:01 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:41.527 10:45:01 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:41.527 10:45:01 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:41.527 10:45:01 json_config -- json_config/json_config.sh@54 -- # sort 
00:04:41.527 10:45:01 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:41.527 10:45:01 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:04:41.527 10:45:01 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:41.527 10:45:01 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:41.527 10:45:01 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:41.527 10:45:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:41.527 10:45:01 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:41.527 10:45:01 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:41.527 10:45:01 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:41.527 10:45:01 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:41.527 10:45:01 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:41.527 10:45:01 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:41.527 10:45:01 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:41.527 10:45:01 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:41.527 10:45:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:41.527 10:45:01 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:41.527 10:45:01 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:04:41.527 10:45:01 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:04:41.527 10:45:01 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:41.527 10:45:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:41.788 MallocForNvmf0 00:04:41.788 10:45:01 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:41.788 10:45:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:42.049 MallocForNvmf1 00:04:42.049 10:45:01 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:42.049 10:45:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:42.049 [2024-10-09 10:45:02.015557] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:42.049 10:45:02 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:42.049 10:45:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:42.310 10:45:02 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:42.310 10:45:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:42.570 
10:45:02 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:42.570 10:45:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:42.831 10:45:02 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:42.831 10:45:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:42.831 [2024-10-09 10:45:02.732162] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:42.831 10:45:02 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:42.831 10:45:02 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:42.831 10:45:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:42.831 10:45:02 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:42.831 10:45:02 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:42.831 10:45:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:43.091 10:45:02 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:04:43.091 10:45:02 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:43.091 10:45:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:43.091 MallocBdevForConfigChangeCheck 00:04:43.091 10:45:03 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:43.091 10:45:03 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:43.091 10:45:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:43.091 10:45:03 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:43.091 10:45:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:43.662 10:45:03 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:04:43.662 INFO: shutting down applications... 
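For reference, the nvmf configuration assembled in the preceding steps maps onto plain rpc.py calls; every command and argument below is taken from the trace, only the $R shorthand is added:

    R="./scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    $R bdev_malloc_create 8 512 --name MallocForNvmf0
    $R bdev_malloc_create 4 1024 --name MallocForNvmf1
    $R nvmf_create_transport -t tcp -u 8192 -c 0
    $R nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $R nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $R nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $R nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420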
00:04:43.662 10:45:03 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:43.662 10:45:03 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:43.662 10:45:03 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:43.662 10:45:03 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:43.922 Calling clear_iscsi_subsystem 00:04:43.922 Calling clear_nvmf_subsystem 00:04:43.922 Calling clear_nbd_subsystem 00:04:43.922 Calling clear_ublk_subsystem 00:04:43.922 Calling clear_vhost_blk_subsystem 00:04:43.922 Calling clear_vhost_scsi_subsystem 00:04:43.922 Calling clear_bdev_subsystem 00:04:43.922 10:45:03 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:43.922 10:45:03 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:43.922 10:45:03 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:43.922 10:45:03 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:43.922 10:45:03 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:43.922 10:45:03 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:44.492 10:45:04 json_config -- json_config/json_config.sh@352 -- # break 00:04:44.492 10:45:04 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:44.492 10:45:04 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:04:44.492 10:45:04 json_config -- json_config/common.sh@31 -- # local app=target 00:04:44.492 10:45:04 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:44.492 10:45:04 json_config -- json_config/common.sh@35 -- # [[ -n 1598194 ]] 00:04:44.492 10:45:04 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1598194 00:04:44.492 10:45:04 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:44.492 10:45:04 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:44.492 10:45:04 json_config -- json_config/common.sh@41 -- # kill -0 1598194 00:04:44.492 10:45:04 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:44.753 10:45:04 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:44.753 10:45:04 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:44.753 10:45:04 json_config -- json_config/common.sh@41 -- # kill -0 1598194 00:04:44.753 10:45:04 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:44.753 10:45:04 json_config -- json_config/common.sh@43 -- # break 00:04:44.753 10:45:04 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:44.753 10:45:04 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:44.753 SPDK target shutdown done 00:04:44.753 10:45:04 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:04:44.753 INFO: relaunching applications... 
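The shutdown just logged is the usual SIGINT-then-poll pattern: kill -0 probes the pid every half second, up to 30 tries, before the test gives up. As a sketch:

    kill -SIGINT "$app_pid"
    for (( i = 0; i < 30; i++ )); do
        kill -0 "$app_pid" 2>/dev/null || break    # process gone: clean exit
        sleep 0.5
    done
    echo 'SPDK target shutdown done'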
00:04:44.753 10:45:04 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:44.753 10:45:04 json_config -- json_config/common.sh@9 -- # local app=target 00:04:44.753 10:45:04 json_config -- json_config/common.sh@10 -- # shift 00:04:44.753 10:45:04 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:44.753 10:45:04 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:44.753 10:45:04 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:44.753 10:45:04 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:44.753 10:45:04 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:44.753 10:45:04 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1599439 00:04:44.753 10:45:04 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:44.753 Waiting for target to run... 00:04:44.753 10:45:04 json_config -- json_config/common.sh@25 -- # waitforlisten 1599439 /var/tmp/spdk_tgt.sock 00:04:44.753 10:45:04 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:44.753 10:45:04 json_config -- common/autotest_common.sh@831 -- # '[' -z 1599439 ']' 00:04:44.753 10:45:04 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:44.753 10:45:04 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:44.753 10:45:04 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:44.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:44.753 10:45:04 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:44.753 10:45:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:45.015 [2024-10-09 10:45:04.771066] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:04:45.015 [2024-10-09 10:45:04.771142] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1599439 ] 00:04:45.275 [2024-10-09 10:45:05.251575] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:04:45.536 [2024-10-09 10:45:05.287742] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.536 [2024-10-09 10:45:05.304332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.796 [2024-10-09 10:45:05.787486] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:46.055 [2024-10-09 10:45:05.819803] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:46.055 10:45:05 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:46.055 10:45:05 json_config -- common/autotest_common.sh@864 -- # return 0 00:04:46.055 10:45:05 json_config -- json_config/common.sh@26 -- # echo '' 00:04:46.055 00:04:46.055 10:45:05 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:46.055 10:45:05 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:46.055 INFO: Checking if target configuration is the same... 00:04:46.055 10:45:05 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:46.055 10:45:05 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:46.055 10:45:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:46.055 + '[' 2 -ne 2 ']' 00:04:46.055 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:46.055 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:46.055 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:46.055 +++ basename /dev/fd/62 00:04:46.055 ++ mktemp /tmp/62.XXX 00:04:46.055 + tmp_file_1=/tmp/62.lKx 00:04:46.055 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:46.055 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:46.055 + tmp_file_2=/tmp/spdk_tgt_config.json.PsN 00:04:46.055 + ret=0 00:04:46.055 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:46.315 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:46.315 + diff -u /tmp/62.lKx /tmp/spdk_tgt_config.json.PsN 00:04:46.315 + echo 'INFO: JSON config files are the same' 00:04:46.315 INFO: JSON config files are the same 00:04:46.315 + rm /tmp/62.lKx /tmp/spdk_tgt_config.json.PsN 00:04:46.315 + exit 0 00:04:46.315 10:45:06 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:46.315 10:45:06 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:46.315 INFO: changing configuration and checking if this can be detected... 
00:04:46.315 10:45:06 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:46.315 10:45:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:46.575 10:45:06 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:46.575 10:45:06 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:46.575 10:45:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:46.575 + '[' 2 -ne 2 ']' 00:04:46.575 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:46.575 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:46.575 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:46.575 +++ basename /dev/fd/62 00:04:46.575 ++ mktemp /tmp/62.XXX 00:04:46.575 + tmp_file_1=/tmp/62.s71 00:04:46.575 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:46.575 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:46.575 + tmp_file_2=/tmp/spdk_tgt_config.json.asB 00:04:46.575 + ret=0 00:04:46.575 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:46.835 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:46.835 + diff -u /tmp/62.s71 /tmp/spdk_tgt_config.json.asB 00:04:46.835 + ret=1 00:04:46.835 + echo '=== Start of file: /tmp/62.s71 ===' 00:04:46.835 + cat /tmp/62.s71 00:04:46.835 + echo '=== End of file: /tmp/62.s71 ===' 00:04:46.835 + echo '' 00:04:46.835 + echo '=== Start of file: /tmp/spdk_tgt_config.json.asB ===' 00:04:46.835 + cat /tmp/spdk_tgt_config.json.asB 00:04:46.835 + echo '=== End of file: /tmp/spdk_tgt_config.json.asB ===' 00:04:46.835 + echo '' 00:04:46.835 + rm /tmp/62.s71 /tmp/spdk_tgt_config.json.asB 00:04:46.835 + exit 1 00:04:46.835 10:45:06 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:04:46.835 INFO: configuration change detected. 
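The comparison above works by saving the live configuration, normalizing both JSON documents with config_filter.py -method sort, and running diff -u on the results; an empty diff means the running target still matches the file, and deleting MallocBdevForConfigChangeCheck is how the harness forces a detectable difference. A rough order-insensitive equivalent using only the standard Python json.tool is sketched below (file names are placeholders, and unlike the harness filter this sorts object keys but not array elements):

# Compare two SPDK JSON configs ignoring key order (sketch).
normalize() { python3 -m json.tool --sort-keys "$1"; }
if diff -u <(normalize /tmp/saved.json) <(normalize spdk_tgt_config.json) >/dev/null; then
    echo 'INFO: JSON config files are the same'
else
    echo 'INFO: configuration change detected.'
fi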
00:04:46.835 10:45:06 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:46.835 10:45:06 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:46.835 10:45:06 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:46.835 10:45:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:46.835 10:45:06 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:46.835 10:45:06 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:46.835 10:45:06 json_config -- json_config/json_config.sh@324 -- # [[ -n 1599439 ]] 00:04:46.835 10:45:06 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:46.835 10:45:06 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:46.835 10:45:06 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:46.835 10:45:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:46.835 10:45:06 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:46.835 10:45:06 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:46.835 10:45:06 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:46.835 10:45:06 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:46.835 10:45:06 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:46.835 10:45:06 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:46.835 10:45:06 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:46.835 10:45:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:47.095 10:45:06 json_config -- json_config/json_config.sh@330 -- # killprocess 1599439 00:04:47.095 10:45:06 json_config -- common/autotest_common.sh@950 -- # '[' -z 1599439 ']' 00:04:47.095 10:45:06 json_config -- common/autotest_common.sh@954 -- # kill -0 1599439 00:04:47.095 10:45:06 json_config -- common/autotest_common.sh@955 -- # uname 00:04:47.095 10:45:06 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:47.095 10:45:06 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1599439 00:04:47.095 10:45:06 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:47.095 10:45:06 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:47.095 10:45:06 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1599439' 00:04:47.095 killing process with pid 1599439 00:04:47.095 10:45:06 json_config -- common/autotest_common.sh@969 -- # kill 1599439 00:04:47.095 10:45:06 json_config -- common/autotest_common.sh@974 -- # wait 1599439 00:04:47.356 10:45:07 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:47.356 10:45:07 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:47.356 10:45:07 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:47.356 10:45:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:47.356 10:45:07 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:47.356 10:45:07 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:47.356 INFO: Success 00:04:47.356 00:04:47.356 real 0m7.638s 
00:04:47.356 user 0m8.920s 00:04:47.356 sys 0m2.079s 00:04:47.356 10:45:07 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:47.356 10:45:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:47.356 ************************************ 00:04:47.356 END TEST json_config 00:04:47.356 ************************************ 00:04:47.356 10:45:07 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:47.356 10:45:07 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:47.356 10:45:07 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:47.356 10:45:07 -- common/autotest_common.sh@10 -- # set +x 00:04:47.356 ************************************ 00:04:47.356 START TEST json_config_extra_key 00:04:47.356 ************************************ 00:04:47.356 10:45:07 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:47.617 10:45:07 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:47.617 10:45:07 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov --version 00:04:47.617 10:45:07 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:47.617 10:45:07 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:47.617 10:45:07 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:47.617 10:45:07 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:47.617 10:45:07 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:47.617 10:45:07 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:47.617 10:45:07 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:47.617 10:45:07 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:47.617 10:45:07 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:47.617 10:45:07 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:47.617 10:45:07 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:47.617 10:45:07 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:47.617 10:45:07 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:47.617 10:45:07 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:47.617 10:45:07 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:47.617 10:45:07 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:47.617 10:45:07 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:47.617 10:45:07 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:47.617 10:45:07 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:47.617 10:45:07 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:47.617 10:45:07 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:47.618 10:45:07 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:47.618 10:45:07 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:47.618 10:45:07 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:47.618 10:45:07 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:47.618 10:45:07 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:47.618 10:45:07 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:47.618 10:45:07 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:47.618 10:45:07 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:47.618 10:45:07 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:47.618 10:45:07 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:47.618 10:45:07 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:47.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.618 --rc genhtml_branch_coverage=1 00:04:47.618 --rc genhtml_function_coverage=1 00:04:47.618 --rc genhtml_legend=1 00:04:47.618 --rc geninfo_all_blocks=1 00:04:47.618 --rc geninfo_unexecuted_blocks=1 00:04:47.618 00:04:47.618 ' 00:04:47.618 10:45:07 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:47.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.618 --rc genhtml_branch_coverage=1 00:04:47.618 --rc genhtml_function_coverage=1 00:04:47.618 --rc genhtml_legend=1 00:04:47.618 --rc geninfo_all_blocks=1 00:04:47.618 --rc geninfo_unexecuted_blocks=1 00:04:47.618 00:04:47.618 ' 00:04:47.618 10:45:07 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:47.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.618 --rc genhtml_branch_coverage=1 00:04:47.618 --rc genhtml_function_coverage=1 00:04:47.618 --rc genhtml_legend=1 00:04:47.618 --rc geninfo_all_blocks=1 00:04:47.618 --rc geninfo_unexecuted_blocks=1 00:04:47.618 00:04:47.618 ' 00:04:47.618 10:45:07 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:47.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.618 --rc genhtml_branch_coverage=1 00:04:47.618 --rc genhtml_function_coverage=1 00:04:47.618 --rc genhtml_legend=1 00:04:47.618 --rc geninfo_all_blocks=1 00:04:47.618 --rc geninfo_unexecuted_blocks=1 00:04:47.618 00:04:47.618 ' 00:04:47.618 10:45:07 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:47.618 10:45:07 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:47.618 10:45:07 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:47.618 10:45:07 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:47.618 10:45:07 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:47.618 10:45:07 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:47.618 
10:45:07 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:47.618 10:45:07 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:47.618 10:45:07 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:47.618 10:45:07 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:47.618 10:45:07 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:47.618 10:45:07 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:47.618 10:45:07 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:04:47.618 10:45:07 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:04:47.618 10:45:07 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:47.618 10:45:07 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:47.618 10:45:07 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:47.618 10:45:07 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:47.618 10:45:07 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:47.618 10:45:07 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:47.618 10:45:07 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:47.618 10:45:07 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:47.618 10:45:07 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:47.618 10:45:07 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.618 10:45:07 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.618 10:45:07 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.618 10:45:07 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:47.618 10:45:07 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.618 10:45:07 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:47.618 10:45:07 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:47.618 10:45:07 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:47.618 10:45:07 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:47.618 10:45:07 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:47.618 10:45:07 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:47.618 10:45:07 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:47.618 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:47.618 10:45:07 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:47.618 10:45:07 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:47.618 10:45:07 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:47.618 10:45:07 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:47.618 10:45:07 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:47.618 10:45:07 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:47.618 10:45:07 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:47.618 10:45:07 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:47.618 10:45:07 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:47.618 10:45:07 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:47.618 10:45:07 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:47.618 10:45:07 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:47.618 10:45:07 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:47.618 10:45:07 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:47.618 INFO: launching applications... 
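One real defect surfaces in the trace above: nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']' because the variable it tests expanded to the empty string, which is what produces the captured stderr line "[: : integer expression expected". A defensive form of that kind of numeric test is sketched below; SOME_FLAG is a placeholder name, not the actual variable from common.sh.

# Guard a numeric test against empty/unset values (sketch).
if [[ "${SOME_FLAG:-0}" -eq 1 ]]; then
    echo 'flag enabled'
fi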
00:04:47.618 10:45:07 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:47.618 10:45:07 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:47.618 10:45:07 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:47.618 10:45:07 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:47.618 10:45:07 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:47.618 10:45:07 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:47.618 10:45:07 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:47.618 10:45:07 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:47.618 10:45:07 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1600227 00:04:47.618 10:45:07 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:47.618 Waiting for target to run... 00:04:47.618 10:45:07 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1600227 /var/tmp/spdk_tgt.sock 00:04:47.618 10:45:07 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 1600227 ']' 00:04:47.618 10:45:07 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:47.618 10:45:07 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:47.618 10:45:07 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:47.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:47.618 10:45:07 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:47.618 10:45:07 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:47.618 10:45:07 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:47.618 [2024-10-09 10:45:07.569475] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:04:47.618 [2024-10-09 10:45:07.569553] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1600227 ] 00:04:48.189 [2024-10-09 10:45:07.928931] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:04:48.189 [2024-10-09 10:45:07.964226] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.189 [2024-10-09 10:45:07.979190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.449 10:45:08 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:48.449 10:45:08 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:04:48.449 10:45:08 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:48.449 00:04:48.449 10:45:08 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
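waitforlisten, used throughout these tests, blocks until the freshly launched spdk_tgt is actually answering RPCs on its UNIX domain socket. A simplified sketch of the idea: poll until the socket exists and a trivial RPC succeeds (rpc_get_methods is always served, as the method dump later in this log shows). The retry count and sleep interval here are illustrative, not the harness defaults.

# Wait until an SPDK app answers RPCs on its socket (sketch).
waitforlisten_sketch() {
    local sock=${1:-/var/tmp/spdk_tgt.sock} retries=100
    while (( retries-- > 0 )); do
        [[ -S "$sock" ]] &&
            scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1 && return 0
        sleep 0.1
    done
    return 1
}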
00:04:48.449 INFO: shutting down applications... 00:04:48.449 10:45:08 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:48.449 10:45:08 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:48.449 10:45:08 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:48.449 10:45:08 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1600227 ]] 00:04:48.449 10:45:08 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1600227 00:04:48.449 10:45:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:48.449 10:45:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:48.449 10:45:08 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1600227 00:04:48.449 10:45:08 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:49.108 10:45:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:49.108 10:45:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:49.108 10:45:08 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1600227 00:04:49.108 10:45:08 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:49.108 10:45:08 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:49.108 10:45:08 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:49.108 10:45:08 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:49.108 SPDK target shutdown done 00:04:49.108 10:45:08 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:49.108 Success 00:04:49.108 00:04:49.108 real 0m1.539s 00:04:49.108 user 0m1.049s 00:04:49.108 sys 0m0.415s 00:04:49.108 10:45:08 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:49.108 10:45:08 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:49.108 ************************************ 00:04:49.108 END TEST json_config_extra_key 00:04:49.108 ************************************ 00:04:49.108 10:45:08 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:49.108 10:45:08 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:49.108 10:45:08 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:49.108 10:45:08 -- common/autotest_common.sh@10 -- # set +x 00:04:49.108 ************************************ 00:04:49.108 START TEST alias_rpc 00:04:49.108 ************************************ 00:04:49.108 10:45:08 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:49.108 * Looking for test storage... 
00:04:49.108 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:49.108 10:45:09 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:49.108 10:45:09 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:49.108 10:45:09 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:49.368 10:45:09 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:49.368 10:45:09 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:49.368 10:45:09 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:49.368 10:45:09 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:49.368 10:45:09 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:49.368 10:45:09 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:49.368 10:45:09 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:49.368 10:45:09 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:49.368 10:45:09 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:49.368 10:45:09 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:49.368 10:45:09 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:49.368 10:45:09 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:49.368 10:45:09 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:49.368 10:45:09 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:49.368 10:45:09 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:49.368 10:45:09 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:49.368 10:45:09 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:49.368 10:45:09 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:49.368 10:45:09 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:49.368 10:45:09 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:49.368 10:45:09 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:49.368 10:45:09 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:49.368 10:45:09 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:49.368 10:45:09 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:49.368 10:45:09 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:49.368 10:45:09 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:49.368 10:45:09 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:49.368 10:45:09 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:49.368 10:45:09 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:49.368 10:45:09 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:49.368 10:45:09 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:49.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.368 --rc genhtml_branch_coverage=1 00:04:49.368 --rc genhtml_function_coverage=1 00:04:49.368 --rc genhtml_legend=1 00:04:49.368 --rc geninfo_all_blocks=1 00:04:49.368 --rc geninfo_unexecuted_blocks=1 00:04:49.368 00:04:49.368 ' 00:04:49.368 10:45:09 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:49.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.368 --rc genhtml_branch_coverage=1 00:04:49.368 --rc genhtml_function_coverage=1 00:04:49.368 --rc genhtml_legend=1 00:04:49.368 --rc geninfo_all_blocks=1 00:04:49.368 --rc geninfo_unexecuted_blocks=1 00:04:49.368 00:04:49.368 ' 00:04:49.368 10:45:09 
alias_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:49.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.368 --rc genhtml_branch_coverage=1 00:04:49.368 --rc genhtml_function_coverage=1 00:04:49.368 --rc genhtml_legend=1 00:04:49.368 --rc geninfo_all_blocks=1 00:04:49.368 --rc geninfo_unexecuted_blocks=1 00:04:49.368 00:04:49.368 ' 00:04:49.368 10:45:09 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:49.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.368 --rc genhtml_branch_coverage=1 00:04:49.368 --rc genhtml_function_coverage=1 00:04:49.368 --rc genhtml_legend=1 00:04:49.368 --rc geninfo_all_blocks=1 00:04:49.368 --rc geninfo_unexecuted_blocks=1 00:04:49.368 00:04:49.368 ' 00:04:49.368 10:45:09 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:49.368 10:45:09 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1600843 00:04:49.368 10:45:09 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1600843 00:04:49.368 10:45:09 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:49.368 10:45:09 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 1600843 ']' 00:04:49.368 10:45:09 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:49.368 10:45:09 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:49.368 10:45:09 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:49.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:49.368 10:45:09 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:49.368 10:45:09 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.368 [2024-10-09 10:45:09.200267] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:04:49.369 [2024-10-09 10:45:09.200336] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1600843 ] 00:04:49.369 [2024-10-09 10:45:09.334931] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:04:49.369 [2024-10-09 10:45:09.367191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.629 [2024-10-09 10:45:09.385416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.199 10:45:10 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:50.199 10:45:10 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:04:50.199 10:45:10 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:50.460 10:45:10 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1600843 00:04:50.460 10:45:10 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 1600843 ']' 00:04:50.460 10:45:10 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 1600843 00:04:50.460 10:45:10 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:04:50.460 10:45:10 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:50.460 10:45:10 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1600843 00:04:50.460 10:45:10 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:50.460 10:45:10 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:50.460 10:45:10 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1600843' 00:04:50.460 killing process with pid 1600843 00:04:50.460 10:45:10 alias_rpc -- common/autotest_common.sh@969 -- # kill 1600843 00:04:50.460 10:45:10 alias_rpc -- common/autotest_common.sh@974 -- # wait 1600843 00:04:50.721 00:04:50.721 real 0m1.526s 00:04:50.721 user 0m1.593s 00:04:50.721 sys 0m0.415s 00:04:50.721 10:45:10 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:50.721 10:45:10 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.721 ************************************ 00:04:50.721 END TEST alias_rpc 00:04:50.721 ************************************ 00:04:50.721 10:45:10 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:50.721 10:45:10 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:50.721 10:45:10 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:50.721 10:45:10 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:50.721 10:45:10 -- common/autotest_common.sh@10 -- # set +x 00:04:50.721 ************************************ 00:04:50.721 START TEST spdkcli_tcp 00:04:50.721 ************************************ 00:04:50.721 10:45:10 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:50.721 * Looking for test storage... 
00:04:50.721 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:50.721 10:45:10 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:50.721 10:45:10 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:04:50.721 10:45:10 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:50.721 10:45:10 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:50.721 10:45:10 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:50.721 10:45:10 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:50.721 10:45:10 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:50.721 10:45:10 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:50.721 10:45:10 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:50.721 10:45:10 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:50.721 10:45:10 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:50.721 10:45:10 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:50.721 10:45:10 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:50.721 10:45:10 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:50.721 10:45:10 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:50.721 10:45:10 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:50.721 10:45:10 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:50.721 10:45:10 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:50.983 10:45:10 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:50.984 10:45:10 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:50.984 10:45:10 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:50.984 10:45:10 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:50.984 10:45:10 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:50.984 10:45:10 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:50.984 10:45:10 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:50.984 10:45:10 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:50.984 10:45:10 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:50.984 10:45:10 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:50.984 10:45:10 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:50.984 10:45:10 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:50.984 10:45:10 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:50.984 10:45:10 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:50.984 10:45:10 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:50.984 10:45:10 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:50.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.984 --rc genhtml_branch_coverage=1 00:04:50.984 --rc genhtml_function_coverage=1 00:04:50.984 --rc genhtml_legend=1 00:04:50.984 --rc geninfo_all_blocks=1 00:04:50.984 --rc geninfo_unexecuted_blocks=1 00:04:50.984 00:04:50.984 ' 00:04:50.984 10:45:10 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:50.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.984 --rc genhtml_branch_coverage=1 00:04:50.984 --rc genhtml_function_coverage=1 00:04:50.984 --rc genhtml_legend=1 00:04:50.984 --rc geninfo_all_blocks=1 00:04:50.984 --rc 
geninfo_unexecuted_blocks=1 00:04:50.984 00:04:50.984 ' 00:04:50.984 10:45:10 spdkcli_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:50.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.984 --rc genhtml_branch_coverage=1 00:04:50.984 --rc genhtml_function_coverage=1 00:04:50.984 --rc genhtml_legend=1 00:04:50.984 --rc geninfo_all_blocks=1 00:04:50.984 --rc geninfo_unexecuted_blocks=1 00:04:50.984 00:04:50.984 ' 00:04:50.984 10:45:10 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:50.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.984 --rc genhtml_branch_coverage=1 00:04:50.984 --rc genhtml_function_coverage=1 00:04:50.984 --rc genhtml_legend=1 00:04:50.984 --rc geninfo_all_blocks=1 00:04:50.984 --rc geninfo_unexecuted_blocks=1 00:04:50.984 00:04:50.984 ' 00:04:50.984 10:45:10 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:50.984 10:45:10 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:50.984 10:45:10 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:50.984 10:45:10 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:50.984 10:45:10 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:50.984 10:45:10 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:50.984 10:45:10 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:50.984 10:45:10 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:50.984 10:45:10 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:50.984 10:45:10 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1601456 00:04:50.984 10:45:10 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1601456 00:04:50.984 10:45:10 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:50.984 10:45:10 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 1601456 ']' 00:04:50.984 10:45:10 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:50.984 10:45:10 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:50.984 10:45:10 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:50.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:50.984 10:45:10 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:50.984 10:45:10 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:50.984 [2024-10-09 10:45:10.807906] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:04:50.984 [2024-10-09 10:45:10.807978] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1601456 ] 00:04:50.984 [2024-10-09 10:45:10.942515] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:04:50.984 [2024-10-09 10:45:10.976968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:51.245 [2024-10-09 10:45:11.001790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:51.245 [2024-10-09 10:45:11.001791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.816 10:45:11 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:51.816 10:45:11 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:04:51.816 10:45:11 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1601485 00:04:51.816 10:45:11 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:51.816 10:45:11 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:51.816 [ 00:04:51.816 "bdev_malloc_delete", 00:04:51.816 "bdev_malloc_create", 00:04:51.816 "bdev_null_resize", 00:04:51.816 "bdev_null_delete", 00:04:51.816 "bdev_null_create", 00:04:51.816 "bdev_nvme_cuse_unregister", 00:04:51.816 "bdev_nvme_cuse_register", 00:04:51.816 "bdev_opal_new_user", 00:04:51.816 "bdev_opal_set_lock_state", 00:04:51.816 "bdev_opal_delete", 00:04:51.816 "bdev_opal_get_info", 00:04:51.816 "bdev_opal_create", 00:04:51.816 "bdev_nvme_opal_revert", 00:04:51.816 "bdev_nvme_opal_init", 00:04:51.816 "bdev_nvme_send_cmd", 00:04:51.816 "bdev_nvme_set_keys", 00:04:51.816 "bdev_nvme_get_path_iostat", 00:04:51.816 "bdev_nvme_get_mdns_discovery_info", 00:04:51.816 "bdev_nvme_stop_mdns_discovery", 00:04:51.816 "bdev_nvme_start_mdns_discovery", 00:04:51.816 "bdev_nvme_set_multipath_policy", 00:04:51.816 "bdev_nvme_set_preferred_path", 00:04:51.816 "bdev_nvme_get_io_paths", 00:04:51.816 "bdev_nvme_remove_error_injection", 00:04:51.816 "bdev_nvme_add_error_injection", 00:04:51.816 "bdev_nvme_get_discovery_info", 00:04:51.816 "bdev_nvme_stop_discovery", 00:04:51.816 "bdev_nvme_start_discovery", 00:04:51.816 "bdev_nvme_get_controller_health_info", 00:04:51.816 "bdev_nvme_disable_controller", 00:04:51.816 "bdev_nvme_enable_controller", 00:04:51.816 "bdev_nvme_reset_controller", 00:04:51.816 "bdev_nvme_get_transport_statistics", 00:04:51.816 "bdev_nvme_apply_firmware", 00:04:51.816 "bdev_nvme_detach_controller", 00:04:51.816 "bdev_nvme_get_controllers", 00:04:51.816 "bdev_nvme_attach_controller", 00:04:51.816 "bdev_nvme_set_hotplug", 00:04:51.816 "bdev_nvme_set_options", 00:04:51.816 "bdev_passthru_delete", 00:04:51.816 "bdev_passthru_create", 00:04:51.816 "bdev_lvol_set_parent_bdev", 00:04:51.816 "bdev_lvol_set_parent", 00:04:51.816 "bdev_lvol_check_shallow_copy", 00:04:51.816 "bdev_lvol_start_shallow_copy", 00:04:51.816 "bdev_lvol_grow_lvstore", 00:04:51.816 "bdev_lvol_get_lvols", 00:04:51.816 "bdev_lvol_get_lvstores", 00:04:51.816 "bdev_lvol_delete", 00:04:51.816 "bdev_lvol_set_read_only", 00:04:51.816 "bdev_lvol_resize", 00:04:51.816 "bdev_lvol_decouple_parent", 00:04:51.816 "bdev_lvol_inflate", 00:04:51.816 "bdev_lvol_rename", 00:04:51.816 "bdev_lvol_clone_bdev", 00:04:51.816 "bdev_lvol_clone", 00:04:51.816 "bdev_lvol_snapshot", 00:04:51.816 "bdev_lvol_create", 00:04:51.816 "bdev_lvol_delete_lvstore", 00:04:51.816 "bdev_lvol_rename_lvstore", 00:04:51.816 "bdev_lvol_create_lvstore", 00:04:51.816 "bdev_raid_set_options", 00:04:51.816 "bdev_raid_remove_base_bdev", 00:04:51.816 "bdev_raid_add_base_bdev", 00:04:51.816 "bdev_raid_delete", 00:04:51.816 "bdev_raid_create", 00:04:51.816 "bdev_raid_get_bdevs", 00:04:51.816 "bdev_error_inject_error", 
00:04:51.816 "bdev_error_delete", 00:04:51.816 "bdev_error_create", 00:04:51.816 "bdev_split_delete", 00:04:51.816 "bdev_split_create", 00:04:51.816 "bdev_delay_delete", 00:04:51.816 "bdev_delay_create", 00:04:51.816 "bdev_delay_update_latency", 00:04:51.816 "bdev_zone_block_delete", 00:04:51.816 "bdev_zone_block_create", 00:04:51.816 "blobfs_create", 00:04:51.816 "blobfs_detect", 00:04:51.816 "blobfs_set_cache_size", 00:04:51.816 "bdev_aio_delete", 00:04:51.816 "bdev_aio_rescan", 00:04:51.816 "bdev_aio_create", 00:04:51.816 "bdev_ftl_set_property", 00:04:51.816 "bdev_ftl_get_properties", 00:04:51.816 "bdev_ftl_get_stats", 00:04:51.816 "bdev_ftl_unmap", 00:04:51.816 "bdev_ftl_unload", 00:04:51.816 "bdev_ftl_delete", 00:04:51.816 "bdev_ftl_load", 00:04:51.816 "bdev_ftl_create", 00:04:51.816 "bdev_virtio_attach_controller", 00:04:51.816 "bdev_virtio_scsi_get_devices", 00:04:51.816 "bdev_virtio_detach_controller", 00:04:51.816 "bdev_virtio_blk_set_hotplug", 00:04:51.816 "bdev_iscsi_delete", 00:04:51.816 "bdev_iscsi_create", 00:04:51.816 "bdev_iscsi_set_options", 00:04:51.816 "accel_error_inject_error", 00:04:51.816 "ioat_scan_accel_module", 00:04:51.816 "dsa_scan_accel_module", 00:04:51.816 "iaa_scan_accel_module", 00:04:51.816 "vfu_virtio_create_fs_endpoint", 00:04:51.816 "vfu_virtio_create_scsi_endpoint", 00:04:51.816 "vfu_virtio_scsi_remove_target", 00:04:51.816 "vfu_virtio_scsi_add_target", 00:04:51.816 "vfu_virtio_create_blk_endpoint", 00:04:51.816 "vfu_virtio_delete_endpoint", 00:04:51.816 "keyring_file_remove_key", 00:04:51.816 "keyring_file_add_key", 00:04:51.816 "keyring_linux_set_options", 00:04:51.816 "fsdev_aio_delete", 00:04:51.816 "fsdev_aio_create", 00:04:51.816 "iscsi_get_histogram", 00:04:51.816 "iscsi_enable_histogram", 00:04:51.816 "iscsi_set_options", 00:04:51.816 "iscsi_get_auth_groups", 00:04:51.816 "iscsi_auth_group_remove_secret", 00:04:51.816 "iscsi_auth_group_add_secret", 00:04:51.816 "iscsi_delete_auth_group", 00:04:51.816 "iscsi_create_auth_group", 00:04:51.816 "iscsi_set_discovery_auth", 00:04:51.816 "iscsi_get_options", 00:04:51.816 "iscsi_target_node_request_logout", 00:04:51.816 "iscsi_target_node_set_redirect", 00:04:51.816 "iscsi_target_node_set_auth", 00:04:51.816 "iscsi_target_node_add_lun", 00:04:51.816 "iscsi_get_stats", 00:04:51.816 "iscsi_get_connections", 00:04:51.816 "iscsi_portal_group_set_auth", 00:04:51.816 "iscsi_start_portal_group", 00:04:51.816 "iscsi_delete_portal_group", 00:04:51.816 "iscsi_create_portal_group", 00:04:51.816 "iscsi_get_portal_groups", 00:04:51.816 "iscsi_delete_target_node", 00:04:51.816 "iscsi_target_node_remove_pg_ig_maps", 00:04:51.816 "iscsi_target_node_add_pg_ig_maps", 00:04:51.816 "iscsi_create_target_node", 00:04:51.816 "iscsi_get_target_nodes", 00:04:51.816 "iscsi_delete_initiator_group", 00:04:51.816 "iscsi_initiator_group_remove_initiators", 00:04:51.816 "iscsi_initiator_group_add_initiators", 00:04:51.816 "iscsi_create_initiator_group", 00:04:51.816 "iscsi_get_initiator_groups", 00:04:51.816 "nvmf_set_crdt", 00:04:51.816 "nvmf_set_config", 00:04:51.816 "nvmf_set_max_subsystems", 00:04:51.816 "nvmf_stop_mdns_prr", 00:04:51.816 "nvmf_publish_mdns_prr", 00:04:51.816 "nvmf_subsystem_get_listeners", 00:04:51.816 "nvmf_subsystem_get_qpairs", 00:04:51.816 "nvmf_subsystem_get_controllers", 00:04:51.816 "nvmf_get_stats", 00:04:51.816 "nvmf_get_transports", 00:04:51.816 "nvmf_create_transport", 00:04:51.816 "nvmf_get_targets", 00:04:51.816 "nvmf_delete_target", 00:04:51.816 "nvmf_create_target", 00:04:51.816 
"nvmf_subsystem_allow_any_host", 00:04:51.816 "nvmf_subsystem_set_keys", 00:04:51.816 "nvmf_subsystem_remove_host", 00:04:51.816 "nvmf_subsystem_add_host", 00:04:51.816 "nvmf_ns_remove_host", 00:04:51.816 "nvmf_ns_add_host", 00:04:51.816 "nvmf_subsystem_remove_ns", 00:04:51.816 "nvmf_subsystem_set_ns_ana_group", 00:04:51.816 "nvmf_subsystem_add_ns", 00:04:51.816 "nvmf_subsystem_listener_set_ana_state", 00:04:51.816 "nvmf_discovery_get_referrals", 00:04:51.816 "nvmf_discovery_remove_referral", 00:04:51.816 "nvmf_discovery_add_referral", 00:04:51.816 "nvmf_subsystem_remove_listener", 00:04:51.816 "nvmf_subsystem_add_listener", 00:04:51.816 "nvmf_delete_subsystem", 00:04:51.816 "nvmf_create_subsystem", 00:04:51.816 "nvmf_get_subsystems", 00:04:51.816 "env_dpdk_get_mem_stats", 00:04:51.816 "nbd_get_disks", 00:04:51.816 "nbd_stop_disk", 00:04:51.816 "nbd_start_disk", 00:04:51.816 "ublk_recover_disk", 00:04:51.816 "ublk_get_disks", 00:04:51.816 "ublk_stop_disk", 00:04:51.816 "ublk_start_disk", 00:04:51.816 "ublk_destroy_target", 00:04:51.816 "ublk_create_target", 00:04:51.816 "virtio_blk_create_transport", 00:04:51.816 "virtio_blk_get_transports", 00:04:51.816 "vhost_controller_set_coalescing", 00:04:51.816 "vhost_get_controllers", 00:04:51.817 "vhost_delete_controller", 00:04:51.817 "vhost_create_blk_controller", 00:04:51.817 "vhost_scsi_controller_remove_target", 00:04:51.817 "vhost_scsi_controller_add_target", 00:04:51.817 "vhost_start_scsi_controller", 00:04:51.817 "vhost_create_scsi_controller", 00:04:51.817 "thread_set_cpumask", 00:04:51.817 "scheduler_set_options", 00:04:51.817 "framework_get_governor", 00:04:51.817 "framework_get_scheduler", 00:04:51.817 "framework_set_scheduler", 00:04:51.817 "framework_get_reactors", 00:04:51.817 "thread_get_io_channels", 00:04:51.817 "thread_get_pollers", 00:04:51.817 "thread_get_stats", 00:04:51.817 "framework_monitor_context_switch", 00:04:51.817 "spdk_kill_instance", 00:04:51.817 "log_enable_timestamps", 00:04:51.817 "log_get_flags", 00:04:51.817 "log_clear_flag", 00:04:51.817 "log_set_flag", 00:04:51.817 "log_get_level", 00:04:51.817 "log_set_level", 00:04:51.817 "log_get_print_level", 00:04:51.817 "log_set_print_level", 00:04:51.817 "framework_enable_cpumask_locks", 00:04:51.817 "framework_disable_cpumask_locks", 00:04:51.817 "framework_wait_init", 00:04:51.817 "framework_start_init", 00:04:51.817 "scsi_get_devices", 00:04:51.817 "bdev_get_histogram", 00:04:51.817 "bdev_enable_histogram", 00:04:51.817 "bdev_set_qos_limit", 00:04:51.817 "bdev_set_qd_sampling_period", 00:04:51.817 "bdev_get_bdevs", 00:04:51.817 "bdev_reset_iostat", 00:04:51.817 "bdev_get_iostat", 00:04:51.817 "bdev_examine", 00:04:51.817 "bdev_wait_for_examine", 00:04:51.817 "bdev_set_options", 00:04:51.817 "accel_get_stats", 00:04:51.817 "accel_set_options", 00:04:51.817 "accel_set_driver", 00:04:51.817 "accel_crypto_key_destroy", 00:04:51.817 "accel_crypto_keys_get", 00:04:51.817 "accel_crypto_key_create", 00:04:51.817 "accel_assign_opc", 00:04:51.817 "accel_get_module_info", 00:04:51.817 "accel_get_opc_assignments", 00:04:51.817 "vmd_rescan", 00:04:51.817 "vmd_remove_device", 00:04:51.817 "vmd_enable", 00:04:51.817 "sock_get_default_impl", 00:04:51.817 "sock_set_default_impl", 00:04:51.817 "sock_impl_set_options", 00:04:51.817 "sock_impl_get_options", 00:04:51.817 "iobuf_get_stats", 00:04:51.817 "iobuf_set_options", 00:04:51.817 "keyring_get_keys", 00:04:51.817 "vfu_tgt_set_base_path", 00:04:51.817 "framework_get_pci_devices", 00:04:51.817 "framework_get_config", 00:04:51.817 
"framework_get_subsystems", 00:04:51.817 "fsdev_set_opts", 00:04:51.817 "fsdev_get_opts", 00:04:51.817 "trace_get_info", 00:04:51.817 "trace_get_tpoint_group_mask", 00:04:51.817 "trace_disable_tpoint_group", 00:04:51.817 "trace_enable_tpoint_group", 00:04:51.817 "trace_clear_tpoint_mask", 00:04:51.817 "trace_set_tpoint_mask", 00:04:51.817 "notify_get_notifications", 00:04:51.817 "notify_get_types", 00:04:51.817 "spdk_get_version", 00:04:51.817 "rpc_get_methods" 00:04:51.817 ] 00:04:51.817 10:45:11 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:51.817 10:45:11 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:51.817 10:45:11 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:52.078 10:45:11 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:52.078 10:45:11 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1601456 00:04:52.078 10:45:11 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 1601456 ']' 00:04:52.078 10:45:11 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 1601456 00:04:52.078 10:45:11 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:04:52.078 10:45:11 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:52.078 10:45:11 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1601456 00:04:52.078 10:45:11 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:52.078 10:45:11 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:52.078 10:45:11 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1601456' 00:04:52.078 killing process with pid 1601456 00:04:52.078 10:45:11 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 1601456 00:04:52.078 10:45:11 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 1601456 00:04:52.339 00:04:52.339 real 0m1.546s 00:04:52.339 user 0m2.623s 00:04:52.339 sys 0m0.490s 00:04:52.339 10:45:12 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:52.339 10:45:12 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:52.339 ************************************ 00:04:52.339 END TEST spdkcli_tcp 00:04:52.339 ************************************ 00:04:52.339 10:45:12 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:52.339 10:45:12 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:52.339 10:45:12 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:52.339 10:45:12 -- common/autotest_common.sh@10 -- # set +x 00:04:52.339 ************************************ 00:04:52.339 START TEST dpdk_mem_utility 00:04:52.339 ************************************ 00:04:52.339 10:45:12 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:52.339 * Looking for test storage... 
00:04:52.339 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:52.339 10:45:12 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:52.339 10:45:12 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:04:52.339 10:45:12 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:52.600 10:45:12 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:52.600 10:45:12 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:52.600 10:45:12 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:52.600 10:45:12 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:52.600 10:45:12 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:52.600 10:45:12 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:52.600 10:45:12 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:52.600 10:45:12 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:52.600 10:45:12 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:52.600 10:45:12 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:52.600 10:45:12 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:52.600 10:45:12 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:52.600 10:45:12 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:52.600 10:45:12 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:52.600 10:45:12 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:52.600 10:45:12 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:52.600 10:45:12 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:52.600 10:45:12 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:52.600 10:45:12 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:52.600 10:45:12 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:52.600 10:45:12 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:52.600 10:45:12 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:52.600 10:45:12 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:52.600 10:45:12 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:52.600 10:45:12 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:52.600 10:45:12 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:52.600 10:45:12 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:52.600 10:45:12 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:52.600 10:45:12 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:52.600 10:45:12 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:52.600 10:45:12 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:52.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.600 --rc genhtml_branch_coverage=1 00:04:52.600 --rc genhtml_function_coverage=1 00:04:52.600 --rc genhtml_legend=1 00:04:52.600 --rc geninfo_all_blocks=1 00:04:52.600 --rc geninfo_unexecuted_blocks=1 00:04:52.600 00:04:52.600 ' 00:04:52.600 10:45:12 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:52.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.600 --rc 
genhtml_branch_coverage=1 00:04:52.600 --rc genhtml_function_coverage=1 00:04:52.600 --rc genhtml_legend=1 00:04:52.600 --rc geninfo_all_blocks=1 00:04:52.600 --rc geninfo_unexecuted_blocks=1 00:04:52.600 00:04:52.600 ' 00:04:52.600 10:45:12 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:52.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.600 --rc genhtml_branch_coverage=1 00:04:52.600 --rc genhtml_function_coverage=1 00:04:52.600 --rc genhtml_legend=1 00:04:52.600 --rc geninfo_all_blocks=1 00:04:52.600 --rc geninfo_unexecuted_blocks=1 00:04:52.600 00:04:52.600 ' 00:04:52.600 10:45:12 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:52.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.600 --rc genhtml_branch_coverage=1 00:04:52.600 --rc genhtml_function_coverage=1 00:04:52.600 --rc genhtml_legend=1 00:04:52.600 --rc geninfo_all_blocks=1 00:04:52.600 --rc geninfo_unexecuted_blocks=1 00:04:52.600 00:04:52.600 ' 00:04:52.600 10:45:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:52.600 10:45:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1601834 00:04:52.600 10:45:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1601834 00:04:52.600 10:45:12 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 1601834 ']' 00:04:52.600 10:45:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:52.600 10:45:12 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:52.600 10:45:12 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:52.600 10:45:12 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:52.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:52.600 10:45:12 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:52.600 10:45:12 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:52.600 [2024-10-09 10:45:12.424620] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:04:52.600 [2024-10-09 10:45:12.424693] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1601834 ] 00:04:52.600 [2024-10-09 10:45:12.558704] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
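The dpdk_mem_utility test drives a bare spdk_tgt and inspects its DPDK heap through the env_dpdk_get_mem_stats RPC, which writes a dump file that scripts/dpdk_mem_info.py then parses, first as a summary and then element by element with -m 0. A minimal sketch of the same flow, assuming the target is already up on the default /var/tmp/spdk.sock:

  # ask the target to dump its DPDK memory state; the reply names the dump file
  ./spdk/scripts/rpc.py env_dpdk_get_mem_stats
  # summarize heaps, mempools and memzones from /tmp/spdk_mem_dump.txt
  ./spdk/scripts/dpdk_mem_info.py
  # print the element-by-element map of heap 0, as the test does below
  ./spdk/scripts/dpdk_mem_info.py -m 0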
00:04:52.600 [2024-10-09 10:45:12.591014] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.860 [2024-10-09 10:45:12.614566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.431 10:45:13 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:53.431 10:45:13 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:04:53.431 10:45:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:53.431 10:45:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:53.431 10:45:13 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:53.431 10:45:13 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:53.431 { 00:04:53.431 "filename": "/tmp/spdk_mem_dump.txt" 00:04:53.431 } 00:04:53.431 10:45:13 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:53.431 10:45:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:53.431 DPDK memory size 810.000000 MiB in 1 heap(s) 00:04:53.431 1 heaps totaling size 810.000000 MiB 00:04:53.431 size: 810.000000 MiB heap id: 0 00:04:53.431 end heaps---------- 00:04:53.431 9 mempools totaling size 595.772034 MiB 00:04:53.431 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:53.431 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:53.431 size: 92.545471 MiB name: bdev_io_1601834 00:04:53.431 size: 50.003479 MiB name: msgpool_1601834 00:04:53.431 size: 36.509338 MiB name: fsdev_io_1601834 00:04:53.431 size: 21.763794 MiB name: PDU_Pool 00:04:53.431 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:53.431 size: 4.133484 MiB name: evtpool_1601834 00:04:53.431 size: 0.026123 MiB name: Session_Pool 00:04:53.431 end mempools------- 00:04:53.431 6 memzones totaling size 4.142822 MiB 00:04:53.431 size: 1.000366 MiB name: RG_ring_0_1601834 00:04:53.431 size: 1.000366 MiB name: RG_ring_1_1601834 00:04:53.431 size: 1.000366 MiB name: RG_ring_4_1601834 00:04:53.431 size: 1.000366 MiB name: RG_ring_5_1601834 00:04:53.431 size: 0.125366 MiB name: RG_ring_2_1601834 00:04:53.431 size: 0.015991 MiB name: RG_ring_3_1601834 00:04:53.431 end memzones------- 00:04:53.431 10:45:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:53.431 heap id: 0 total size: 810.000000 MiB number of busy elements: 44 number of free elements: 15 00:04:53.431 list of free elements. 
size: 10.737488 MiB 00:04:53.431 element at address: 0x200018a00000 with size: 0.999878 MiB 00:04:53.431 element at address: 0x200018c00000 with size: 0.999878 MiB 00:04:53.431 element at address: 0x200000400000 with size: 0.998535 MiB 00:04:53.431 element at address: 0x200031800000 with size: 0.994446 MiB 00:04:53.431 element at address: 0x200006400000 with size: 0.959839 MiB 00:04:53.431 element at address: 0x200012c00000 with size: 0.954285 MiB 00:04:53.431 element at address: 0x200018e00000 with size: 0.936584 MiB 00:04:53.431 element at address: 0x200000200000 with size: 0.592346 MiB 00:04:53.431 element at address: 0x20001a600000 with size: 0.582886 MiB 00:04:53.431 element at address: 0x200000c00000 with size: 0.495422 MiB 00:04:53.431 element at address: 0x20000a600000 with size: 0.490723 MiB 00:04:53.431 element at address: 0x200019000000 with size: 0.485657 MiB 00:04:53.431 element at address: 0x200003e00000 with size: 0.481934 MiB 00:04:53.431 element at address: 0x200027a00000 with size: 0.410034 MiB 00:04:53.431 element at address: 0x200000800000 with size: 0.355042 MiB 00:04:53.431 list of standard malloc elements. size: 199.343628 MiB 00:04:53.431 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:04:53.431 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:04:53.431 element at address: 0x200018afff80 with size: 1.000122 MiB 00:04:53.431 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:04:53.431 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:53.431 element at address: 0x2000003b9f00 with size: 0.265747 MiB 00:04:53.431 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:04:53.431 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:53.431 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:04:53.431 element at address: 0x2000002b7c40 with size: 0.000183 MiB 00:04:53.431 element at address: 0x2000003b9e40 with size: 0.000183 MiB 00:04:53.431 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:04:53.431 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:04:53.431 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:04:53.431 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:04:53.431 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:04:53.431 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:04:53.431 element at address: 0x20000085b040 with size: 0.000183 MiB 00:04:53.431 element at address: 0x20000085f300 with size: 0.000183 MiB 00:04:53.431 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:04:53.431 element at address: 0x20000087f680 with size: 0.000183 MiB 00:04:53.431 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:04:53.431 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:04:53.431 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:04:53.431 element at address: 0x200000cff000 with size: 0.000183 MiB 00:04:53.431 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:04:53.431 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:04:53.431 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:04:53.431 element at address: 0x200003efb980 with size: 0.000183 MiB 00:04:53.431 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:04:53.431 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:04:53.431 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:04:53.431 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:04:53.431 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:04:53.431 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:04:53.431 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:04:53.431 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:04:53.431 element at address: 0x20001a695380 with size: 0.000183 MiB 00:04:53.431 element at address: 0x20001a695440 with size: 0.000183 MiB 00:04:53.431 element at address: 0x200027a68f80 with size: 0.000183 MiB 00:04:53.431 element at address: 0x200027a69040 with size: 0.000183 MiB 00:04:53.431 element at address: 0x200027a6fc40 with size: 0.000183 MiB 00:04:53.431 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:04:53.431 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:04:53.431 list of memzone associated elements. size: 599.918884 MiB 00:04:53.431 element at address: 0x20001a695500 with size: 211.416748 MiB 00:04:53.431 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:53.431 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:04:53.431 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:53.431 element at address: 0x200012df4780 with size: 92.045044 MiB 00:04:53.431 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_1601834_0 00:04:53.431 element at address: 0x200000dff380 with size: 48.003052 MiB 00:04:53.431 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1601834_0 00:04:53.431 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:04:53.431 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_1601834_0 00:04:53.431 element at address: 0x2000191be940 with size: 20.255554 MiB 00:04:53.431 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:53.431 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:04:53.431 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:53.431 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:04:53.431 associated memzone info: size: 3.000122 MiB name: MP_evtpool_1601834_0 00:04:53.431 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:04:53.431 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1601834 00:04:53.431 element at address: 0x2000002b7d00 with size: 1.008118 MiB 00:04:53.431 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1601834 00:04:53.431 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:04:53.431 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:53.431 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:04:53.431 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:53.431 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:04:53.431 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:53.431 element at address: 0x200003efba40 with size: 1.008118 MiB 00:04:53.431 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:53.431 element at address: 0x200000cff180 with size: 1.000488 MiB 00:04:53.431 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1601834 00:04:53.431 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:04:53.431 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1601834 00:04:53.431 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:04:53.431 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1601834 00:04:53.431 element at address: 
0x2000318fe940 with size: 1.000488 MiB 00:04:53.431 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1601834 00:04:53.431 element at address: 0x20000087f740 with size: 0.500488 MiB 00:04:53.431 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_1601834 00:04:53.431 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:04:53.431 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1601834 00:04:53.431 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:04:53.431 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:53.431 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:04:53.431 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:53.432 element at address: 0x20001907c540 with size: 0.250488 MiB 00:04:53.432 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:53.432 element at address: 0x200000297a40 with size: 0.125488 MiB 00:04:53.432 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_1601834 00:04:53.432 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:04:53.432 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1601834 00:04:53.432 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:04:53.432 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:53.432 element at address: 0x200027a69100 with size: 0.023743 MiB 00:04:53.432 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:53.432 element at address: 0x20000085b100 with size: 0.016113 MiB 00:04:53.432 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1601834 00:04:53.432 element at address: 0x200027a6f240 with size: 0.002441 MiB 00:04:53.432 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:53.432 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:04:53.432 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1601834 00:04:53.432 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:04:53.432 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_1601834 00:04:53.432 element at address: 0x20000085af00 with size: 0.000305 MiB 00:04:53.432 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1601834 00:04:53.432 element at address: 0x200027a6fd00 with size: 0.000305 MiB 00:04:53.432 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:53.432 10:45:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:53.432 10:45:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1601834 00:04:53.432 10:45:13 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 1601834 ']' 00:04:53.432 10:45:13 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 1601834 00:04:53.432 10:45:13 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:04:53.432 10:45:13 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:53.432 10:45:13 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1601834 00:04:53.432 10:45:13 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:53.432 10:45:13 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:53.432 10:45:13 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1601834' 00:04:53.432 killing process with pid 1601834 00:04:53.432 10:45:13 
dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 1601834 00:04:53.432 10:45:13 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 1601834 00:04:53.691 00:04:53.691 real 0m1.423s 00:04:53.691 user 0m1.423s 00:04:53.691 sys 0m0.402s 00:04:53.692 10:45:13 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:53.692 10:45:13 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:53.692 ************************************ 00:04:53.692 END TEST dpdk_mem_utility 00:04:53.692 ************************************ 00:04:53.692 10:45:13 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:53.692 10:45:13 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:53.692 10:45:13 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:53.692 10:45:13 -- common/autotest_common.sh@10 -- # set +x 00:04:53.692 ************************************ 00:04:53.692 START TEST event 00:04:53.692 ************************************ 00:04:53.692 10:45:13 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:53.953 * Looking for test storage... 00:04:53.953 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:53.953 10:45:13 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:53.953 10:45:13 event -- common/autotest_common.sh@1691 -- # lcov --version 00:04:53.953 10:45:13 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:53.953 10:45:13 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:53.953 10:45:13 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:53.953 10:45:13 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:53.953 10:45:13 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:53.953 10:45:13 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:53.953 10:45:13 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:53.953 10:45:13 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:53.953 10:45:13 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:53.953 10:45:13 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:53.953 10:45:13 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:53.953 10:45:13 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:53.953 10:45:13 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:53.953 10:45:13 event -- scripts/common.sh@344 -- # case "$op" in 00:04:53.953 10:45:13 event -- scripts/common.sh@345 -- # : 1 00:04:53.953 10:45:13 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:53.953 10:45:13 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:53.953 10:45:13 event -- scripts/common.sh@365 -- # decimal 1 00:04:53.953 10:45:13 event -- scripts/common.sh@353 -- # local d=1 00:04:53.953 10:45:13 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:53.953 10:45:13 event -- scripts/common.sh@355 -- # echo 1 00:04:53.953 10:45:13 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:53.953 10:45:13 event -- scripts/common.sh@366 -- # decimal 2 00:04:53.953 10:45:13 event -- scripts/common.sh@353 -- # local d=2 00:04:53.953 10:45:13 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:53.953 10:45:13 event -- scripts/common.sh@355 -- # echo 2 00:04:53.953 10:45:13 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:53.953 10:45:13 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:53.953 10:45:13 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:53.953 10:45:13 event -- scripts/common.sh@368 -- # return 0 00:04:53.953 10:45:13 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:53.953 10:45:13 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:53.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.953 --rc genhtml_branch_coverage=1 00:04:53.953 --rc genhtml_function_coverage=1 00:04:53.953 --rc genhtml_legend=1 00:04:53.953 --rc geninfo_all_blocks=1 00:04:53.953 --rc geninfo_unexecuted_blocks=1 00:04:53.953 00:04:53.953 ' 00:04:53.953 10:45:13 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:53.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.953 --rc genhtml_branch_coverage=1 00:04:53.953 --rc genhtml_function_coverage=1 00:04:53.953 --rc genhtml_legend=1 00:04:53.953 --rc geninfo_all_blocks=1 00:04:53.953 --rc geninfo_unexecuted_blocks=1 00:04:53.953 00:04:53.953 ' 00:04:53.953 10:45:13 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:53.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.953 --rc genhtml_branch_coverage=1 00:04:53.953 --rc genhtml_function_coverage=1 00:04:53.953 --rc genhtml_legend=1 00:04:53.953 --rc geninfo_all_blocks=1 00:04:53.953 --rc geninfo_unexecuted_blocks=1 00:04:53.953 00:04:53.953 ' 00:04:53.953 10:45:13 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:53.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.953 --rc genhtml_branch_coverage=1 00:04:53.953 --rc genhtml_function_coverage=1 00:04:53.953 --rc genhtml_legend=1 00:04:53.953 --rc geninfo_all_blocks=1 00:04:53.953 --rc geninfo_unexecuted_blocks=1 00:04:53.953 00:04:53.953 ' 00:04:53.953 10:45:13 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:53.953 10:45:13 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:53.953 10:45:13 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:53.953 10:45:13 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:04:53.953 10:45:13 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:53.953 10:45:13 event -- common/autotest_common.sh@10 -- # set +x 00:04:53.953 ************************************ 00:04:53.953 START TEST event_perf 00:04:53.953 ************************************ 00:04:53.953 10:45:13 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:04:53.953 Running I/O for 1 seconds...[2024-10-09 10:45:13.929427] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:04:53.953 [2024-10-09 10:45:13.929538] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1602152 ] 00:04:54.214 [2024-10-09 10:45:14.066749] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:04:54.214 [2024-10-09 10:45:14.097515] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:54.214 [2024-10-09 10:45:14.119304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:54.214 [2024-10-09 10:45:14.119443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:54.214 [2024-10-09 10:45:14.119600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.214 Running I/O for 1 seconds...[2024-10-09 10:45:14.119600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:55.155 00:04:55.155 lcore 0: 175803 00:04:55.155 lcore 1: 175803 00:04:55.155 lcore 2: 175800 00:04:55.155 lcore 3: 175803 00:04:55.155 done. 00:04:55.155 00:04:55.155 real 0m1.235s 00:04:55.155 user 0m4.054s 00:04:55.155 sys 0m0.073s 00:04:55.155 10:45:15 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:55.155 10:45:15 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:55.155 ************************************ 00:04:55.155 END TEST event_perf 00:04:55.155 ************************************ 00:04:55.415 10:45:15 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:55.415 10:45:15 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:04:55.415 10:45:15 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:55.415 10:45:15 event -- common/autotest_common.sh@10 -- # set +x 00:04:55.415 ************************************ 00:04:55.415 START TEST event_reactor 00:04:55.415 ************************************ 00:04:55.415 10:45:15 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:55.415 [2024-10-09 10:45:15.243461] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:04:55.415 [2024-10-09 10:45:15.243579] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1602337 ] 00:04:55.415 [2024-10-09 10:45:15.378339] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
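The event_perf run above submits events across every reactor in the 0xF mask for one second; each 'lcore N: <count>' line is the number of events that reactor processed, so four lcores at roughly 175,800 events apiece works out to about 703k events per second in total. A sketch of the same invocation with the per-lcore counters summed, assuming the binary at the path this job builds:

  # run on 4 cores for 1 second and total the lcore counters
  ./spdk/test/event/event_perf/event_perf -m 0xF -t 1 | \
      awk '/^lcore/ {sum += $3} END {print sum, "events total"}'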
00:04:55.415 [2024-10-09 10:45:15.409488] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.676 [2024-10-09 10:45:15.427415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.617 test_start 00:04:56.617 oneshot 00:04:56.617 tick 100 00:04:56.617 tick 100 00:04:56.617 tick 250 00:04:56.617 tick 100 00:04:56.617 tick 100 00:04:56.617 tick 250 00:04:56.617 tick 100 00:04:56.617 tick 500 00:04:56.617 tick 100 00:04:56.617 tick 100 00:04:56.617 tick 250 00:04:56.617 tick 100 00:04:56.617 tick 100 00:04:56.617 test_end 00:04:56.617 00:04:56.617 real 0m1.225s 00:04:56.617 user 0m1.049s 00:04:56.617 sys 0m0.072s 00:04:56.617 10:45:16 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:56.617 10:45:16 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:56.617 ************************************ 00:04:56.617 END TEST event_reactor 00:04:56.617 ************************************ 00:04:56.617 10:45:16 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:56.617 10:45:16 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:04:56.617 10:45:16 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:56.617 10:45:16 event -- common/autotest_common.sh@10 -- # set +x 00:04:56.617 ************************************ 00:04:56.617 START TEST event_reactor_perf 00:04:56.617 ************************************ 00:04:56.617 10:45:16 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:56.617 [2024-10-09 10:45:16.545379] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:04:56.617 [2024-10-09 10:45:16.545463] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1602672 ] 00:04:56.877 [2024-10-09 10:45:16.679754] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
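The event_reactor output above ('oneshot' followed by tick 100/250/500 lines) comes from one-shot and periodic timers registered on the reactor; the shorter the period, the more tick lines it contributes during the one-second run, which is why tick 100 dominates and tick 500 appears only once. A quick tally by period, as a sketch against the same binary:

  # count how often each timer period fired during the run
  ./spdk/test/event/reactor/reactor -t 1 | grep '^tick' | sort | uniq -c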
00:04:56.877 [2024-10-09 10:45:16.712913] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.877 [2024-10-09 10:45:16.732674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.817 test_start 00:04:57.817 test_end 00:04:57.817 Performance: 369422 events per second 00:04:57.817 00:04:57.817 real 0m1.230s 00:04:57.817 user 0m1.052s 00:04:57.817 sys 0m0.072s 00:04:57.817 10:45:17 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:57.817 10:45:17 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:57.817 ************************************ 00:04:57.817 END TEST event_reactor_perf 00:04:57.817 ************************************ 00:04:57.817 10:45:17 event -- event/event.sh@49 -- # uname -s 00:04:57.817 10:45:17 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:57.817 10:45:17 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:57.817 10:45:17 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:57.817 10:45:17 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:57.817 10:45:17 event -- common/autotest_common.sh@10 -- # set +x 00:04:58.077 ************************************ 00:04:58.077 START TEST event_scheduler 00:04:58.077 ************************************ 00:04:58.077 10:45:17 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:58.077 * Looking for test storage... 00:04:58.077 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:58.077 10:45:17 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:58.077 10:45:17 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:04:58.077 10:45:17 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:58.077 10:45:18 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:58.077 10:45:18 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:58.077 10:45:18 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:58.077 10:45:18 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:58.077 10:45:18 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:58.077 10:45:18 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:58.077 10:45:18 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:58.077 10:45:18 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:58.077 10:45:18 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:58.077 10:45:18 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:58.077 10:45:18 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:58.077 10:45:18 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:58.077 10:45:18 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:58.077 10:45:18 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:58.077 10:45:18 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:58.077 10:45:18 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:58.077 10:45:18 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:58.077 10:45:18 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:58.077 10:45:18 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:58.077 10:45:18 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:58.077 10:45:18 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:58.077 10:45:18 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:58.077 10:45:18 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:58.077 10:45:18 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:58.077 10:45:18 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:58.077 10:45:18 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:58.077 10:45:18 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:58.078 10:45:18 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:58.078 10:45:18 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:58.078 10:45:18 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:58.078 10:45:18 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:58.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.078 --rc genhtml_branch_coverage=1 00:04:58.078 --rc genhtml_function_coverage=1 00:04:58.078 --rc genhtml_legend=1 00:04:58.078 --rc geninfo_all_blocks=1 00:04:58.078 --rc geninfo_unexecuted_blocks=1 00:04:58.078 00:04:58.078 ' 00:04:58.078 10:45:18 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:58.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.078 --rc genhtml_branch_coverage=1 00:04:58.078 --rc genhtml_function_coverage=1 00:04:58.078 --rc genhtml_legend=1 00:04:58.078 --rc geninfo_all_blocks=1 00:04:58.078 --rc geninfo_unexecuted_blocks=1 00:04:58.078 00:04:58.078 ' 00:04:58.078 10:45:18 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:58.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.078 --rc genhtml_branch_coverage=1 00:04:58.078 --rc genhtml_function_coverage=1 00:04:58.078 --rc genhtml_legend=1 00:04:58.078 --rc geninfo_all_blocks=1 00:04:58.078 --rc geninfo_unexecuted_blocks=1 00:04:58.078 00:04:58.078 ' 00:04:58.078 10:45:18 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:58.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.078 --rc genhtml_branch_coverage=1 00:04:58.078 --rc genhtml_function_coverage=1 00:04:58.078 --rc genhtml_legend=1 00:04:58.078 --rc geninfo_all_blocks=1 00:04:58.078 --rc geninfo_unexecuted_blocks=1 00:04:58.078 00:04:58.078 ' 00:04:58.078 10:45:18 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:58.078 10:45:18 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1603060 00:04:58.078 10:45:18 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:58.078 10:45:18 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:58.078 10:45:18 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 
1603060 00:04:58.078 10:45:18 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 1603060 ']' 00:04:58.078 10:45:18 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:58.078 10:45:18 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:58.078 10:45:18 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:58.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:58.078 10:45:18 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:58.078 10:45:18 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:58.078 [2024-10-09 10:45:18.070801] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:04:58.078 [2024-10-09 10:45:18.070873] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1603060 ] 00:04:58.338 [2024-10-09 10:45:18.209499] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:04:58.338 [2024-10-09 10:45:18.234855] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:58.338 [2024-10-09 10:45:18.258743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.338 [2024-10-09 10:45:18.258958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:58.338 [2024-10-09 10:45:18.259111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:58.338 [2024-10-09 10:45:18.259111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:58.907 10:45:18 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:58.907 10:45:18 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:04:58.907 10:45:18 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:58.907 10:45:18 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:58.907 10:45:18 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:59.167 [2024-10-09 10:45:18.911720] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:59.167 [2024-10-09 10:45:18.911734] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:59.167 [2024-10-09 10:45:18.911740] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:59.167 [2024-10-09 10:45:18.911745] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:59.167 [2024-10-09 10:45:18.911748] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:59.167 10:45:18 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:59.167 10:45:18 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:59.167 10:45:18 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:59.167 10:45:18 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:59.167 [2024-10-09 10:45:18.965863] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application 
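event_scheduler launches its app with --wait-for-rpc, so the trace that follows configures everything over RPC before releasing it: framework_set_scheduler switches to the dynamic scheduler (which, per the notices below, falls back when the dpdk governor cannot initialize and keeps its default load/core/busy limits of 20/80/95), and framework_start_init then finishes startup. A sketch of that handshake using only RPC methods listed earlier in this log:

  # select the dynamic scheduler while the app waits for RPC configuration
  ./spdk/scripts/rpc.py framework_set_scheduler dynamic
  ./spdk/scripts/rpc.py framework_get_scheduler   # confirm the active scheduler
  # let the app finish initialization
  ./spdk/scripts/rpc.py framework_start_init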
started. 00:04:59.167 10:45:18 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:59.168 10:45:18 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:59.168 10:45:18 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:59.168 10:45:18 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:59.168 10:45:18 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:59.168 ************************************ 00:04:59.168 START TEST scheduler_create_thread 00:04:59.168 ************************************ 00:04:59.168 10:45:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:04:59.168 10:45:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:59.168 10:45:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:59.168 10:45:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:59.168 2 00:04:59.168 10:45:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:59.168 10:45:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:59.168 10:45:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:59.168 10:45:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:59.168 3 00:04:59.168 10:45:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:59.168 10:45:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:59.168 10:45:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:59.168 10:45:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:59.168 4 00:04:59.168 10:45:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:59.168 10:45:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:59.168 10:45:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:59.168 10:45:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:59.168 5 00:04:59.168 10:45:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:59.168 10:45:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:59.168 10:45:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:59.168 10:45:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:59.168 6 00:04:59.168 10:45:19 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:59.168 10:45:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:59.168 10:45:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:59.168 10:45:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:59.168 7 00:04:59.168 10:45:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:59.168 10:45:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:59.168 10:45:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:59.168 10:45:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:59.168 8 00:04:59.168 10:45:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:59.168 10:45:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:59.168 10:45:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:59.168 10:45:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:59.168 9 00:04:59.168 10:45:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:59.168 10:45:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:59.168 10:45:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:59.168 10:45:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:59.738 10 00:04:59.738 10:45:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:59.738 10:45:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:59.738 10:45:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:59.738 10:45:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.121 10:45:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:01.121 10:45:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:01.121 10:45:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:01.121 10:45:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:01.121 10:45:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:02.062 10:45:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:02.062 10:45:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:02.062 10:45:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:02.062 10:45:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:02.632 10:45:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:02.632 10:45:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:02.632 10:45:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:02.632 10:45:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:02.632 10:45:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.571 10:45:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:03.571 00:05:03.571 real 0m4.216s 00:05:03.571 user 0m0.023s 00:05:03.571 sys 0m0.009s 00:05:03.571 10:45:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:03.571 10:45:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.571 ************************************ 00:05:03.571 END TEST scheduler_create_thread 00:05:03.571 ************************************ 00:05:03.571 10:45:23 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:03.571 10:45:23 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1603060 00:05:03.571 10:45:23 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 1603060 ']' 00:05:03.571 10:45:23 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 1603060 00:05:03.571 10:45:23 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:05:03.571 10:45:23 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:03.571 10:45:23 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1603060 00:05:03.571 10:45:23 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:03.571 10:45:23 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:03.571 10:45:23 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1603060' 00:05:03.571 killing process with pid 1603060 00:05:03.571 10:45:23 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 1603060 00:05:03.571 10:45:23 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 1603060 00:05:03.571 [2024-10-09 10:45:23.500480] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
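scheduler_create_thread above exercises the test app's plugin RPCs: pinned threads at 100% and 0% activity on each of the four cores, an unpinned thread at 30%, a half-active thread, a runtime activity change on thread 11, and finally creation and deletion of thread 12. The calls all have the shape below; a sketch reusing the exact plugin invocations from the trace, assuming the scheduler_plugin module from test/event/scheduler is importable by rpc.py:

  # -n thread name, -m cpumask to pin to, -a active percentage (0-100)
  ./spdk/scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create \
      -n active_pinned -m 0x1 -a 100
  # retarget thread 11 to 50% activity, then delete thread 12, as the test does
  ./spdk/scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
  ./spdk/scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12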
00:05:03.831 00:05:03.831 real 0m5.815s 00:05:03.831 user 0m12.736s 00:05:03.831 sys 0m0.396s 00:05:03.831 10:45:23 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:03.831 10:45:23 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:03.831 ************************************ 00:05:03.831 END TEST event_scheduler 00:05:03.831 ************************************ 00:05:03.831 10:45:23 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:03.831 10:45:23 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:03.831 10:45:23 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:03.831 10:45:23 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:03.831 10:45:23 event -- common/autotest_common.sh@10 -- # set +x 00:05:03.831 ************************************ 00:05:03.831 START TEST app_repeat 00:05:03.831 ************************************ 00:05:03.831 10:45:23 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:05:03.831 10:45:23 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:03.831 10:45:23 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:03.831 10:45:23 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:03.831 10:45:23 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:03.831 10:45:23 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:03.831 10:45:23 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:03.831 10:45:23 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:03.831 10:45:23 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1604207 00:05:03.831 10:45:23 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:03.831 10:45:23 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:03.831 10:45:23 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1604207' 00:05:03.831 Process app_repeat pid: 1604207 00:05:03.831 10:45:23 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:03.831 10:45:23 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:03.831 spdk_app_start Round 0 00:05:03.831 10:45:23 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1604207 /var/tmp/spdk-nbd.sock 00:05:03.832 10:45:23 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1604207 ']' 00:05:03.832 10:45:23 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:03.832 10:45:23 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:03.832 10:45:23 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:03.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:03.832 10:45:23 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:03.832 10:45:23 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:03.832 [2024-10-09 10:45:23.772368] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 
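app_repeat starts the app on its own socket and repeats the round four times (-t 4): each round creates two 64 MiB malloc bdevs with 4 KiB blocks, exports them as /dev/nbd0 and /dev/nbd1, and round-trips data through dd before tearing down. The setup that follows in the trace reduces to a few RPCs against that socket; a sketch with the socket path this test uses:

  SOCK=/var/tmp/spdk-nbd.sock
  # two 64 MiB bdevs with a 4096-byte block size (Malloc0, Malloc1)
  ./spdk/scripts/rpc.py -s $SOCK bdev_malloc_create 64 4096
  ./spdk/scripts/rpc.py -s $SOCK bdev_malloc_create 64 4096
  # export them as kernel nbd devices for the dd verification
  ./spdk/scripts/rpc.py -s $SOCK nbd_start_disk Malloc0 /dev/nbd0
  ./spdk/scripts/rpc.py -s $SOCK nbd_start_disk Malloc1 /dev/nbd1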
00:05:03.832 [2024-10-09 10:45:23.772440] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1604207 ] 00:05:04.092 [2024-10-09 10:45:23.905995] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:04.092 [2024-10-09 10:45:23.939378] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:04.092 [2024-10-09 10:45:23.964384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:04.092 [2024-10-09 10:45:23.964388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.661 10:45:24 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:04.662 10:45:24 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:04.662 10:45:24 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:04.921 Malloc0 00:05:04.921 10:45:24 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:05.181 Malloc1 00:05:05.182 10:45:24 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:05.182 10:45:24 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.182 10:45:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:05.182 10:45:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:05.182 10:45:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.182 10:45:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:05.182 10:45:24 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:05.182 10:45:24 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.182 10:45:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:05.182 10:45:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:05.182 10:45:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.182 10:45:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:05.182 10:45:24 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:05.182 10:45:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:05.182 10:45:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:05.182 10:45:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:05.182 /dev/nbd0 00:05:05.182 10:45:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:05.182 10:45:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:05.182 10:45:25 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:05.182 10:45:25 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:05.182 10:45:25 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 
00:05:05.182 10:45:25 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:05.182 10:45:25 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:05.182 10:45:25 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:05.182 10:45:25 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:05.182 10:45:25 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:05.182 10:45:25 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:05.182 1+0 records in 00:05:05.182 1+0 records out 00:05:05.182 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000285378 s, 14.4 MB/s 00:05:05.182 10:45:25 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:05.442 10:45:25 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:05.442 10:45:25 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:05.442 10:45:25 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:05.442 10:45:25 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:05.442 10:45:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:05.442 10:45:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:05.442 10:45:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:05.442 /dev/nbd1 00:05:05.442 10:45:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:05.442 10:45:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:05.442 10:45:25 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:05.442 10:45:25 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:05.442 10:45:25 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:05.442 10:45:25 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:05.442 10:45:25 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:05.442 10:45:25 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:05.442 10:45:25 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:05.442 10:45:25 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:05.442 10:45:25 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:05.442 1+0 records in 00:05:05.442 1+0 records out 00:05:05.442 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000310267 s, 13.2 MB/s 00:05:05.442 10:45:25 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:05.442 10:45:25 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:05.442 10:45:25 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:05.442 10:45:25 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:05.442 10:45:25 event.app_repeat -- common/autotest_common.sh@889 -- # 
return 0 00:05:05.442 10:45:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:05.442 10:45:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:05.442 10:45:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:05.442 10:45:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.442 10:45:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:05.703 10:45:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:05.703 { 00:05:05.703 "nbd_device": "/dev/nbd0", 00:05:05.703 "bdev_name": "Malloc0" 00:05:05.703 }, 00:05:05.703 { 00:05:05.703 "nbd_device": "/dev/nbd1", 00:05:05.703 "bdev_name": "Malloc1" 00:05:05.703 } 00:05:05.703 ]' 00:05:05.703 10:45:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:05.703 { 00:05:05.703 "nbd_device": "/dev/nbd0", 00:05:05.703 "bdev_name": "Malloc0" 00:05:05.703 }, 00:05:05.703 { 00:05:05.703 "nbd_device": "/dev/nbd1", 00:05:05.703 "bdev_name": "Malloc1" 00:05:05.703 } 00:05:05.703 ]' 00:05:05.703 10:45:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:05.703 10:45:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:05.703 /dev/nbd1' 00:05:05.703 10:45:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:05.703 /dev/nbd1' 00:05:05.703 10:45:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:05.703 10:45:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:05.703 10:45:25 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:05.703 10:45:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:05.703 10:45:25 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:05.703 10:45:25 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:05.703 10:45:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.703 10:45:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:05.703 10:45:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:05.703 10:45:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:05.703 10:45:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:05.703 10:45:25 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:05.703 256+0 records in 00:05:05.703 256+0 records out 00:05:05.703 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0115898 s, 90.5 MB/s 00:05:05.703 10:45:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:05.703 10:45:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:05.703 256+0 records in 00:05:05.703 256+0 records out 00:05:05.703 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0163966 s, 64.0 MB/s 00:05:05.703 10:45:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:05.703 10:45:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 
count=256 oflag=direct 00:05:05.976 256+0 records in 00:05:05.976 256+0 records out 00:05:05.976 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0178078 s, 58.9 MB/s 00:05:05.976 10:45:25 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:05.976 10:45:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.976 10:45:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:05.976 10:45:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:05.976 10:45:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:05.976 10:45:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:05.976 10:45:25 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:05.976 10:45:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:05.976 10:45:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:05.976 10:45:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:05.976 10:45:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:05.977 10:45:25 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:05.977 10:45:25 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:05.977 10:45:25 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.977 10:45:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.977 10:45:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:05.977 10:45:25 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:05.977 10:45:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:05.977 10:45:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:05.977 10:45:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:05.977 10:45:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:05.977 10:45:25 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:05.977 10:45:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:05.977 10:45:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:05.977 10:45:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:05.977 10:45:25 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:05.977 10:45:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:05.977 10:45:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:05.977 10:45:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:06.241 10:45:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:06.241 10:45:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:06.241 10:45:26 event.app_repeat -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd1 00:05:06.241 10:45:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:06.241 10:45:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:06.241 10:45:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:06.241 10:45:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:06.241 10:45:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:06.241 10:45:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:06.241 10:45:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.241 10:45:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:06.501 10:45:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:06.501 10:45:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:06.501 10:45:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:06.501 10:45:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:06.501 10:45:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:06.501 10:45:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:06.501 10:45:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:06.501 10:45:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:06.501 10:45:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:06.501 10:45:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:06.501 10:45:26 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:06.501 10:45:26 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:06.501 10:45:26 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:06.762 10:45:26 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:06.762 [2024-10-09 10:45:26.607013] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:06.762 [2024-10-09 10:45:26.624931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:06.762 [2024-10-09 10:45:26.624934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.762 [2024-10-09 10:45:26.656675] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:06.762 [2024-10-09 10:45:26.656713] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:10.057 10:45:29 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:10.057 10:45:29 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:10.057 spdk_app_start Round 1 00:05:10.057 10:45:29 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1604207 /var/tmp/spdk-nbd.sock 00:05:10.057 10:45:29 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1604207 ']' 00:05:10.057 10:45:29 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:10.057 10:45:29 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:10.057 10:45:29 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:10.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:10.057 10:45:29 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:10.057 10:45:29 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:10.057 10:45:29 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:10.057 10:45:29 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:10.058 10:45:29 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:10.058 Malloc0 00:05:10.058 10:45:29 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:10.058 Malloc1 00:05:10.058 10:45:30 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:10.058 10:45:30 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.058 10:45:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:10.058 10:45:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:10.058 10:45:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.058 10:45:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:10.058 10:45:30 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:10.058 10:45:30 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.058 10:45:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:10.058 10:45:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:10.058 10:45:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.058 10:45:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:10.058 10:45:30 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:10.058 10:45:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:10.058 10:45:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:10.058 10:45:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:10.319 /dev/nbd0 00:05:10.319 10:45:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:10.319 10:45:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:10.319 10:45:30 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:10.319 10:45:30 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:10.319 10:45:30 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:10.319 10:45:30 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:10.319 10:45:30 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:10.319 10:45:30 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:10.319 10:45:30 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:10.319 10:45:30 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:10.319 10:45:30 event.app_repeat -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:10.319 1+0 records in 00:05:10.319 1+0 records out 00:05:10.319 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000248031 s, 16.5 MB/s 00:05:10.319 10:45:30 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:10.319 10:45:30 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:10.319 10:45:30 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:10.319 10:45:30 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:10.319 10:45:30 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:10.319 10:45:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:10.319 10:45:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:10.319 10:45:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:10.581 /dev/nbd1 00:05:10.581 10:45:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:10.581 10:45:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:10.581 10:45:30 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:10.581 10:45:30 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:10.581 10:45:30 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:10.581 10:45:30 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:10.581 10:45:30 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:10.581 10:45:30 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:10.581 10:45:30 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:10.581 10:45:30 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:10.581 10:45:30 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:10.581 1+0 records in 00:05:10.581 1+0 records out 00:05:10.581 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000271507 s, 15.1 MB/s 00:05:10.581 10:45:30 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:10.581 10:45:30 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:10.581 10:45:30 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:10.581 10:45:30 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:10.581 10:45:30 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:10.581 10:45:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:10.581 10:45:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:10.581 10:45:30 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:10.581 10:45:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.581 10:45:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:10.842 10:45:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:10.842 { 00:05:10.842 "nbd_device": "/dev/nbd0", 00:05:10.842 "bdev_name": "Malloc0" 00:05:10.842 }, 00:05:10.842 { 00:05:10.842 "nbd_device": "/dev/nbd1", 00:05:10.842 "bdev_name": "Malloc1" 00:05:10.842 } 00:05:10.842 ]' 00:05:10.842 10:45:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:10.842 { 00:05:10.842 "nbd_device": "/dev/nbd0", 00:05:10.842 "bdev_name": "Malloc0" 00:05:10.842 }, 00:05:10.842 { 00:05:10.842 "nbd_device": "/dev/nbd1", 00:05:10.842 "bdev_name": "Malloc1" 00:05:10.842 } 00:05:10.842 ]' 00:05:10.842 10:45:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:10.842 10:45:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:10.842 /dev/nbd1' 00:05:10.842 10:45:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:10.842 10:45:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:10.842 /dev/nbd1' 00:05:10.842 10:45:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:10.842 10:45:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:10.842 10:45:30 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:10.842 10:45:30 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:10.842 10:45:30 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:10.842 10:45:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.842 10:45:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:10.842 10:45:30 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:10.842 10:45:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:10.842 10:45:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:10.842 10:45:30 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:10.842 256+0 records in 00:05:10.842 256+0 records out 00:05:10.842 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0121632 s, 86.2 MB/s 00:05:10.842 10:45:30 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:10.842 10:45:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:10.842 256+0 records in 00:05:10.842 256+0 records out 00:05:10.842 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.016457 s, 63.7 MB/s 00:05:10.842 10:45:30 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:10.842 10:45:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:10.842 256+0 records in 00:05:10.842 256+0 records out 00:05:10.842 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0170188 s, 61.6 MB/s 00:05:10.842 10:45:30 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:10.842 10:45:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.842 10:45:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # 
local nbd_list 00:05:10.842 10:45:30 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:10.842 10:45:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:10.842 10:45:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:10.842 10:45:30 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:10.842 10:45:30 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:10.842 10:45:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:10.842 10:45:30 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:10.842 10:45:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:10.842 10:45:30 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:10.842 10:45:30 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:10.842 10:45:30 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.842 10:45:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.842 10:45:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:10.842 10:45:30 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:10.842 10:45:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:10.842 10:45:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:11.103 10:45:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:11.103 10:45:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:11.103 10:45:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:11.103 10:45:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:11.103 10:45:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:11.103 10:45:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:11.103 10:45:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:11.103 10:45:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:11.103 10:45:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:11.103 10:45:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:11.364 10:45:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:11.364 10:45:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:11.364 10:45:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:11.364 10:45:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:11.364 10:45:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:11.364 10:45:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:11.364 10:45:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:11.364 10:45:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 
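The 256x4 KiB dd transfers and cmp calls above are the heart of the data check: one random 1 MiB reference file is pushed through every export with O_DIRECT, then each device is compared byte-for-byte against the same file. Condensed (the temp-file path is illustrative; the commands mirror the trace):

    ref=/tmp/nbdrandtest
    dd if=/dev/urandom of=$ref bs=4096 count=256             # 1 MiB reference
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if=$ref of=$nbd bs=4096 count=256 oflag=direct    # write through the export
    done
    for nbd in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M $ref $nbd                               # fails on the first differing byte
    done
    rm $ref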
00:05:11.364 10:45:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:11.364 10:45:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.364 10:45:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:11.364 10:45:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:11.364 10:45:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:11.364 10:45:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:11.638 10:45:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:11.638 10:45:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:11.638 10:45:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:11.638 10:45:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:11.638 10:45:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:11.638 10:45:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:11.638 10:45:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:11.638 10:45:31 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:11.638 10:45:31 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:11.638 10:45:31 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:11.638 10:45:31 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:11.897 [2024-10-09 10:45:31.683079] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:11.897 [2024-10-09 10:45:31.701069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:11.897 [2024-10-09 10:45:31.701072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.897 [2024-10-09 10:45:31.733416] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:11.897 [2024-10-09 10:45:31.733454] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:15.192 10:45:34 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:15.192 10:45:34 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:15.192 spdk_app_start Round 2 00:05:15.192 10:45:34 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1604207 /var/tmp/spdk-nbd.sock 00:05:15.192 10:45:34 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1604207 ']' 00:05:15.192 10:45:34 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:15.192 10:45:34 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:15.192 10:45:34 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:15.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
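Each round then ends identically, as just traced: the harness asks the app to shut itself down over its own socket, sleeps three seconds while the app stops the current iteration and reinitializes, and loops into the next round. A hedged sketch of that outer loop (round body elided; $rpc as in the earlier sketch):

    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        # ... create bdevs, export them over NBD, run the write/verify pass ...
        $rpc spdk_kill_instance SIGTERM   # app logs "Shutdown signal received" and reinitializes
        sleep 3
    done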
00:05:15.192 10:45:34 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:15.192 10:45:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:15.192 10:45:34 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:15.192 10:45:34 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:15.192 10:45:34 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:15.192 Malloc0 00:05:15.192 10:45:34 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:15.192 Malloc1 00:05:15.192 10:45:35 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:15.192 10:45:35 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.192 10:45:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:15.192 10:45:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:15.192 10:45:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.192 10:45:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:15.192 10:45:35 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:15.192 10:45:35 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.193 10:45:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:15.193 10:45:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:15.193 10:45:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.193 10:45:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:15.193 10:45:35 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:15.193 10:45:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:15.193 10:45:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:15.193 10:45:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:15.453 /dev/nbd0 00:05:15.453 10:45:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:15.453 10:45:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:15.453 10:45:35 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:15.453 10:45:35 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:15.453 10:45:35 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:15.453 10:45:35 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:15.453 10:45:35 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:15.453 10:45:35 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:15.453 10:45:35 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:15.453 10:45:35 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:15.453 10:45:35 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:15.453 1+0 records in 00:05:15.453 1+0 records out 00:05:15.453 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000293608 s, 14.0 MB/s 00:05:15.453 10:45:35 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:15.453 10:45:35 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:15.453 10:45:35 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:15.453 10:45:35 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:15.453 10:45:35 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:15.453 10:45:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:15.453 10:45:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:15.453 10:45:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:15.713 /dev/nbd1 00:05:15.713 10:45:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:15.713 10:45:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:15.713 10:45:35 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:15.713 10:45:35 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:15.713 10:45:35 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:15.713 10:45:35 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:15.713 10:45:35 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:15.713 10:45:35 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:15.713 10:45:35 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:15.713 10:45:35 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:15.713 10:45:35 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:15.713 1+0 records in 00:05:15.713 1+0 records out 00:05:15.713 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000245645 s, 16.7 MB/s 00:05:15.713 10:45:35 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:15.713 10:45:35 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:15.713 10:45:35 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:15.713 10:45:35 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:15.713 10:45:35 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:15.713 10:45:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:15.713 10:45:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:15.713 10:45:35 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:15.713 10:45:35 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.713 10:45:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:15.713 10:45:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:15.713 { 00:05:15.713 "nbd_device": "/dev/nbd0", 00:05:15.713 "bdev_name": "Malloc0" 00:05:15.713 }, 00:05:15.713 { 00:05:15.713 "nbd_device": "/dev/nbd1", 00:05:15.713 "bdev_name": "Malloc1" 00:05:15.713 } 00:05:15.713 ]' 00:05:15.974 10:45:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:15.974 { 00:05:15.974 "nbd_device": "/dev/nbd0", 00:05:15.974 "bdev_name": "Malloc0" 00:05:15.974 }, 00:05:15.974 { 00:05:15.974 "nbd_device": "/dev/nbd1", 00:05:15.974 "bdev_name": "Malloc1" 00:05:15.974 } 00:05:15.974 ]' 00:05:15.974 10:45:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:15.974 10:45:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:15.974 /dev/nbd1' 00:05:15.974 10:45:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:15.974 /dev/nbd1' 00:05:15.974 10:45:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:15.974 10:45:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:15.974 10:45:35 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:15.974 10:45:35 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:15.974 10:45:35 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:15.974 10:45:35 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:15.974 10:45:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.974 10:45:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:15.974 10:45:35 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:15.974 10:45:35 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:15.974 10:45:35 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:15.974 10:45:35 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:15.974 256+0 records in 00:05:15.974 256+0 records out 00:05:15.974 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.01279 s, 82.0 MB/s 00:05:15.974 10:45:35 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:15.974 10:45:35 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:15.974 256+0 records in 00:05:15.974 256+0 records out 00:05:15.974 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0165928 s, 63.2 MB/s 00:05:15.974 10:45:35 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:15.974 10:45:35 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:15.974 256+0 records in 00:05:15.974 256+0 records out 00:05:15.974 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0177849 s, 59.0 MB/s 00:05:15.974 10:45:35 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:15.974 10:45:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.974 10:45:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:15.974 10:45:35 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:15.974 10:45:35 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:15.974 10:45:35 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:15.974 10:45:35 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:15.974 10:45:35 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:15.974 10:45:35 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:15.974 10:45:35 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:15.974 10:45:35 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:15.974 10:45:35 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:15.974 10:45:35 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:15.974 10:45:35 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.974 10:45:35 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.974 10:45:35 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:15.974 10:45:35 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:15.974 10:45:35 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:15.974 10:45:35 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:16.234 10:45:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:16.234 10:45:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:16.234 10:45:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:16.234 10:45:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:16.234 10:45:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:16.234 10:45:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:16.234 10:45:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:16.234 10:45:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:16.234 10:45:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:16.234 10:45:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:16.234 10:45:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:16.234 10:45:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:16.234 10:45:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:16.234 10:45:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:16.234 10:45:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:16.234 10:45:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:16.234 10:45:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:16.234 10:45:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:16.234 10:45:36 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:16.234 10:45:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:16.234 10:45:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:16.495 10:45:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:16.495 10:45:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:16.495 10:45:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:16.495 10:45:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:16.495 10:45:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:16.495 10:45:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:16.495 10:45:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:16.495 10:45:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:16.495 10:45:36 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:16.495 10:45:36 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:16.495 10:45:36 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:16.495 10:45:36 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:16.495 10:45:36 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:16.761 10:45:36 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:16.761 [2024-10-09 10:45:36.722958] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:16.761 [2024-10-09 10:45:36.740586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:16.761 [2024-10-09 10:45:36.740609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.020 [2024-10-09 10:45:36.772989] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:17.020 [2024-10-09 10:45:36.773025] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:20.318 10:45:39 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1604207 /var/tmp/spdk-nbd.sock 00:05:20.318 10:45:39 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1604207 ']' 00:05:20.318 10:45:39 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:20.318 10:45:39 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:20.318 10:45:39 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:20.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
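Both wait helpers that keep reappearing in this trace poll /proc/partitions, in opposite directions; the readiness one additionally pushes a single O_DIRECT block through the device to prove it actually services I/O. Hypothetical re-creations with the same 20-try bound (the retry sleep is a guess; the trace only shows the bound):

    # wait until the kernel lists the device, then read one block through it
    waitfornbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        grep -q -w "$nbd_name" /proc/partitions || return 1
        dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
        local size; size=$(stat -c %s /tmp/nbdtest); rm -f /tmp/nbdtest
        [ "$size" != 0 ]                  # a zero-byte read means the device is dead
    }

    # the inverse, run after nbd_stop_disk: wait until the device is gone
    waitfornbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions || return 0
            sleep 0.1
        done
        return 1
    }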
00:05:20.318 10:45:39 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:20.318 10:45:39 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:20.318 10:45:39 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:20.318 10:45:39 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:20.318 10:45:39 event.app_repeat -- event/event.sh@39 -- # killprocess 1604207 00:05:20.318 10:45:39 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 1604207 ']' 00:05:20.318 10:45:39 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 1604207 00:05:20.318 10:45:39 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:05:20.318 10:45:39 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:20.318 10:45:39 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1604207 00:05:20.318 10:45:39 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:20.318 10:45:39 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:20.318 10:45:39 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1604207' 00:05:20.318 killing process with pid 1604207 00:05:20.318 10:45:39 event.app_repeat -- common/autotest_common.sh@969 -- # kill 1604207 00:05:20.318 10:45:39 event.app_repeat -- common/autotest_common.sh@974 -- # wait 1604207 00:05:20.318 spdk_app_start is called in Round 0. 00:05:20.318 Shutdown signal received, stop current app iteration 00:05:20.318 Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 reinitialization... 00:05:20.318 spdk_app_start is called in Round 1. 00:05:20.318 Shutdown signal received, stop current app iteration 00:05:20.319 Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 reinitialization... 00:05:20.319 spdk_app_start is called in Round 2. 00:05:20.319 Shutdown signal received, stop current app iteration 00:05:20.319 Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 reinitialization... 00:05:20.319 spdk_app_start is called in Round 3. 
00:05:20.319 Shutdown signal received, stop current app iteration 00:05:20.319 10:45:39 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:20.319 10:45:39 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:20.319 00:05:20.319 real 0m16.211s 00:05:20.319 user 0m35.293s 00:05:20.319 sys 0m2.254s 00:05:20.319 10:45:39 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:20.319 10:45:39 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:20.319 ************************************ 00:05:20.319 END TEST app_repeat 00:05:20.319 ************************************ 00:05:20.319 10:45:39 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:20.319 10:45:39 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:20.319 10:45:39 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:20.319 10:45:39 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:20.319 10:45:39 event -- common/autotest_common.sh@10 -- # set +x 00:05:20.319 ************************************ 00:05:20.319 START TEST cpu_locks 00:05:20.319 ************************************ 00:05:20.319 10:45:40 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:20.319 * Looking for test storage... 00:05:20.319 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:20.319 10:45:40 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:20.319 10:45:40 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:05:20.319 10:45:40 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:20.319 10:45:40 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:20.319 10:45:40 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:20.319 10:45:40 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:20.319 10:45:40 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:20.319 10:45:40 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:20.319 10:45:40 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:20.319 10:45:40 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:20.319 10:45:40 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:20.319 10:45:40 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:20.319 10:45:40 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:20.319 10:45:40 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:20.319 10:45:40 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:20.319 10:45:40 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:20.319 10:45:40 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:20.319 10:45:40 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:20.319 10:45:40 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:20.319 10:45:40 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:20.319 10:45:40 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:20.319 10:45:40 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:20.319 10:45:40 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:20.319 10:45:40 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:20.319 10:45:40 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:20.319 10:45:40 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:20.319 10:45:40 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:20.319 10:45:40 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:20.319 10:45:40 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:20.319 10:45:40 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:20.319 10:45:40 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:20.319 10:45:40 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:20.319 10:45:40 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:20.319 10:45:40 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:20.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.319 --rc genhtml_branch_coverage=1 00:05:20.319 --rc genhtml_function_coverage=1 00:05:20.319 --rc genhtml_legend=1 00:05:20.319 --rc geninfo_all_blocks=1 00:05:20.319 --rc geninfo_unexecuted_blocks=1 00:05:20.319 00:05:20.319 ' 00:05:20.319 10:45:40 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:20.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.319 --rc genhtml_branch_coverage=1 00:05:20.319 --rc genhtml_function_coverage=1 00:05:20.319 --rc genhtml_legend=1 00:05:20.319 --rc geninfo_all_blocks=1 00:05:20.319 --rc geninfo_unexecuted_blocks=1 00:05:20.319 00:05:20.319 ' 00:05:20.319 10:45:40 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:20.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.319 --rc genhtml_branch_coverage=1 00:05:20.319 --rc genhtml_function_coverage=1 00:05:20.319 --rc genhtml_legend=1 00:05:20.319 --rc geninfo_all_blocks=1 00:05:20.319 --rc geninfo_unexecuted_blocks=1 00:05:20.319 00:05:20.319 ' 00:05:20.319 10:45:40 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:20.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.319 --rc genhtml_branch_coverage=1 00:05:20.319 --rc genhtml_function_coverage=1 00:05:20.319 --rc genhtml_legend=1 00:05:20.319 --rc geninfo_all_blocks=1 00:05:20.319 --rc geninfo_unexecuted_blocks=1 00:05:20.319 00:05:20.319 ' 00:05:20.319 10:45:40 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:20.319 10:45:40 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:20.319 10:45:40 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:20.319 10:45:40 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:20.319 10:45:40 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:20.319 10:45:40 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:20.319 10:45:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:20.319 ************************************ 
00:05:20.319 START TEST default_locks 00:05:20.319 ************************************ 00:05:20.319 10:45:40 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:05:20.319 10:45:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1607716 00:05:20.319 10:45:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1607716 00:05:20.319 10:45:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:20.319 10:45:40 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 1607716 ']' 00:05:20.319 10:45:40 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:20.319 10:45:40 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:20.319 10:45:40 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:20.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:20.319 10:45:40 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:20.319 10:45:40 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:20.581 [2024-10-09 10:45:40.320776] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:05:20.581 [2024-10-09 10:45:40.320841] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1607716 ] 00:05:20.581 [2024-10-09 10:45:40.454190] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
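The lcov version gate traced a little earlier ("lt 1.15 2" via cmp_versions) boils down to an element-wise numeric compare of dot-separated version fields. A condensed sketch of that logic, not the literal scripts/common.sh code:

lt() {  # true when $1 sorts before $2, comparing dot-separated numeric fields
    local IFS=.
    local -a a=($1) b=($2)
    local i
    for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
        (( 10#${a[i]:-0} < 10#${b[i]:-0} )) && return 0
        (( 10#${a[i]:-0} > 10#${b[i]:-0} )) && return 1
    done
    return 1   # versions equal: not less-than
}
lt 1.15 2 && echo "lcov < 2: keep the legacy --rc lcov_* option spellings"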
00:05:20.581 [2024-10-09 10:45:40.485625] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.581 [2024-10-09 10:45:40.503770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.152 10:45:41 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:21.152 10:45:41 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:05:21.152 10:45:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1607716 00:05:21.152 10:45:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1607716 00:05:21.152 10:45:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:21.788 lslocks: write error 00:05:21.788 10:45:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1607716 00:05:21.788 10:45:41 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 1607716 ']' 00:05:21.788 10:45:41 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 1607716 00:05:21.788 10:45:41 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:05:21.788 10:45:41 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:21.788 10:45:41 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1607716 00:05:21.788 10:45:41 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:21.788 10:45:41 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:21.788 10:45:41 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1607716' 00:05:21.788 killing process with pid 1607716 00:05:21.788 10:45:41 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 1607716 00:05:21.788 10:45:41 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 1607716 00:05:21.788 10:45:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1607716 00:05:21.788 10:45:41 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:05:21.788 10:45:41 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1607716 00:05:21.788 10:45:41 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:21.788 10:45:41 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:21.788 10:45:41 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:21.788 10:45:41 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:21.788 10:45:41 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 1607716 00:05:21.788 10:45:41 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 1607716 ']' 00:05:21.788 10:45:41 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:21.788 10:45:41 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:21.788 10:45:41 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:21.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
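locks_exist in the run above is just lslocks piped into grep; the stray "lslocks: write error" is lslocks hitting the pipe that grep -q closed after its first match. A re-creation with this run's pid (sketch; the target must still be running):

pid=1607716                                   # spdk_tgt launched with -m 0x1 above
if lslocks -p "$pid" | grep -q spdk_cpu_lock; then
    echo "pid $pid holds spdk_cpu_lock file(s) for its core mask"
fi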
00:05:21.788 10:45:41 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:21.788 10:45:41 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:21.788 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (1607716) - No such process 00:05:21.788 ERROR: process (pid: 1607716) is no longer running 00:05:21.788 10:45:41 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:21.788 10:45:41 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:05:21.788 10:45:41 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:05:21.788 10:45:41 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:21.788 10:45:41 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:21.788 10:45:41 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:21.788 10:45:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:21.788 10:45:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:21.788 10:45:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:21.788 10:45:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:21.788 00:05:21.788 real 0m1.489s 00:05:21.788 user 0m1.523s 00:05:21.788 sys 0m0.500s 00:05:21.788 10:45:41 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:21.788 10:45:41 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:21.788 ************************************ 00:05:21.788 END TEST default_locks 00:05:21.788 ************************************ 00:05:22.065 10:45:41 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:22.065 10:45:41 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:22.065 10:45:41 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:22.065 10:45:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:22.065 ************************************ 00:05:22.065 START TEST default_locks_via_rpc 00:05:22.065 ************************************ 00:05:22.065 10:45:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:05:22.065 10:45:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1608090 00:05:22.065 10:45:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1608090 00:05:22.065 10:45:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:22.065 10:45:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1608090 ']' 00:05:22.065 10:45:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.065 10:45:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:22.065 10:45:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:22.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
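The NOT ... es=1 sequence above is the expected-failure wrapper: waitforlisten has to fail once pid 1607716 is dead, and the wrapper inverts that failure into a pass. Reduced to its core (the real wrapper in autotest_common.sh also validates its argument before running it):

NOT() { ! "$@"; }                  # succeed exactly when the wrapped command fails
NOT kill -0 1607716 2>/dev/null && echo "pid 1607716 is gone, as END TEST default_locks expects"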
00:05:22.065 10:45:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:22.065 10:45:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.065 [2024-10-09 10:45:41.881839] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:05:22.065 [2024-10-09 10:45:41.881894] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1608090 ] 00:05:22.065 [2024-10-09 10:45:42.013216] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:22.065 [2024-10-09 10:45:42.045268] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.326 [2024-10-09 10:45:42.068722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.898 10:45:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:22.898 10:45:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:22.898 10:45:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:22.898 10:45:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.898 10:45:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.898 10:45:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.898 10:45:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:22.898 10:45:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:22.898 10:45:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:22.898 10:45:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:22.898 10:45:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:22.898 10:45:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.898 10:45:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.898 10:45:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.898 10:45:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1608090 00:05:22.898 10:45:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1608090 00:05:22.898 10:45:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:23.470 10:45:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1608090 00:05:23.470 10:45:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 1608090 ']' 00:05:23.470 10:45:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 1608090 00:05:23.470 10:45:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:05:23.470 10:45:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:23.470 10:45:43 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1608090 00:05:23.470 10:45:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:23.470 10:45:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:23.470 10:45:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1608090' 00:05:23.470 killing process with pid 1608090 00:05:23.470 10:45:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 1608090 00:05:23.470 10:45:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 1608090 00:05:23.470 00:05:23.470 real 0m1.611s 00:05:23.470 user 0m1.646s 00:05:23.470 sys 0m0.535s 00:05:23.470 10:45:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:23.470 10:45:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.470 ************************************ 00:05:23.470 END TEST default_locks_via_rpc 00:05:23.470 ************************************ 00:05:23.731 10:45:43 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:23.731 10:45:43 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:23.731 10:45:43 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:23.731 10:45:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:23.731 ************************************ 00:05:23.731 START TEST non_locking_app_on_locked_coremask 00:05:23.731 ************************************ 00:05:23.731 10:45:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:05:23.731 10:45:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1608460 00:05:23.731 10:45:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1608460 /var/tmp/spdk.sock 00:05:23.731 10:45:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:23.731 10:45:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1608460 ']' 00:05:23.731 10:45:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.731 10:45:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:23.731 10:45:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:23.731 10:45:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:23.731 10:45:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:23.731 [2024-10-09 10:45:43.583028] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 
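default_locks_via_rpc, just completed above, releases and re-takes the same lock files over JSON-RPC instead of restarting the target. A sketch of the two calls with scripts/rpc.py (path assumed relative to an SPDK checkout; rpc_cmd in the log drives the same RPCs):

sock=/var/tmp/spdk.sock
./scripts/rpc.py -s "$sock" framework_disable_cpumask_locks   # drops /var/tmp/spdk_cpu_lock_*
./scripts/rpc.py -s "$sock" framework_enable_cpumask_locks    # re-claims them for the held mask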
00:05:23.731 [2024-10-09 10:45:43.583082] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1608460 ] 00:05:23.731 [2024-10-09 10:45:43.715100] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:23.992 [2024-10-09 10:45:43.747215] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.992 [2024-10-09 10:45:43.770570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.563 10:45:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:24.563 10:45:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:24.563 10:45:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1608730 00:05:24.563 10:45:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1608730 /var/tmp/spdk2.sock 00:05:24.563 10:45:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:24.563 10:45:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1608730 ']' 00:05:24.563 10:45:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:24.563 10:45:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:24.563 10:45:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:24.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:24.563 10:45:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:24.563 10:45:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:24.563 [2024-10-09 10:45:44.411026] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:05:24.563 [2024-10-09 10:45:44.411083] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1608730 ] 00:05:24.563 [2024-10-09 10:45:44.540705] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:24.824 [2024-10-09 10:45:44.599288] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
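non_locking_app_on_locked_coremask runs two targets on the same core mask; the second opts out of lock files, so both can share core 0. The two launches, condensed from the command lines above:

./build/bin/spdk_tgt -m 0x1 &                                                 # claims /var/tmp/spdk_cpu_lock_000
./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &  # same mask, no claim attempted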
00:05:24.824 [2024-10-09 10:45:44.599311] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.824 [2024-10-09 10:45:44.637526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.395 10:45:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:25.395 10:45:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:25.395 10:45:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1608460 00:05:25.395 10:45:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1608460 00:05:25.395 10:45:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:25.966 lslocks: write error 00:05:25.966 10:45:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1608460 00:05:25.966 10:45:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1608460 ']' 00:05:25.966 10:45:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 1608460 00:05:25.967 10:45:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:25.967 10:45:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:25.967 10:45:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1608460 00:05:25.967 10:45:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:25.967 10:45:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:25.967 10:45:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1608460' 00:05:25.967 killing process with pid 1608460 00:05:25.967 10:45:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 1608460 00:05:25.967 10:45:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 1608460 00:05:26.536 10:45:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1608730 00:05:26.536 10:45:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1608730 ']' 00:05:26.536 10:45:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 1608730 00:05:26.536 10:45:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:26.536 10:45:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:26.536 10:45:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1608730 00:05:26.536 10:45:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:26.536 10:45:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:26.536 10:45:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1608730' 00:05:26.536 
killing process with pid 1608730 00:05:26.536 10:45:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 1608730 00:05:26.536 10:45:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 1608730 00:05:26.536 00:05:26.536 real 0m3.009s 00:05:26.536 user 0m3.214s 00:05:26.536 sys 0m0.941s 00:05:26.536 10:45:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:26.536 10:45:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:26.536 ************************************ 00:05:26.536 END TEST non_locking_app_on_locked_coremask 00:05:26.536 ************************************ 00:05:26.798 10:45:46 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:26.798 10:45:46 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:26.798 10:45:46 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:26.798 10:45:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:26.798 ************************************ 00:05:26.798 START TEST locking_app_on_unlocked_coremask 00:05:26.798 ************************************ 00:05:26.798 10:45:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:05:26.798 10:45:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1609163 00:05:26.798 10:45:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1609163 /var/tmp/spdk.sock 00:05:26.798 10:45:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:26.798 10:45:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1609163 ']' 00:05:26.798 10:45:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.798 10:45:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:26.798 10:45:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:26.798 10:45:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:26.798 10:45:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:26.798 [2024-10-09 10:45:46.657045] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:05:26.798 [2024-10-09 10:45:46.657098] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1609163 ] 00:05:26.798 [2024-10-09 10:45:46.789010] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:05:27.059 [2024-10-09 10:45:46.821409] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:27.059 [2024-10-09 10:45:46.821438] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.059 [2024-10-09 10:45:46.844615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.632 10:45:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:27.632 10:45:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:27.632 10:45:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:27.632 10:45:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1609290 00:05:27.632 10:45:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1609290 /var/tmp/spdk2.sock 00:05:27.632 10:45:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1609290 ']' 00:05:27.632 10:45:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:27.632 10:45:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:27.632 10:45:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:27.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:27.632 10:45:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:27.632 10:45:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:27.632 [2024-10-09 10:45:47.479115] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:05:27.632 [2024-10-09 10:45:47.479167] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1609290 ] 00:05:27.632 [2024-10-09 10:45:47.608497] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
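locking_app_on_unlocked_coremask is the mirror image of the previous test: the first target starts with locks disabled, leaving the second, locking, target free to claim core 0 even though the mask is already in use. Condensed from the launches above:

./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &     # pid 1609163 in this run: holds no lock files
./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &      # pid 1609290: claims core 0 unopposed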
00:05:27.894 [2024-10-09 10:45:47.671753] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.894 [2024-10-09 10:45:47.706072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.465 10:45:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:28.465 10:45:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:28.465 10:45:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1609290 00:05:28.465 10:45:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1609290 00:05:28.465 10:45:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:29.036 lslocks: write error 00:05:29.036 10:45:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1609163 00:05:29.036 10:45:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1609163 ']' 00:05:29.036 10:45:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 1609163 00:05:29.036 10:45:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:29.036 10:45:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:29.036 10:45:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1609163 00:05:29.036 10:45:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:29.036 10:45:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:29.036 10:45:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1609163' 00:05:29.036 killing process with pid 1609163 00:05:29.036 10:45:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 1609163 00:05:29.036 10:45:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 1609163 00:05:29.298 10:45:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1609290 00:05:29.298 10:45:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1609290 ']' 00:05:29.298 10:45:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 1609290 00:05:29.298 10:45:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:29.298 10:45:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:29.298 10:45:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1609290 00:05:29.559 10:45:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:29.559 10:45:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:29.559 10:45:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1609290' 00:05:29.559 killing process with pid 1609290 00:05:29.559 
10:45:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 1609290 00:05:29.559 10:45:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 1609290 00:05:29.559 00:05:29.559 real 0m2.941s 00:05:29.559 user 0m3.161s 00:05:29.559 sys 0m0.877s 00:05:29.559 10:45:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:29.559 10:45:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:29.559 ************************************ 00:05:29.559 END TEST locking_app_on_unlocked_coremask 00:05:29.559 ************************************ 00:05:29.820 10:45:49 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:29.820 10:45:49 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:29.820 10:45:49 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:29.820 10:45:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:29.820 ************************************ 00:05:29.820 START TEST locking_app_on_locked_coremask 00:05:29.820 ************************************ 00:05:29.820 10:45:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:05:29.820 10:45:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1609870 00:05:29.820 10:45:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1609870 /var/tmp/spdk.sock 00:05:29.820 10:45:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1609870 ']' 00:05:29.820 10:45:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:29.820 10:45:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.820 10:45:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:29.820 10:45:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:29.820 10:45:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:29.820 10:45:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:29.820 [2024-10-09 10:45:49.677042] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:05:29.820 [2024-10-09 10:45:49.677094] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1609870 ] 00:05:29.820 [2024-10-09 10:45:49.808619] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:05:30.081 [2024-10-09 10:45:49.841008] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.081 [2024-10-09 10:45:49.863158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.652 10:45:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:30.652 10:45:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:30.652 10:45:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1609888 00:05:30.652 10:45:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1609888 /var/tmp/spdk2.sock 00:05:30.652 10:45:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:30.652 10:45:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:30.652 10:45:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1609888 /var/tmp/spdk2.sock 00:05:30.652 10:45:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:30.652 10:45:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:30.652 10:45:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:30.652 10:45:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:30.652 10:45:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 1609888 /var/tmp/spdk2.sock 00:05:30.652 10:45:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1609888 ']' 00:05:30.652 10:45:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:30.652 10:45:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:30.652 10:45:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:30.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:30.652 10:45:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:30.652 10:45:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:30.652 [2024-10-09 10:45:50.512387] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:05:30.652 [2024-10-09 10:45:50.512440] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1609888 ] 00:05:30.652 [2024-10-09 10:45:50.644886] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:05:30.913 [2024-10-09 10:45:50.703928] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1609870 has claimed it. 00:05:30.913 [2024-10-09 10:45:50.703964] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:31.174 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (1609888) - No such process 00:05:31.174 ERROR: process (pid: 1609888) is no longer running 00:05:31.174 10:45:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:31.174 10:45:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:05:31.174 10:45:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:31.174 10:45:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:31.174 10:45:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:31.174 10:45:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:31.174 10:45:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1609870 00:05:31.174 10:45:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1609870 00:05:31.174 10:45:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:31.745 lslocks: write error 00:05:31.745 10:45:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1609870 00:05:31.745 10:45:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1609870 ']' 00:05:31.745 10:45:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 1609870 00:05:31.745 10:45:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:31.745 10:45:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:31.745 10:45:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1609870 00:05:31.745 10:45:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:31.745 10:45:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:31.745 10:45:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1609870' 00:05:31.745 killing process with pid 1609870 00:05:31.745 10:45:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 1609870 00:05:31.745 10:45:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 1609870 00:05:32.006 00:05:32.006 real 0m2.245s 00:05:32.006 user 0m2.425s 00:05:32.006 sys 0m0.640s 00:05:32.006 10:45:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:32.006 10:45:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:32.006 ************************************ 00:05:32.006 END TEST locking_app_on_locked_coremask 00:05:32.006 ************************************ 00:05:32.006 10:45:51 
event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:32.006 10:45:51 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:32.006 10:45:51 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:32.006 10:45:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:32.006 ************************************ 00:05:32.006 START TEST locking_overlapped_coremask 00:05:32.006 ************************************ 00:05:32.006 10:45:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:05:32.006 10:45:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1610253 00:05:32.006 10:45:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1610253 /var/tmp/spdk.sock 00:05:32.006 10:45:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:32.006 10:45:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 1610253 ']' 00:05:32.006 10:45:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.006 10:45:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:32.006 10:45:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:32.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:32.006 10:45:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:32.006 10:45:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:32.006 [2024-10-09 10:45:51.998201] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:05:32.006 [2024-10-09 10:45:51.998251] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1610253 ] 00:05:32.267 [2024-10-09 10:45:52.129277] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:05:32.267 [2024-10-09 10:45:52.162872] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:32.267 [2024-10-09 10:45:52.183497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:32.267 [2024-10-09 10:45:52.183577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:32.267 [2024-10-09 10:45:52.183686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.839 10:45:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:32.839 10:45:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:32.839 10:45:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1610535 00:05:32.839 10:45:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1610535 /var/tmp/spdk2.sock 00:05:32.839 10:45:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:32.839 10:45:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:32.839 10:45:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1610535 /var/tmp/spdk2.sock 00:05:32.839 10:45:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:32.840 10:45:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:32.840 10:45:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:32.840 10:45:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:32.840 10:45:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 1610535 /var/tmp/spdk2.sock 00:05:32.840 10:45:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 1610535 ']' 00:05:32.840 10:45:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:32.840 10:45:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:32.840 10:45:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:32.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:32.840 10:45:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:32.840 10:45:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:33.101 [2024-10-09 10:45:52.847746] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:05:33.101 [2024-10-09 10:45:52.847799] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1610535 ] 00:05:33.101 [2024-10-09 10:45:52.980263] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:05:33.101 [2024-10-09 10:45:53.022079] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1610253 has claimed it. 00:05:33.101 [2024-10-09 10:45:53.022107] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:33.674 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (1610535) - No such process 00:05:33.674 ERROR: process (pid: 1610535) is no longer running 00:05:33.674 10:45:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:33.674 10:45:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:05:33.674 10:45:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:33.674 10:45:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:33.674 10:45:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:33.674 10:45:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:33.674 10:45:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:33.674 10:45:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:33.674 10:45:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:33.674 10:45:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:33.674 10:45:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1610253 00:05:33.674 10:45:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 1610253 ']' 00:05:33.674 10:45:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 1610253 00:05:33.674 10:45:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:05:33.674 10:45:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:33.674 10:45:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1610253 00:05:33.674 10:45:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:33.674 10:45:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:33.674 10:45:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1610253' 00:05:33.674 killing process with pid 1610253 00:05:33.674 10:45:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 1610253 00:05:33.674 10:45:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 1610253 00:05:33.935 00:05:33.935 real 0m1.790s 00:05:33.935 user 0m4.932s 00:05:33.935 sys 0m0.399s 00:05:33.935 10:45:53 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:05:33.935 10:45:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:33.935 ************************************ 00:05:33.935 END TEST locking_overlapped_coremask 00:05:33.935 ************************************ 00:05:33.935 10:45:53 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:33.935 10:45:53 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:33.935 10:45:53 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:33.935 10:45:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:33.935 ************************************ 00:05:33.935 START TEST locking_overlapped_coremask_via_rpc 00:05:33.935 ************************************ 00:05:33.935 10:45:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:05:33.935 10:45:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1610624 00:05:33.935 10:45:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1610624 /var/tmp/spdk.sock 00:05:33.935 10:45:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:33.935 10:45:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1610624 ']' 00:05:33.935 10:45:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.935 10:45:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:33.935 10:45:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:33.935 10:45:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:33.935 10:45:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.935 [2024-10-09 10:45:53.861627] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:05:33.935 [2024-10-09 10:45:53.861672] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1610624 ] 00:05:34.196 [2024-10-09 10:45:53.994993] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:34.196 [2024-10-09 10:45:54.026855] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
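check_remaining_locks in the test just ended asserts that exactly the lock files for mask 0x7 (cores 0 through 2) survive the vanished second target. It is a glob compared against a brace expansion; the same check stands alone as:

locks=(/var/tmp/spdk_cpu_lock_*)
locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})   # one file per core in -m 0x7
[[ "${locks[*]}" == "${locks_expected[*]}" ]] && echo "only cores 0-2 remain locked"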
00:05:34.196 [2024-10-09 10:45:54.026881] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:34.196 [2024-10-09 10:45:54.047028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:34.196 [2024-10-09 10:45:54.047141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:34.196 [2024-10-09 10:45:54.047143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.768 10:45:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:34.768 10:45:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:34.768 10:45:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1610957 00:05:34.768 10:45:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1610957 /var/tmp/spdk2.sock 00:05:34.768 10:45:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1610957 ']' 00:05:34.768 10:45:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:34.768 10:45:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:34.768 10:45:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:34.768 10:45:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:34.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:34.768 10:45:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:34.768 10:45:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:35.028 [2024-10-09 10:45:54.780292] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:05:35.028 [2024-10-09 10:45:54.780345] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1610957 ] 00:05:35.028 [2024-10-09 10:45:54.914001] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:35.028 [2024-10-09 10:45:54.960474] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:35.028 [2024-10-09 10:45:54.960493] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:35.028 [2024-10-09 10:45:54.994648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:35.028 [2024-10-09 10:45:54.994765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:35.028 [2024-10-09 10:45:54.994768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:35.599 10:45:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:35.599 10:45:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:35.599 10:45:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:35.599 10:45:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:35.599 10:45:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:35.599 10:45:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:35.599 10:45:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:35.599 10:45:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:35.599 10:45:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:35.599 10:45:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:35.599 10:45:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:35.599 10:45:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:35.599 10:45:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:35.599 10:45:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:35.599 10:45:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:35.599 10:45:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:35.599 [2024-10-09 10:45:55.586526] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1610624 has claimed it. 
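The claim error above is the expected outcome of this test: the first target was started with -m 0x7 (cores 0-2) and the second with -m 0x1c (cores 2-4), so enabling cpumask locks on the second target collides on the shared core 2. The lock is tied to a per-core file under /var/tmp (the test later compares against /var/tmp/spdk_cpu_lock_000..002). A minimal sketch of the collision, assuming an advisory flock(1)-style lock as an approximation; SPDK's internal claim code may use a different lock primitive:

  # Hypothetical reproduction of the core-2 collision with flock(1).
  # The lock-file path follows the convention shown later in this log;
  # the locking mechanism itself is an assumption for illustration.
  exec 200>/var/tmp/spdk_cpu_lock_002
  flock -n 200 || echo 'core 2 already claimed by another process' >&2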
00:05:35.599 request: 00:05:35.599 { 00:05:35.599 "method": "framework_enable_cpumask_locks", 00:05:35.599 "req_id": 1 00:05:35.599 } 00:05:35.599 Got JSON-RPC error response 00:05:35.599 response: 00:05:35.599 { 00:05:35.599 "code": -32603, 00:05:35.599 "message": "Failed to claim CPU core: 2" 00:05:35.599 } 00:05:35.599 10:45:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:35.599 10:45:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:35.599 10:45:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:35.599 10:45:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:35.599 10:45:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:35.599 10:45:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1610624 /var/tmp/spdk.sock 00:05:35.599 10:45:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1610624 ']' 00:05:35.599 10:45:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:35.599 10:45:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:35.599 10:45:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:35.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:35.599 10:45:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:35.599 10:45:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:35.860 10:45:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:35.860 10:45:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:35.860 10:45:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1610957 /var/tmp/spdk2.sock 00:05:35.860 10:45:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1610957 ']' 00:05:35.860 10:45:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:35.860 10:45:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:35.860 10:45:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:35.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
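The JSON-RPC exchange above can be reproduced by hand against the second instance's socket using the same rpc.py wrapper the test drives; the error body ({"code": -32603, "message": "Failed to claim CPU core: 2"}) is exactly what rpc_cmd received:

  # Re-issuing the failing call manually (paths as used in this job):
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
      -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
  # -> exits nonzero with the -32603 'Failed to claim CPU core: 2' error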
00:05:35.860 10:45:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:35.860 10:45:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.121 10:45:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:36.121 10:45:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:36.121 10:45:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:36.121 10:45:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:36.121 10:45:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:36.121 10:45:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:36.121 00:05:36.121 real 0m2.154s 00:05:36.121 user 0m0.919s 00:05:36.121 sys 0m0.156s 00:05:36.121 10:45:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:36.121 10:45:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.121 ************************************ 00:05:36.121 END TEST locking_overlapped_coremask_via_rpc 00:05:36.121 ************************************ 00:05:36.121 10:45:55 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:36.121 10:45:55 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1610624 ]] 00:05:36.121 10:45:55 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1610624 00:05:36.121 10:45:55 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1610624 ']' 00:05:36.121 10:45:55 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1610624 00:05:36.121 10:45:55 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:05:36.121 10:45:56 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:36.121 10:45:56 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1610624 00:05:36.121 10:45:56 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:36.121 10:45:56 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:36.121 10:45:56 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1610624' 00:05:36.121 killing process with pid 1610624 00:05:36.122 10:45:56 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 1610624 00:05:36.122 10:45:56 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 1610624 00:05:36.382 10:45:56 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1610957 ]] 00:05:36.382 10:45:56 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1610957 00:05:36.382 10:45:56 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1610957 ']' 00:05:36.382 10:45:56 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1610957 00:05:36.382 10:45:56 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:05:36.382 10:45:56 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' 
Linux = Linux ']' 00:05:36.382 10:45:56 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1610957 00:05:36.382 10:45:56 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:36.382 10:45:56 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:36.382 10:45:56 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1610957' 00:05:36.382 killing process with pid 1610957 00:05:36.382 10:45:56 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 1610957 00:05:36.382 10:45:56 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 1610957 00:05:36.643 10:45:56 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:36.643 10:45:56 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:36.643 10:45:56 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1610624 ]] 00:05:36.643 10:45:56 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1610624 00:05:36.643 10:45:56 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1610624 ']' 00:05:36.643 10:45:56 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1610624 00:05:36.643 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1610624) - No such process 00:05:36.643 10:45:56 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 1610624 is not found' 00:05:36.643 Process with pid 1610624 is not found 00:05:36.643 10:45:56 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1610957 ]] 00:05:36.643 10:45:56 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1610957 00:05:36.643 10:45:56 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1610957 ']' 00:05:36.643 10:45:56 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1610957 00:05:36.643 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1610957) - No such process 00:05:36.643 10:45:56 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 1610957 is not found' 00:05:36.643 Process with pid 1610957 is not found 00:05:36.643 10:45:56 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:36.643 00:05:36.643 real 0m16.506s 00:05:36.643 user 0m27.831s 00:05:36.643 sys 0m4.990s 00:05:36.643 10:45:56 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:36.643 10:45:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:36.643 ************************************ 00:05:36.643 END TEST cpu_locks 00:05:36.643 ************************************ 00:05:36.643 00:05:36.643 real 0m42.897s 00:05:36.643 user 1m22.319s 00:05:36.643 sys 0m8.263s 00:05:36.643 10:45:56 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:36.643 10:45:56 event -- common/autotest_common.sh@10 -- # set +x 00:05:36.643 ************************************ 00:05:36.643 END TEST event 00:05:36.643 ************************************ 00:05:36.643 10:45:56 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:36.643 10:45:56 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:36.643 10:45:56 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:36.643 10:45:56 -- common/autotest_common.sh@10 -- # set +x 00:05:36.643 ************************************ 00:05:36.643 START TEST thread 00:05:36.643 ************************************ 00:05:36.643 10:45:56 thread -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:36.904 * Looking for test storage... 00:05:36.904 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:36.904 10:45:56 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:36.904 10:45:56 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:05:36.904 10:45:56 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:36.904 10:45:56 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:36.904 10:45:56 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:36.904 10:45:56 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:36.904 10:45:56 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:36.904 10:45:56 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:36.904 10:45:56 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:36.904 10:45:56 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:36.904 10:45:56 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:36.904 10:45:56 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:36.904 10:45:56 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:36.904 10:45:56 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:36.904 10:45:56 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:36.904 10:45:56 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:36.904 10:45:56 thread -- scripts/common.sh@345 -- # : 1 00:05:36.904 10:45:56 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:36.904 10:45:56 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:36.904 10:45:56 thread -- scripts/common.sh@365 -- # decimal 1 00:05:36.904 10:45:56 thread -- scripts/common.sh@353 -- # local d=1 00:05:36.904 10:45:56 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:36.904 10:45:56 thread -- scripts/common.sh@355 -- # echo 1 00:05:36.904 10:45:56 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:36.904 10:45:56 thread -- scripts/common.sh@366 -- # decimal 2 00:05:36.904 10:45:56 thread -- scripts/common.sh@353 -- # local d=2 00:05:36.904 10:45:56 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:36.904 10:45:56 thread -- scripts/common.sh@355 -- # echo 2 00:05:36.904 10:45:56 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:36.904 10:45:56 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:36.904 10:45:56 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:36.904 10:45:56 thread -- scripts/common.sh@368 -- # return 0 00:05:36.904 10:45:56 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:36.904 10:45:56 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:36.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.904 --rc genhtml_branch_coverage=1 00:05:36.904 --rc genhtml_function_coverage=1 00:05:36.904 --rc genhtml_legend=1 00:05:36.904 --rc geninfo_all_blocks=1 00:05:36.904 --rc geninfo_unexecuted_blocks=1 00:05:36.904 00:05:36.904 ' 00:05:36.904 10:45:56 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:36.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.904 --rc genhtml_branch_coverage=1 00:05:36.904 --rc genhtml_function_coverage=1 00:05:36.904 --rc genhtml_legend=1 00:05:36.904 --rc geninfo_all_blocks=1 00:05:36.904 --rc geninfo_unexecuted_blocks=1 00:05:36.904 
00:05:36.904 ' 00:05:36.904 10:45:56 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:36.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.904 --rc genhtml_branch_coverage=1 00:05:36.904 --rc genhtml_function_coverage=1 00:05:36.904 --rc genhtml_legend=1 00:05:36.904 --rc geninfo_all_blocks=1 00:05:36.904 --rc geninfo_unexecuted_blocks=1 00:05:36.904 00:05:36.904 ' 00:05:36.904 10:45:56 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:36.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.904 --rc genhtml_branch_coverage=1 00:05:36.904 --rc genhtml_function_coverage=1 00:05:36.904 --rc genhtml_legend=1 00:05:36.904 --rc geninfo_all_blocks=1 00:05:36.904 --rc geninfo_unexecuted_blocks=1 00:05:36.904 00:05:36.904 ' 00:05:36.904 10:45:56 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:36.904 10:45:56 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:05:36.904 10:45:56 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:36.904 10:45:56 thread -- common/autotest_common.sh@10 -- # set +x 00:05:36.904 ************************************ 00:05:36.904 START TEST thread_poller_perf 00:05:36.904 ************************************ 00:05:36.904 10:45:56 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:36.904 [2024-10-09 10:45:56.897167] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:05:36.904 [2024-10-09 10:45:56.897270] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1611407 ] 00:05:37.165 [2024-10-09 10:45:57.033532] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:37.165 [2024-10-09 10:45:57.064066] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.165 [2024-10-09 10:45:57.082037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.165 Running 1000 pollers for 1 seconds with 1 microseconds period. 
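The result block below reports raw busy cycles, the number of poller invocations, and the measured TSC rate; the printed poller_cost is consistent with dividing busy cycles by run count and converting to nanoseconds via tsc_hz. Cross-checking the 1-microsecond run's figures with the values exactly as printed below:

  # poller_cost (cyc) = busy / total_run_count; nsec = cyc * 1e9 / tsc_hz
  awk 'BEGIN {
    busy = 2401938226; runs = 286000; tsc_hz = 2394400000
    cyc  = busy / runs
    printf "poller_cost: %d (cyc), %d (nsec)\n", cyc, cyc * 1e9 / tsc_hz
  }'
  # -> poller_cost: 8398 (cyc), 3507 (nsec), matching the table below

The same derivation applies to the 0-microsecond-period run further down (2396359846 cyc over 3800000 runs gives 630 cyc, 263 nsec), where pollers run back-to-back and the cost reflects pure per-invocation overhead.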
00:05:38.547 [2024-10-09T08:45:58.549Z] ====================================== 00:05:38.547 [2024-10-09T08:45:58.549Z] busy:2401938226 (cyc) 00:05:38.547 [2024-10-09T08:45:58.549Z] total_run_count: 286000 00:05:38.547 [2024-10-09T08:45:58.549Z] tsc_hz: 2394400000 (cyc) 00:05:38.547 [2024-10-09T08:45:58.549Z] ====================================== 00:05:38.547 [2024-10-09T08:45:58.549Z] poller_cost: 8398 (cyc), 3507 (nsec) 00:05:38.547 00:05:38.547 real 0m1.237s 00:05:38.547 user 0m1.062s 00:05:38.547 sys 0m0.071s 00:05:38.547 10:45:58 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:38.547 10:45:58 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:38.547 ************************************ 00:05:38.547 END TEST thread_poller_perf 00:05:38.547 ************************************ 00:05:38.547 10:45:58 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:38.547 10:45:58 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:05:38.547 10:45:58 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:38.547 10:45:58 thread -- common/autotest_common.sh@10 -- # set +x 00:05:38.547 ************************************ 00:05:38.547 START TEST thread_poller_perf 00:05:38.547 ************************************ 00:05:38.547 10:45:58 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:38.547 [2024-10-09 10:45:58.212522] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:05:38.547 [2024-10-09 10:45:58.212606] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1611762 ] 00:05:38.547 [2024-10-09 10:45:58.347280] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:38.547 [2024-10-09 10:45:58.379146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.547 [2024-10-09 10:45:58.396495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.547 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:05:39.486 [2024-10-09T08:45:59.488Z] ====================================== 00:05:39.486 [2024-10-09T08:45:59.488Z] busy:2396359846 (cyc) 00:05:39.486 [2024-10-09T08:45:59.488Z] total_run_count: 3800000 00:05:39.486 [2024-10-09T08:45:59.488Z] tsc_hz: 2394400000 (cyc) 00:05:39.486 [2024-10-09T08:45:59.488Z] ====================================== 00:05:39.486 [2024-10-09T08:45:59.488Z] poller_cost: 630 (cyc), 263 (nsec) 00:05:39.486 00:05:39.486 real 0m1.229s 00:05:39.486 user 0m1.057s 00:05:39.486 sys 0m0.069s 00:05:39.486 10:45:59 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:39.486 10:45:59 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:39.486 ************************************ 00:05:39.486 END TEST thread_poller_perf 00:05:39.486 ************************************ 00:05:39.486 10:45:59 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:39.486 00:05:39.486 real 0m2.821s 00:05:39.486 user 0m2.300s 00:05:39.486 sys 0m0.336s 00:05:39.486 10:45:59 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:39.486 10:45:59 thread -- common/autotest_common.sh@10 -- # set +x 00:05:39.486 ************************************ 00:05:39.486 END TEST thread 00:05:39.486 ************************************ 00:05:39.746 10:45:59 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:39.746 10:45:59 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:39.746 10:45:59 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:39.746 10:45:59 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:39.746 10:45:59 -- common/autotest_common.sh@10 -- # set +x 00:05:39.746 ************************************ 00:05:39.746 START TEST app_cmdline 00:05:39.746 ************************************ 00:05:39.746 10:45:59 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:39.746 * Looking for test storage... 
00:05:39.746 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:39.746 10:45:59 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:39.746 10:45:59 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:05:39.746 10:45:59 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:39.746 10:45:59 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:39.746 10:45:59 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:39.746 10:45:59 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:39.746 10:45:59 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:39.746 10:45:59 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:39.746 10:45:59 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:39.746 10:45:59 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:39.746 10:45:59 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:39.746 10:45:59 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:39.746 10:45:59 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:39.746 10:45:59 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:39.746 10:45:59 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:39.746 10:45:59 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:39.746 10:45:59 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:39.746 10:45:59 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:39.746 10:45:59 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:39.746 10:45:59 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:39.746 10:45:59 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:39.746 10:45:59 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:39.746 10:45:59 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:39.746 10:45:59 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:39.746 10:45:59 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:39.746 10:45:59 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:39.746 10:45:59 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:39.746 10:45:59 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:39.746 10:45:59 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:39.746 10:45:59 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:39.746 10:45:59 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:39.746 10:45:59 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:39.746 10:45:59 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:39.746 10:45:59 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:39.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.746 --rc genhtml_branch_coverage=1 00:05:39.746 --rc genhtml_function_coverage=1 00:05:39.746 --rc genhtml_legend=1 00:05:39.746 --rc geninfo_all_blocks=1 00:05:39.746 --rc geninfo_unexecuted_blocks=1 00:05:39.746 00:05:39.746 ' 00:05:39.746 10:45:59 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:39.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.746 --rc genhtml_branch_coverage=1 00:05:39.746 --rc genhtml_function_coverage=1 00:05:39.746 --rc genhtml_legend=1 00:05:39.746 --rc geninfo_all_blocks=1 00:05:39.746 --rc geninfo_unexecuted_blocks=1 
00:05:39.746 00:05:39.746 ' 00:05:39.746 10:45:59 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:39.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.746 --rc genhtml_branch_coverage=1 00:05:39.746 --rc genhtml_function_coverage=1 00:05:39.746 --rc genhtml_legend=1 00:05:39.746 --rc geninfo_all_blocks=1 00:05:39.746 --rc geninfo_unexecuted_blocks=1 00:05:39.746 00:05:39.746 ' 00:05:39.746 10:45:59 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:39.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.746 --rc genhtml_branch_coverage=1 00:05:39.746 --rc genhtml_function_coverage=1 00:05:39.746 --rc genhtml_legend=1 00:05:39.746 --rc geninfo_all_blocks=1 00:05:39.746 --rc geninfo_unexecuted_blocks=1 00:05:39.746 00:05:39.746 ' 00:05:39.746 10:45:59 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:39.746 10:45:59 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1612162 00:05:39.746 10:45:59 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1612162 00:05:39.746 10:45:59 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:39.746 10:45:59 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 1612162 ']' 00:05:39.746 10:45:59 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.746 10:45:59 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:39.746 10:45:59 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:39.746 10:45:59 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:39.746 10:45:59 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:40.007 [2024-10-09 10:45:59.795739] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:05:40.007 [2024-10-09 10:45:59.795795] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1612162 ] 00:05:40.007 [2024-10-09 10:45:59.927882] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
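The target starting above was launched with --rpcs-allowed spdk_get_version,rpc_get_methods, so its RPC surface is a two-method whitelist. The calls that follow exercise both the allowed path and the rejection path:

  # Whitelist behavior as exercised by this test (default /var/tmp/spdk.sock):
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version
  #   allowed -> version JSON with major/minor/patch/suffix/commit fields
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
  #   not whitelisted -> JSON-RPC error -32601 'Method not found'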
00:05:40.007 [2024-10-09 10:45:59.961927] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.007 [2024-10-09 10:45:59.984988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.946 10:46:00 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:40.946 10:46:00 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:05:40.946 10:46:00 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:40.946 { 00:05:40.946 "version": "SPDK v25.01-pre git sha1 a29d7fdf9", 00:05:40.946 "fields": { 00:05:40.946 "major": 25, 00:05:40.946 "minor": 1, 00:05:40.946 "patch": 0, 00:05:40.946 "suffix": "-pre", 00:05:40.946 "commit": "a29d7fdf9" 00:05:40.946 } 00:05:40.946 } 00:05:40.946 10:46:00 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:40.946 10:46:00 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:40.946 10:46:00 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:40.946 10:46:00 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:40.946 10:46:00 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:40.946 10:46:00 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:40.946 10:46:00 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:40.946 10:46:00 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.946 10:46:00 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:40.946 10:46:00 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:40.946 10:46:00 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:40.946 10:46:00 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:40.946 10:46:00 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:40.946 10:46:00 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:05:40.946 10:46:00 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:40.946 10:46:00 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:40.946 10:46:00 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:40.946 10:46:00 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:40.946 10:46:00 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:40.946 10:46:00 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:40.946 10:46:00 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:40.946 10:46:00 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:40.946 10:46:00 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:05:40.946 10:46:00 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:41.207 request: 00:05:41.207 { 00:05:41.207 "method": 
"env_dpdk_get_mem_stats", 00:05:41.207 "req_id": 1 00:05:41.207 } 00:05:41.207 Got JSON-RPC error response 00:05:41.207 response: 00:05:41.207 { 00:05:41.207 "code": -32601, 00:05:41.207 "message": "Method not found" 00:05:41.207 } 00:05:41.207 10:46:01 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:05:41.207 10:46:01 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:41.207 10:46:01 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:41.207 10:46:01 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:41.207 10:46:01 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1612162 00:05:41.207 10:46:01 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 1612162 ']' 00:05:41.207 10:46:01 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 1612162 00:05:41.207 10:46:01 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:05:41.207 10:46:01 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:41.207 10:46:01 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1612162 00:05:41.207 10:46:01 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:41.207 10:46:01 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:41.207 10:46:01 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1612162' 00:05:41.207 killing process with pid 1612162 00:05:41.207 10:46:01 app_cmdline -- common/autotest_common.sh@969 -- # kill 1612162 00:05:41.207 10:46:01 app_cmdline -- common/autotest_common.sh@974 -- # wait 1612162 00:05:41.467 00:05:41.467 real 0m1.731s 00:05:41.467 user 0m1.987s 00:05:41.467 sys 0m0.463s 00:05:41.467 10:46:01 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:41.467 10:46:01 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:41.467 ************************************ 00:05:41.467 END TEST app_cmdline 00:05:41.467 ************************************ 00:05:41.467 10:46:01 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:41.467 10:46:01 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:41.467 10:46:01 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:41.467 10:46:01 -- common/autotest_common.sh@10 -- # set +x 00:05:41.467 ************************************ 00:05:41.467 START TEST version 00:05:41.467 ************************************ 00:05:41.467 10:46:01 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:41.467 * Looking for test storage... 
00:05:41.467 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:41.467 10:46:01 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:41.467 10:46:01 version -- common/autotest_common.sh@1691 -- # lcov --version 00:05:41.467 10:46:01 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:41.728 10:46:01 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:41.728 10:46:01 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:41.728 10:46:01 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:41.728 10:46:01 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:41.728 10:46:01 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:41.728 10:46:01 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:41.728 10:46:01 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:41.728 10:46:01 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:41.728 10:46:01 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:41.728 10:46:01 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:41.728 10:46:01 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:41.728 10:46:01 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:41.728 10:46:01 version -- scripts/common.sh@344 -- # case "$op" in 00:05:41.728 10:46:01 version -- scripts/common.sh@345 -- # : 1 00:05:41.728 10:46:01 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:41.728 10:46:01 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:41.728 10:46:01 version -- scripts/common.sh@365 -- # decimal 1 00:05:41.728 10:46:01 version -- scripts/common.sh@353 -- # local d=1 00:05:41.728 10:46:01 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:41.728 10:46:01 version -- scripts/common.sh@355 -- # echo 1 00:05:41.728 10:46:01 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:41.728 10:46:01 version -- scripts/common.sh@366 -- # decimal 2 00:05:41.728 10:46:01 version -- scripts/common.sh@353 -- # local d=2 00:05:41.728 10:46:01 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:41.728 10:46:01 version -- scripts/common.sh@355 -- # echo 2 00:05:41.728 10:46:01 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:41.728 10:46:01 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:41.728 10:46:01 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:41.728 10:46:01 version -- scripts/common.sh@368 -- # return 0 00:05:41.728 10:46:01 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:41.728 10:46:01 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:41.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.728 --rc genhtml_branch_coverage=1 00:05:41.728 --rc genhtml_function_coverage=1 00:05:41.728 --rc genhtml_legend=1 00:05:41.728 --rc geninfo_all_blocks=1 00:05:41.728 --rc geninfo_unexecuted_blocks=1 00:05:41.728 00:05:41.728 ' 00:05:41.728 10:46:01 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:41.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.728 --rc genhtml_branch_coverage=1 00:05:41.728 --rc genhtml_function_coverage=1 00:05:41.728 --rc genhtml_legend=1 00:05:41.728 --rc geninfo_all_blocks=1 00:05:41.728 --rc geninfo_unexecuted_blocks=1 00:05:41.728 00:05:41.728 ' 00:05:41.728 10:46:01 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:41.728 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.728 --rc genhtml_branch_coverage=1 00:05:41.728 --rc genhtml_function_coverage=1 00:05:41.728 --rc genhtml_legend=1 00:05:41.728 --rc geninfo_all_blocks=1 00:05:41.728 --rc geninfo_unexecuted_blocks=1 00:05:41.728 00:05:41.728 ' 00:05:41.728 10:46:01 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:41.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.728 --rc genhtml_branch_coverage=1 00:05:41.728 --rc genhtml_function_coverage=1 00:05:41.728 --rc genhtml_legend=1 00:05:41.728 --rc geninfo_all_blocks=1 00:05:41.728 --rc geninfo_unexecuted_blocks=1 00:05:41.728 00:05:41.728 ' 00:05:41.728 10:46:01 version -- app/version.sh@17 -- # get_header_version major 00:05:41.728 10:46:01 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:41.728 10:46:01 version -- app/version.sh@14 -- # cut -f2 00:05:41.728 10:46:01 version -- app/version.sh@14 -- # tr -d '"' 00:05:41.728 10:46:01 version -- app/version.sh@17 -- # major=25 00:05:41.728 10:46:01 version -- app/version.sh@18 -- # get_header_version minor 00:05:41.728 10:46:01 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:41.728 10:46:01 version -- app/version.sh@14 -- # cut -f2 00:05:41.728 10:46:01 version -- app/version.sh@14 -- # tr -d '"' 00:05:41.728 10:46:01 version -- app/version.sh@18 -- # minor=1 00:05:41.728 10:46:01 version -- app/version.sh@19 -- # get_header_version patch 00:05:41.728 10:46:01 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:41.728 10:46:01 version -- app/version.sh@14 -- # cut -f2 00:05:41.728 10:46:01 version -- app/version.sh@14 -- # tr -d '"' 00:05:41.728 10:46:01 version -- app/version.sh@19 -- # patch=0 00:05:41.728 10:46:01 version -- app/version.sh@20 -- # get_header_version suffix 00:05:41.728 10:46:01 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:41.728 10:46:01 version -- app/version.sh@14 -- # cut -f2 00:05:41.728 10:46:01 version -- app/version.sh@14 -- # tr -d '"' 00:05:41.728 10:46:01 version -- app/version.sh@20 -- # suffix=-pre 00:05:41.728 10:46:01 version -- app/version.sh@22 -- # version=25.1 00:05:41.728 10:46:01 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:41.728 10:46:01 version -- app/version.sh@28 -- # version=25.1rc0 00:05:41.728 10:46:01 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:41.728 10:46:01 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:41.728 10:46:01 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:41.728 10:46:01 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:41.728 00:05:41.728 real 0m0.267s 00:05:41.728 user 0m0.147s 00:05:41.728 sys 0m0.162s 00:05:41.728 10:46:01 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:41.728 
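get_header_version, as traced above, is a plain grep/cut/tr pipeline over include/spdk/version.h; spelled out for the major field, with minor, patch, and suffix extracted the same way and assembled into 25.1rc0:

  # Extracting SPDK_VERSION_MAJOR the way app/version.sh does:
  grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h \
    | cut -f2 | tr -d '"'
  # -> 25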
10:46:01 version -- common/autotest_common.sh@10 -- # set +x 00:05:41.728 ************************************ 00:05:41.728 END TEST version 00:05:41.728 ************************************ 00:05:41.728 10:46:01 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:41.728 10:46:01 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:41.728 10:46:01 -- spdk/autotest.sh@194 -- # uname -s 00:05:41.728 10:46:01 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:41.728 10:46:01 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:41.728 10:46:01 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:41.728 10:46:01 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:05:41.728 10:46:01 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:05:41.728 10:46:01 -- spdk/autotest.sh@256 -- # timing_exit lib 00:05:41.728 10:46:01 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:41.728 10:46:01 -- common/autotest_common.sh@10 -- # set +x 00:05:41.728 10:46:01 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:05:41.728 10:46:01 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:05:41.728 10:46:01 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:05:41.728 10:46:01 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:05:41.728 10:46:01 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:05:41.728 10:46:01 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:05:41.728 10:46:01 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:41.728 10:46:01 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:05:41.728 10:46:01 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:41.728 10:46:01 -- common/autotest_common.sh@10 -- # set +x 00:05:41.989 ************************************ 00:05:41.989 START TEST nvmf_tcp 00:05:41.989 ************************************ 00:05:41.989 10:46:01 nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:41.989 * Looking for test storage... 
00:05:41.989 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:41.989 10:46:01 nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:41.989 10:46:01 nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:05:41.989 10:46:01 nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:41.989 10:46:01 nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:41.989 10:46:01 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:41.989 10:46:01 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:41.989 10:46:01 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:41.989 10:46:01 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:41.989 10:46:01 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:41.989 10:46:01 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:41.989 10:46:01 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:41.989 10:46:01 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:41.989 10:46:01 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:41.989 10:46:01 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:41.989 10:46:01 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:41.989 10:46:01 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:41.989 10:46:01 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:05:41.989 10:46:01 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:41.989 10:46:01 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:41.989 10:46:01 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:41.989 10:46:01 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:05:41.989 10:46:01 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:41.989 10:46:01 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:05:41.989 10:46:01 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:41.989 10:46:01 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:41.989 10:46:01 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:05:41.989 10:46:01 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:41.989 10:46:01 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:05:41.989 10:46:01 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:41.989 10:46:01 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:41.989 10:46:01 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:41.989 10:46:01 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:05:41.989 10:46:01 nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:41.989 10:46:01 nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:41.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.989 --rc genhtml_branch_coverage=1 00:05:41.989 --rc genhtml_function_coverage=1 00:05:41.989 --rc genhtml_legend=1 00:05:41.989 --rc geninfo_all_blocks=1 00:05:41.989 --rc geninfo_unexecuted_blocks=1 00:05:41.989 00:05:41.989 ' 00:05:41.989 10:46:01 nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:41.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.989 --rc genhtml_branch_coverage=1 00:05:41.989 --rc genhtml_function_coverage=1 00:05:41.989 --rc genhtml_legend=1 00:05:41.989 --rc geninfo_all_blocks=1 00:05:41.989 --rc geninfo_unexecuted_blocks=1 00:05:41.989 00:05:41.989 ' 00:05:41.989 10:46:01 nvmf_tcp -- common/autotest_common.sh@1705 -- # export 
'LCOV=lcov 00:05:41.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.989 --rc genhtml_branch_coverage=1 00:05:41.989 --rc genhtml_function_coverage=1 00:05:41.989 --rc genhtml_legend=1 00:05:41.990 --rc geninfo_all_blocks=1 00:05:41.990 --rc geninfo_unexecuted_blocks=1 00:05:41.990 00:05:41.990 ' 00:05:41.990 10:46:01 nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:41.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.990 --rc genhtml_branch_coverage=1 00:05:41.990 --rc genhtml_function_coverage=1 00:05:41.990 --rc genhtml_legend=1 00:05:41.990 --rc geninfo_all_blocks=1 00:05:41.990 --rc geninfo_unexecuted_blocks=1 00:05:41.990 00:05:41.990 ' 00:05:41.990 10:46:01 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:05:41.990 10:46:01 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:05:41.990 10:46:01 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:41.990 10:46:01 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:05:41.990 10:46:01 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:41.990 10:46:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:41.990 ************************************ 00:05:41.990 START TEST nvmf_target_core 00:05:41.990 ************************************ 00:05:41.990 10:46:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:42.251 * Looking for test storage... 00:05:42.251 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:42.251 10:46:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:42.251 10:46:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lcov --version 00:05:42.251 10:46:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:42.251 10:46:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:42.251 10:46:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:42.251 10:46:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:42.251 10:46:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:42.251 10:46:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:05:42.251 10:46:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:05:42.251 10:46:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:05:42.251 10:46:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:05:42.251 10:46:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:05:42.251 10:46:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:05:42.251 10:46:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:05:42.251 10:46:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:42.251 10:46:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:05:42.251 10:46:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:05:42.251 10:46:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:42.251 10:46:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:42.251 10:46:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:05:42.251 10:46:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:05:42.251 10:46:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:42.251 10:46:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:05:42.251 10:46:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:05:42.251 10:46:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:05:42.251 10:46:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:05:42.251 10:46:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:42.251 10:46:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:05:42.251 10:46:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:05:42.251 10:46:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:42.251 10:46:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:42.251 10:46:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:05:42.251 10:46:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:42.251 10:46:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:42.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.251 --rc genhtml_branch_coverage=1 00:05:42.251 --rc genhtml_function_coverage=1 00:05:42.251 --rc genhtml_legend=1 00:05:42.251 --rc geninfo_all_blocks=1 00:05:42.251 --rc geninfo_unexecuted_blocks=1 00:05:42.251 00:05:42.251 ' 00:05:42.251 10:46:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:42.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.251 --rc genhtml_branch_coverage=1 00:05:42.251 --rc genhtml_function_coverage=1 00:05:42.251 --rc genhtml_legend=1 00:05:42.251 --rc geninfo_all_blocks=1 00:05:42.251 --rc geninfo_unexecuted_blocks=1 00:05:42.251 00:05:42.251 ' 00:05:42.251 10:46:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:42.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.251 --rc genhtml_branch_coverage=1 00:05:42.251 --rc genhtml_function_coverage=1 00:05:42.251 --rc genhtml_legend=1 00:05:42.251 --rc geninfo_all_blocks=1 00:05:42.251 --rc geninfo_unexecuted_blocks=1 00:05:42.251 00:05:42.251 ' 00:05:42.251 10:46:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:42.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.251 --rc genhtml_branch_coverage=1 00:05:42.251 --rc genhtml_function_coverage=1 00:05:42.251 --rc genhtml_legend=1 00:05:42.251 --rc geninfo_all_blocks=1 00:05:42.251 --rc geninfo_unexecuted_blocks=1 00:05:42.251 00:05:42.251 ' 00:05:42.252 10:46:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:05:42.252 10:46:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:42.252 10:46:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:42.252 10:46:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:05:42.252 10:46:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:42.252 10:46:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:42.252 10:46:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:42.252 10:46:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:42.252 10:46:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:42.252 10:46:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:42.252 10:46:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:42.252 10:46:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:42.252 10:46:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:42.252 10:46:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:42.252 10:46:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:42.252 10:46:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:42.252 10:46:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:42.252 10:46:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:42.252 10:46:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:42.252 10:46:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:42.252 10:46:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:42.252 10:46:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:05:42.252 10:46:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:42.252 10:46:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:42.252 10:46:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:42.252 10:46:02 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:42.252 10:46:02 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:42.252 10:46:02 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:42.252 10:46:02 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:05:42.252 10:46:02 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:42.252 10:46:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:05:42.252 10:46:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:42.252 10:46:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:42.252 10:46:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:42.252 10:46:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:42.252 10:46:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:42.252 10:46:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:42.252 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:42.252 10:46:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:42.252 10:46:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:42.252 10:46:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:42.252 10:46:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:42.252 10:46:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:05:42.252 10:46:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:05:42.252 10:46:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:42.252 10:46:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:05:42.252 10:46:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:42.252 10:46:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:42.513 
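A note on the "[: : integer expression expected" message above: nvmf/common.sh line 33 runs a numeric test ('[' '' -eq 1 ']') against a variable that is empty in this run, so the [ builtin has nothing to parse as an integer; the test evaluates false and the script simply continues. A minimal sketch of the failure mode and a defaulting guard, with "flag" standing in as a hypothetical name for whatever common.sh tests there:

  flag=""                                  # empty in this run, as the '[' '' -eq 1 ']' trace shows
  [ "$flag" -eq 1 ] && echo matched        # bash: [: : integer expression expected (test is false)
  [ "${flag:-0}" -eq 1 ] && echo matched   # defaulting expansion keeps the logic, avoids the error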
************************************ 00:05:42.513 START TEST nvmf_abort 00:05:42.513 ************************************ 00:05:42.513 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:42.513 * Looking for test storage... 00:05:42.513 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:42.513 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:42.513 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:05:42.513 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:42.513 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:42.513 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:42.513 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:42.513 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:42.513 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:05:42.513 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:05:42.513 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:05:42.513 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:05:42.513 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:05:42.513 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:05:42.513 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:05:42.513 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:42.513 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:05:42.513 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:05:42.513 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:42.513 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:42.513 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:05:42.513 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:05:42.513 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:42.513 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:05:42.513 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:05:42.513 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:05:42.513 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:05:42.513 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:42.513 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:05:42.513 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:05:42.513 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:42.513 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:42.513 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:05:42.513 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:42.513 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:42.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.513 --rc genhtml_branch_coverage=1 00:05:42.513 --rc genhtml_function_coverage=1 00:05:42.513 --rc genhtml_legend=1 00:05:42.513 --rc geninfo_all_blocks=1 00:05:42.513 --rc geninfo_unexecuted_blocks=1 00:05:42.513 00:05:42.513 ' 00:05:42.513 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:42.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.513 --rc genhtml_branch_coverage=1 00:05:42.513 --rc genhtml_function_coverage=1 00:05:42.513 --rc genhtml_legend=1 00:05:42.513 --rc geninfo_all_blocks=1 00:05:42.513 --rc geninfo_unexecuted_blocks=1 00:05:42.513 00:05:42.513 ' 00:05:42.513 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:42.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.513 --rc genhtml_branch_coverage=1 00:05:42.513 --rc genhtml_function_coverage=1 00:05:42.513 --rc genhtml_legend=1 00:05:42.513 --rc geninfo_all_blocks=1 00:05:42.513 --rc geninfo_unexecuted_blocks=1 00:05:42.513 00:05:42.513 ' 00:05:42.513 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:42.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.513 --rc genhtml_branch_coverage=1 00:05:42.513 --rc genhtml_function_coverage=1 00:05:42.513 --rc genhtml_legend=1 00:05:42.513 --rc geninfo_all_blocks=1 00:05:42.513 --rc geninfo_unexecuted_blocks=1 00:05:42.513 00:05:42.513 ' 00:05:42.514 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:42.514 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:05:42.514 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:05:42.514 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:42.514 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:42.514 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:42.514 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:42.514 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:42.514 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:42.514 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:42.514 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:42.514 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:42.514 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:42.514 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:42.514 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:42.514 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:42.514 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:42.514 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:42.514 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:42.514 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:05:42.514 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:42.514 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:42.514 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:42.514 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:42.514 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:42.514 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:42.514 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:05:42.514 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:42.514 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:05:42.514 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:42.514 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:42.514 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:42.514 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:42.514 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:42.514 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:42.514 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:42.514 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:42.514 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:42.514 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:42.514 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:05:42.514 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:05:42.514 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
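The trace that follows records nvmftestinit wiring the two ports for a phy run: the target-side port (cvl_0_0) is moved into a private network namespace while the initiator-side port (cvl_0_1) stays in the default one, addresses 10.0.0.2 and 10.0.0.1 are assigned, an iptables rule opens TCP port 4420, and a ping in each direction verifies the link. A condensed sketch of that sequence, assuming the same interface names; the full helper also flushes stale addresses first and tags the iptables rule with an SPDK_NVMF comment for later cleanup:

  ip netns add cvl_0_0_ns_spdk                                  # private namespace for the target
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # hand the target NIC to it
  ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator side, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                            # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1              # target -> initiator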
00:05:42.514 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:05:42.514 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:42.514 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # prepare_net_devs 00:05:42.514 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@436 -- # local -g is_hw=no 00:05:42.514 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # remove_spdk_ns 00:05:42.514 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:42.514 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:42.514 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:42.514 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:05:42.514 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:05:42.514 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:05:42.514 10:46:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:50.649 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:50.649 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:05:50.649 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:50.649 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:50.649 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:50.649 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:50.649 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:50.649 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:05:50.649 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:50.649 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:05:50.649 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:05:50.649 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:05:50.649 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:05:50.649 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:05:50.649 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:05:50.649 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:50.649 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:50.649 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:50.649 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:50.649 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:50.649 10:46:09 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:50.649 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:50.649 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:50.649 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:50.649 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:50.649 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:50.649 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:50.649 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:50.650 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:50.650 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:50.650 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:50.650 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:50.650 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:50.650 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:50.650 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:05:50.650 Found 0000:31:00.0 (0x8086 - 0x159b) 00:05:50.650 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:50.650 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:50.650 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:50.650 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:50.650 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:50.650 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:50.650 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:05:50.650 Found 0000:31:00.1 (0x8086 - 0x159b) 00:05:50.650 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:50.650 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:50.650 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:50.650 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:50.650 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:50.650 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:50.650 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:50.650 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:50.650 10:46:09 
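Both ports matched the e810 allow-list (device ID 0x159b under vendor 0x8086), so the loop that follows resolves each PCI function to its kernel interface through sysfs, which is where the "Found net devices under ..." lines come from. A standalone approximation of that lookup, using the two bus addresses found above:

  for pci in 0000:31:00.0 0000:31:00.1; do
      for path in /sys/bus/pci/devices/"$pci"/net/*; do
          [ -e "$path" ] || continue                 # function has no bound netdev
          echo "Found net devices under $pci: ${path##*/}"
      done
  done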
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:05:50.650 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:50.650 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:05:50.650 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:50.650 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:05:50.650 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:05:50.650 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:50.650 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:05:50.650 Found net devices under 0000:31:00.0: cvl_0_0 00:05:50.650 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:05:50.650 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:05:50.650 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:50.650 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:05:50.650 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:50.650 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:05:50.650 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:05:50.650 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:50.650 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:05:50.650 Found net devices under 0000:31:00.1: cvl_0_1 00:05:50.650 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:05:50.650 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:05:50.650 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # is_hw=yes 00:05:50.650 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:05:50.650 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:05:50.650 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:05:50.650 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:50.650 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:50.650 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:50.650 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:50.650 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:50.650 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:50.650 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:50.650 10:46:09 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:50.650 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:50.650 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:50.650 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:50.650 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:50.650 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:50.650 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:50.650 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:50.650 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:50.650 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:50.650 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:50.650 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:50.650 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:50.650 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:50.650 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:50.650 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:50.650 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:50.650 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.526 ms 00:05:50.650 00:05:50.650 --- 10.0.0.2 ping statistics --- 00:05:50.650 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:50.650 rtt min/avg/max/mdev = 0.526/0.526/0.526/0.000 ms 00:05:50.650 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:50.650 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:05:50.650 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:05:50.650 00:05:50.650 --- 10.0.0.1 ping statistics --- 00:05:50.650 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:50.650 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:05:50.650 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:50.650 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # return 0 00:05:50.650 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:05:50.650 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:50.650 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:05:50.650 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:05:50.650 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:50.650 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:05:50.650 10:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:05:50.650 10:46:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:05:50.650 10:46:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:05:50.650 10:46:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:50.650 10:46:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:50.650 10:46:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # nvmfpid=1616724 00:05:50.650 10:46:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # waitforlisten 1616724 00:05:50.650 10:46:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:50.650 10:46:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 1616724 ']' 00:05:50.650 10:46:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.650 10:46:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:50.650 10:46:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:50.650 10:46:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:50.650 10:46:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:50.650 [2024-10-09 10:46:10.098061] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:05:50.650 [2024-10-09 10:46:10.098124] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:50.650 [2024-10-09 10:46:10.236700] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:05:50.650 [2024-10-09 10:46:10.285103] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:50.650 [2024-10-09 10:46:10.305700] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:50.650 [2024-10-09 10:46:10.305735] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:50.650 [2024-10-09 10:46:10.305743] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:50.650 [2024-10-09 10:46:10.305750] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:50.650 [2024-10-09 10:46:10.305756] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:05:50.650 [2024-10-09 10:46:10.307144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:50.650 [2024-10-09 10:46:10.307300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:50.651 [2024-10-09 10:46:10.307301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:51.221 10:46:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:51.221 10:46:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:05:51.221 10:46:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:05:51.221 10:46:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:51.221 10:46:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:51.221 10:46:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:51.221 10:46:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:05:51.221 10:46:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.221 10:46:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:51.221 [2024-10-09 10:46:10.984825] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:51.221 10:46:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.221 10:46:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:05:51.221 10:46:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.221 10:46:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:51.221 Malloc0 00:05:51.221 10:46:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.221 10:46:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:51.221 10:46:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.221 10:46:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:51.221 Delay0 00:05:51.221 10:46:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.221 10:46:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 
-s SPDK0 00:05:51.221 10:46:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.221 10:46:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:51.221 10:46:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.221 10:46:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:05:51.221 10:46:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.221 10:46:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:51.221 10:46:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.221 10:46:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:05:51.221 10:46:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.221 10:46:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:51.221 [2024-10-09 10:46:11.061448] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:51.221 10:46:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.221 10:46:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:51.221 10:46:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.221 10:46:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:51.221 10:46:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.221 10:46:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:05:51.481 [2024-10-09 10:46:11.311664] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:05:54.023 Initializing NVMe Controllers 00:05:54.023 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:05:54.023 controller IO queue size 128 less than required 00:05:54.023 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:05:54.023 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:05:54.023 Initialization complete. Launching workers. 
00:05:54.023 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 125, failed: 28818 00:05:54.023 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28881, failed to submit 62 00:05:54.023 success 28822, unsuccessful 59, failed 0 00:05:54.023 10:46:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:05:54.023 10:46:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.023 10:46:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:54.023 10:46:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.023 10:46:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:05:54.023 10:46:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:05:54.023 10:46:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@514 -- # nvmfcleanup 00:05:54.023 10:46:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:05:54.023 10:46:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:54.023 10:46:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:05:54.023 10:46:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:54.023 10:46:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:54.023 rmmod nvme_tcp 00:05:54.023 rmmod nvme_fabrics 00:05:54.023 rmmod nvme_keyring 00:05:54.023 10:46:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:54.023 10:46:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:05:54.023 10:46:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:05:54.023 10:46:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@515 -- # '[' -n 1616724 ']' 00:05:54.023 10:46:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # killprocess 1616724 00:05:54.023 10:46:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 1616724 ']' 00:05:54.023 10:46:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 1616724 00:05:54.023 10:46:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:05:54.023 10:46:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:54.023 10:46:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1616724 00:05:54.023 10:46:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:05:54.023 10:46:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:05:54.023 10:46:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1616724' 00:05:54.023 killing process with pid 1616724 00:05:54.023 10:46:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 1616724 00:05:54.023 10:46:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 1616724 00:05:54.023 10:46:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:05:54.023 10:46:13 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:05:54.023 10:46:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:05:54.023 10:46:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:05:54.023 10:46:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- # iptables-restore 00:05:54.023 10:46:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- # iptables-save 00:05:54.023 10:46:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:05:54.023 10:46:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:54.023 10:46:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:54.023 10:46:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:54.023 10:46:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:54.023 10:46:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:55.935 10:46:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:55.935 00:05:55.935 real 0m13.502s 00:05:55.935 user 0m14.285s 00:05:55.935 sys 0m6.487s 00:05:55.935 10:46:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:55.935 10:46:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:55.935 ************************************ 00:05:55.935 END TEST nvmf_abort 00:05:55.935 ************************************ 00:05:55.935 10:46:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:55.935 10:46:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:05:55.935 10:46:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:55.935 10:46:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:55.935 ************************************ 00:05:55.935 START TEST nvmf_ns_hotplug_stress 00:05:55.935 ************************************ 00:05:55.935 10:46:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:56.196 * Looking for test storage... 
00:05:56.196 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:56.196 10:46:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:56.196 10:46:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:05:56.196 10:46:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:56.196 10:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:56.196 10:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:56.196 10:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:56.196 10:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:56.196 10:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:05:56.196 10:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:05:56.196 10:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:05:56.196 10:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:05:56.196 10:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:05:56.196 10:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:05:56.196 10:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:05:56.196 10:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:56.196 10:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:05:56.196 10:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:05:56.196 10:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:56.196 10:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:56.196 10:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:05:56.196 10:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:05:56.196 10:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:56.196 10:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:05:56.196 10:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:05:56.196 10:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:05:56.196 10:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:05:56.196 10:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:56.196 10:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:05:56.196 10:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:05:56.196 10:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:56.196 10:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:56.196 10:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:05:56.196 10:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:56.196 10:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:56.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.196 --rc genhtml_branch_coverage=1 00:05:56.196 --rc genhtml_function_coverage=1 00:05:56.196 --rc genhtml_legend=1 00:05:56.196 --rc geninfo_all_blocks=1 00:05:56.196 --rc geninfo_unexecuted_blocks=1 00:05:56.196 00:05:56.196 ' 00:05:56.196 10:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:56.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.196 --rc genhtml_branch_coverage=1 00:05:56.196 --rc genhtml_function_coverage=1 00:05:56.196 --rc genhtml_legend=1 00:05:56.196 --rc geninfo_all_blocks=1 00:05:56.196 --rc geninfo_unexecuted_blocks=1 00:05:56.197 00:05:56.197 ' 00:05:56.197 10:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:56.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.197 --rc genhtml_branch_coverage=1 00:05:56.197 --rc genhtml_function_coverage=1 00:05:56.197 --rc genhtml_legend=1 00:05:56.197 --rc geninfo_all_blocks=1 00:05:56.197 --rc geninfo_unexecuted_blocks=1 00:05:56.197 00:05:56.197 ' 00:05:56.197 10:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:56.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.197 --rc genhtml_branch_coverage=1 00:05:56.197 --rc genhtml_function_coverage=1 00:05:56.197 --rc genhtml_legend=1 00:05:56.197 --rc geninfo_all_blocks=1 00:05:56.197 --rc geninfo_unexecuted_blocks=1 00:05:56.197 00:05:56.197 ' 00:05:56.197 10:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:56.197 10:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:05:56.197 10:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:56.197 10:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:56.197 10:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:56.197 10:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:56.197 10:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:56.197 10:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:56.197 10:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:56.197 10:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:56.197 10:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:56.197 10:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:56.197 10:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:56.197 10:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:56.197 10:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:56.197 10:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:56.197 10:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:56.197 10:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:56.197 10:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:56.197 10:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:05:56.197 10:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:56.197 10:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:56.197 10:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:56.197 10:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:56.197 10:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:56.197 10:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:56.197 10:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:05:56.197 10:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:56.197 10:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:05:56.197 10:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:56.197 10:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:56.197 10:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:56.197 10:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:56.197 10:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:56.197 10:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:56.197 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:56.197 10:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:56.197 10:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:56.197 10:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:56.197 10:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:56.197 10:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:05:56.197 10:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:05:56.197 10:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:56.197 10:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:05:56.197 10:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:05:56.197 10:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:05:56.197 10:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:56.197 10:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:56.197 10:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:56.197 10:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:05:56.197 10:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:05:56.197 10:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:05:56.197 10:46:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:04.446 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:04.446 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:06:04.446 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:04.446 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:04.446 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:04.446 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:04.446 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:04.446 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:06:04.446 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:04.446 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:06:04.446 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:06:04.446 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:06:04.446 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:06:04.446 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:06:04.446 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:06:04.446 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:04.446 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:04.446 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:04.446 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:04.446 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:04.446 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:04.446 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:04.446 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:04.446 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:04.446 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:04.446 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:04.446 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:04.446 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:04.446 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:04.446 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:04.446 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:04.446 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:04.446 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:04.446 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:04.446 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:06:04.446 Found 0000:31:00.0 (0x8086 - 0x159b) 00:06:04.446 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:04.446 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:04.446 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:04.446 
10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:04.447 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:04.447 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:04.447 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:06:04.447 Found 0000:31:00.1 (0x8086 - 0x159b) 00:06:04.447 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:04.447 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:04.447 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:04.447 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:04.447 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:04.447 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:04.447 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:04.447 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:04.447 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:06:04.447 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:04.447 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:06:04.447 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:04.447 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:06:04.447 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:06:04.447 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:04.447 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:06:04.447 Found net devices under 0000:31:00.0: cvl_0_0 00:06:04.447 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:06:04.447 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:06:04.447 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:04.447 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:06:04.447 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:04.447 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:06:04.447 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:06:04.447 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:04.447 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:06:04.447 Found net devices under 0000:31:00.1: cvl_0_1 00:06:04.447 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:06:04.447 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:06:04.447 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:06:04.447 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:06:04.447 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:06:04.447 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:06:04.447 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:04.447 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:04.447 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:04.447 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:04.447 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:04.447 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:04.447 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:04.447 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:04.447 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:04.447 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:04.447 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:04.447 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:04.447 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:04.447 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:04.447 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:04.447 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:04.447 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:04.447 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:04.447 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:04.447 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:04.447 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:04.447 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:04.447 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:04.447 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:04.447 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.696 ms 00:06:04.447 00:06:04.447 --- 10.0.0.2 ping statistics --- 00:06:04.447 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:04.447 rtt min/avg/max/mdev = 0.696/0.696/0.696/0.000 ms 00:06:04.447 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:04.447 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:04.447 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms 00:06:04.447 00:06:04.447 --- 10.0.0.1 ping statistics --- 00:06:04.447 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:04.447 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:06:04.447 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:04.447 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # return 0 00:06:04.447 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:06:04.447 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:04.447 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:06:04.447 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:06:04.447 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:04.447 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:06:04.447 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:06:04.447 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:06:04.447 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:06:04.447 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:04.447 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:04.447 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # nvmfpid=1621782 00:06:04.447 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # waitforlisten 1621782 00:06:04.447 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:04.447 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 
1621782 ']' 00:06:04.447 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.447 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:04.447 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:04.447 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:04.447 10:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:04.447 [2024-10-09 10:46:23.717525] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:06:04.447 [2024-10-09 10:46:23.717591] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:04.447 [2024-10-09 10:46:23.859074] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:04.447 [2024-10-09 10:46:23.907676] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:04.447 [2024-10-09 10:46:23.927074] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:04.447 [2024-10-09 10:46:23.927109] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:04.447 [2024-10-09 10:46:23.927120] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:04.447 [2024-10-09 10:46:23.927128] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:04.447 [2024-10-09 10:46:23.927133] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
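[annotation] The trace above is the harness's nvmftestinit + nvmfappstart sequence: the target-side port (cvl_0_0) is moved into its own network namespace, both ends are addressed and brought up, port 4420 is opened, connectivity is ping-checked, and nvmf_tgt is launched inside the namespace on cores 1-3 (-m 0xE, matching the three reactor notices that follow). Below is a minimal hand-written sketch of what those traced commands amount to. Interface names, addresses, the core mask, and all flags are taken from the log; SPDK_DIR and the rpc_get_methods polling loop (standing in for the harness's waitforlisten helper) are illustrative assumptions, not part of this run.

#!/usr/bin/env bash
# Sketch only: reconstructs the netns topology and target launch seen in the trace.
set -euo pipefail

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # assumed checkout path
NS=cvl_0_0_ns_spdk

# Move the target-side port into its own namespace and address both ends.
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"
ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up

# Open the default NVMe/TCP port on the initiator interface, then sanity-ping both ways.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

# Launch nvmf_tgt inside the namespace on cores 1-3 (-m 0xE) and wait for its RPC socket.
ip netns exec "$NS" "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
until "$SPDK_DIR/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done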
00:06:04.447 [2024-10-09 10:46:23.928543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:04.447 [2024-10-09 10:46:23.928684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:04.447 [2024-10-09 10:46:23.928685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:04.708 10:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:04.708 10:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:06:04.708 10:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:06:04.708 10:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:04.708 10:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:04.708 10:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:04.708 10:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:06:04.708 10:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:04.969 [2024-10-09 10:46:24.719670] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:04.969 10:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:04.969 10:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:05.229 [2024-10-09 10:46:25.120396] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:05.229 10:46:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:05.489 10:46:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:06:05.749 Malloc0 00:06:05.749 10:46:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:05.749 Delay0 00:06:05.749 10:46:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:06.008 10:46:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:06:06.268 NULL1 00:06:06.268 10:46:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:06:06.268 10:46:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:06:06.268 10:46:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1622220 00:06:06.268 10:46:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1622220 00:06:06.268 10:46:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:06.528 10:46:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:06.789 10:46:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:06:06.789 10:46:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:06:06.789 true 00:06:07.049 10:46:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1622220 00:06:07.049 10:46:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:07.049 10:46:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:07.309 10:46:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:06:07.309 10:46:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:06:07.570 true 00:06:07.570 10:46:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1622220 00:06:07.570 10:46:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:07.570 10:46:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:07.830 10:46:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:06:07.830 10:46:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:06:08.090 true 00:06:08.090 10:46:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1622220 00:06:08.090 10:46:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:08.090 10:46:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:08.351 10:46:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:06:08.351 10:46:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:06:08.611 true 00:06:08.611 10:46:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1622220 00:06:08.611 10:46:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:08.611 10:46:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:08.871 10:46:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:06:08.871 10:46:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:06:09.131 true 00:06:09.131 10:46:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1622220 00:06:09.131 10:46:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:09.390 10:46:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:09.391 10:46:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:06:09.391 10:46:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:06:09.650 true 00:06:09.650 10:46:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1622220 00:06:09.650 10:46:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:09.920 10:46:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:09.920 10:46:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:06:09.920 10:46:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:06:10.180 true 00:06:10.180 10:46:30 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1622220 00:06:10.180 10:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:10.440 10:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:10.440 10:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:06:10.440 10:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:06:10.700 true 00:06:10.700 10:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1622220 00:06:10.700 10:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:10.960 10:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:10.960 10:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:06:10.960 10:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:06:11.220 true 00:06:11.220 10:46:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1622220 00:06:11.220 10:46:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:11.481 10:46:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:11.742 10:46:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:06:11.742 10:46:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:06:11.742 true 00:06:11.742 10:46:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1622220 00:06:11.742 10:46:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:12.003 10:46:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:12.264 10:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:06:12.264 10:46:32 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:06:12.264 true 00:06:12.264 10:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1622220 00:06:12.264 10:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:12.524 10:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:12.784 10:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:06:12.784 10:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:06:12.784 true 00:06:12.784 10:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1622220 00:06:12.784 10:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:13.044 10:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:13.305 10:46:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:06:13.305 10:46:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:06:13.305 true 00:06:13.305 10:46:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1622220 00:06:13.305 10:46:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:13.567 10:46:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:13.828 10:46:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:06:13.828 10:46:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:06:13.828 true 00:06:13.828 10:46:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1622220 00:06:13.828 10:46:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:14.092 10:46:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:06:14.353 10:46:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:06:14.353 10:46:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:06:14.353 true 00:06:14.612 10:46:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1622220 00:06:14.612 10:46:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:14.612 10:46:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:14.873 10:46:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:06:14.873 10:46:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:06:15.133 true 00:06:15.133 10:46:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1622220 00:06:15.133 10:46:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:15.133 10:46:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:15.394 10:46:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:06:15.394 10:46:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:06:15.654 true 00:06:15.654 10:46:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1622220 00:06:15.654 10:46:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:15.654 10:46:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:15.915 10:46:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:06:15.915 10:46:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:06:16.175 true 00:06:16.175 10:46:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1622220 00:06:16.175 10:46:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:16.175 10:46:36 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:16.436 10:46:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:06:16.436 10:46:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:06:16.697 true 00:06:16.697 10:46:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1622220 00:06:16.697 10:46:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:16.958 10:46:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:16.958 10:46:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:06:16.958 10:46:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:06:17.218 true 00:06:17.218 10:46:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1622220 00:06:17.218 10:46:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:17.478 10:46:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:17.478 10:46:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:06:17.478 10:46:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:06:17.736 true 00:06:17.736 10:46:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1622220 00:06:17.736 10:46:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:17.995 10:46:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:17.995 10:46:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:06:17.995 10:46:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:06:18.255 true 00:06:18.255 10:46:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1622220 00:06:18.255 10:46:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:18.515 10:46:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:18.515 10:46:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:06:18.515 10:46:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:06:18.776 true 00:06:18.776 10:46:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1622220 00:06:18.776 10:46:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:19.035 10:46:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:19.294 10:46:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:06:19.294 10:46:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:06:19.294 true 00:06:19.294 10:46:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1622220 00:06:19.294 10:46:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:19.554 10:46:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:19.814 10:46:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:06:19.814 10:46:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:06:19.814 true 00:06:19.814 10:46:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1622220 00:06:19.814 10:46:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:20.075 10:46:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:20.338 10:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:06:20.338 10:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:06:20.338 true 00:06:20.599 10:46:40 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1622220 00:06:20.599 10:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:20.599 10:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:20.860 10:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:06:20.860 10:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:06:21.121 true 00:06:21.121 10:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1622220 00:06:21.121 10:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:21.121 10:46:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:21.381 10:46:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:06:21.381 10:46:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:06:21.642 true 00:06:21.642 10:46:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1622220 00:06:21.642 10:46:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:21.642 10:46:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:21.903 10:46:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:06:21.903 10:46:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:06:22.163 true 00:06:22.163 10:46:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1622220 00:06:22.163 10:46:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:22.163 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:22.423 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:06:22.423 10:46:42 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:06:22.684 true 00:06:22.684 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1622220 00:06:22.684 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:22.684 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:22.945 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:06:22.945 10:46:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:06:23.205 true 00:06:23.205 10:46:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1622220 00:06:23.205 10:46:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:23.466 10:46:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:23.466 10:46:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:06:23.466 10:46:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:06:23.727 true 00:06:23.727 10:46:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1622220 00:06:23.727 10:46:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:23.987 10:46:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:23.987 10:46:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:06:23.987 10:46:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:06:24.248 true 00:06:24.248 10:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1622220 00:06:24.248 10:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:24.508 10:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:06:24.508 10:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:06:24.508 10:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:06:24.769 true 00:06:24.769 10:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1622220 00:06:24.769 10:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:25.030 10:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:25.030 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:06:25.030 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:06:25.290 true 00:06:25.290 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1622220 00:06:25.291 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:25.552 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:25.811 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:06:25.811 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:06:25.811 true 00:06:25.811 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1622220 00:06:25.811 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:26.071 10:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:26.330 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:06:26.330 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:06:26.330 true 00:06:26.330 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1622220 00:06:26.330 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:26.590 10:46:46 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:26.851 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:06:26.851 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:06:26.851 true 00:06:27.112 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1622220 00:06:27.112 10:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:27.112 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:27.374 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:06:27.374 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:06:27.634 true 00:06:27.634 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1622220 00:06:27.634 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:27.634 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:27.894 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:06:27.894 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:06:28.155 true 00:06:28.155 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1622220 00:06:28.155 10:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:28.155 10:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:28.414 10:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:06:28.414 10:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:06:28.675 true 00:06:28.675 10:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1622220 00:06:28.675 10:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:28.935 10:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:28.935 10:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:06:28.935 10:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:06:29.195 true 00:06:29.195 10:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1622220 00:06:29.195 10:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:29.456 10:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:29.456 10:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:06:29.456 10:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:06:29.716 true 00:06:29.717 10:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1622220 00:06:29.717 10:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:29.977 10:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:29.977 10:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:06:29.977 10:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:06:30.305 true 00:06:30.305 10:46:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1622220 00:06:30.305 10:46:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:30.305 10:46:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:30.585 10:46:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:06:30.585 10:46:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:06:30.845 true 00:06:30.845 10:46:50 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1622220 00:06:30.845 10:46:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:30.845 10:46:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:31.104 10:46:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:06:31.104 10:46:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:06:31.364 true 00:06:31.364 10:46:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1622220 00:06:31.364 10:46:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:31.624 10:46:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:31.624 10:46:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:06:31.624 10:46:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:06:31.885 true 00:06:31.885 10:46:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1622220 00:06:31.885 10:46:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:32.145 10:46:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:32.145 10:46:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:06:32.145 10:46:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:06:32.406 true 00:06:32.406 10:46:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1622220 00:06:32.406 10:46:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:32.666 10:46:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:32.666 10:46:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:06:32.666 10:46:52 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:06:32.927 true 00:06:32.927 10:46:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1622220 00:06:32.927 10:46:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:33.187 10:46:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:33.448 10:46:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:06:33.448 10:46:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:06:33.448 true 00:06:33.448 10:46:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1622220 00:06:33.448 10:46:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:33.708 10:46:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:33.968 10:46:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:06:33.968 10:46:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:06:33.968 true 00:06:33.968 10:46:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1622220 00:06:33.968 10:46:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:34.229 10:46:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:34.489 10:46:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:06:34.489 10:46:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:06:34.489 true 00:06:34.489 10:46:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1622220 00:06:34.489 10:46:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:34.749 10:46:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:06:35.010 10:46:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 00:06:35.010 10:46:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:06:35.010 true 00:06:35.010 10:46:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1622220 00:06:35.010 10:46:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:35.271 10:46:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:35.532 10:46:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054 00:06:35.532 10:46:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054 00:06:35.532 true 00:06:35.532 10:46:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1622220 00:06:35.532 10:46:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:35.794 10:46:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:36.055 10:46:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055 00:06:36.055 10:46:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055 00:06:36.055 true 00:06:36.055 10:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1622220 00:06:36.055 10:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:36.316 10:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:36.576 10:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1056 00:06:36.576 10:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1056 00:06:36.576 true 00:06:36.577 10:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1622220 00:06:36.577 10:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:36.838 
Initializing NVMe Controllers
00:06:36.838 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:06:36.838 Controller IO queue size 128, less than required.
00:06:36.838 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:36.838 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:06:36.838 Initialization complete. Launching workers.
00:06:36.838 ========================================================
00:06:36.838                                                                            Latency(us)
00:06:36.838 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:06:36.838 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   30435.88      14.86    4205.34    1437.18    8467.86
00:06:36.838 ========================================================
00:06:36.838 Total                                                                    :   30435.88      14.86    4205.34    1437.18    8467.86
00:06:36.838 
00:06:36.838 10:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:37.099 10:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1057
00:06:37.099 10:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1057
00:06:37.099 true
00:06:37.099 10:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1622220
00:06:37.099 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1622220) - No such process
00:06:37.099 10:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1622220
00:06:37.099 10:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:37.360 10:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:37.621 10:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:06:37.621 10:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:06:37.621 10:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:06:37.621 10:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:37.621 10:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:06:37.621 null0
00:06:37.621 10:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:37.621 10:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:37.621 10:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:06:37.882 null1
00:06:37.882 10:46:57 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:37.882 10:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:37.882 10:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:06:37.882 null2 00:06:38.143 10:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:38.143 10:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:38.143 10:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:06:38.143 null3 00:06:38.143 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:38.143 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:38.143 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:06:38.404 null4 00:06:38.404 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:38.404 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:38.404 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:06:38.404 null5 00:06:38.665 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:38.665 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:38.665 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:06:38.665 null6 00:06:38.665 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:38.665 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:38.665 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:06:38.927 null7 00:06:38.927 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:38.927 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:38.927 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:06:38.927 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:38.927 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
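Before the multi-threaded phase that starts here, note what the long stretch of @44-@50 entries above (ending at the perf summary) was doing: while a background perf process (PID 1622220 in this run) drives I/O against the subsystem, the script repeatedly detaches namespace 1, re-adds it backed by the Delay0 bdev, and hot-resizes the NULL1 bdev one unit at a time. A minimal sketch of that loop, reconstructed from the command fragments in the log — the rpc.py calls, bdev names, and null_size counter appear verbatim above, while the loop framing and the perf_pid variable name are assumptions:

  # Resize stress phase, ns_hotplug_stress.sh@44-50 (reconstructed sketch, not the verbatim script).
  # perf_pid is assumed to hold the PID of the backgrounded perf workload (1622220 in this run).
  while kill -0 "$perf_pid"; do                                      # @44: loop while perf is alive
      rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1   # @45: detach namespace 1
      rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 # @46: re-add it backed by Delay0
      null_size=$((null_size + 1))                                   # @49: grow the target size
      rpc.py bdev_null_resize NULL1 "$null_size"                     # @50: hot-resize NULL1 under load
  done

Once perf exits, the kill -0 probe fails ("No such process"), the loop falls through to wait (@53), and namespaces 1 and 2 are removed (@54-@55) before the eight null bdevs are created and the worker loop begins.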
00:06:38.927 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:38.927 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:38.927 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:06:38.927 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:06:38.927 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:38.927 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.927 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:38.927 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:38.927 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:38.927 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:38.927 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:06:38.927 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:06:38.927 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:38.927 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.927 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:38.927 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:38.927 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:38.927 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:38.927 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:06:38.927 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:06:38.927 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:38.927 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.927 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:38.927 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
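From here on the eight workers' output interleaves. Each add_remove invocation (@14) binds one namespace ID to one null bdev and toggles it ten times (@16-@18); the structure below is pieced together from the local declarations, loop counters, and RPC calls visible in the log, so treat it as a sketch rather than the script verbatim:

  # add_remove <nsid> <bdev>: hot-plug one namespace ten times (ns_hotplug_stress.sh@14-18, reconstructed).
  add_remove() {
      local nsid=$1 bdev=$2                                          # @14: e.g. add_remove 1 null0
      for ((i = 0; i < 10; i++)); do                                 # @16: ten add/remove rounds
          rpc.py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"   # @17
          rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"           # @18
      done
  }

Because eight of these run concurrently against the same subsystem, the @17/@18 entries that follow arrive out of order across namespace IDs — exactly the hot-plug contention the test is exercising.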
00:06:38.927 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:38.927 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:38.927 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:06:38.927 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:06:38.927 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:38.927 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.927 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:38.927 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:38.927 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:38.927 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:38.927 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:06:38.927 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:06:38.927 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:38.927 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.927 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:38.927 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:38.927 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:38.927 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:38.927 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:06:38.927 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:06:38.927 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:38.927 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.927 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
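The pids+=($!) entries above collect one PID per worker; a few lines below, the @66 entry waits on all eight (1628979 1628984 1628987 1628990 1628992 1628995 1628999 1629002 in this run). The surrounding driver loop (@58-@66), again reconstructed from the logged fragments:

  # Multi-threaded phase driver, ns_hotplug_stress.sh@58-66 (reconstructed sketch).
  nthreads=8                                                         # @58
  pids=()                                                            # @58
  for ((i = 0; i < nthreads; i++)); do                               # @59
      rpc.py bdev_null_create "null$i" 100 4096                      # @60: 100 MiB null bdev, 4096-byte blocks
  done
  for ((i = 0; i < nthreads; i++)); do                               # @62
      add_remove $((i + 1)) "null$i" &                               # @63: namespace IDs 1..8 over null0..null7
      pids+=($!)                                                     # @64: remember each worker's PID
  done
  wait "${pids[@]}"                                                  # @66: block until every worker finishes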
00:06:38.927 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:38.927 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:38.927 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:38.927 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:06:38.927 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:06:38.927 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:38.927 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:38.927 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:38.927 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:38.927 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.927 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1628979 1628984 1628987 1628990 1628992 1628995 1628999 1629002 00:06:38.927 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:38.927 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:06:38.927 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:06:38.927 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:38.927 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.927 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:39.189 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:39.189 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:39.189 10:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:39.189 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:39.189 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:39.189 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:39.189 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:39.189 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:39.189 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.189 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.189 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:39.189 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.189 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.189 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:39.189 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.189 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.189 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:39.189 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.189 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.189 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:39.450 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.450 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.450 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:39.450 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.450 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.450 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:39.450 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.450 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.450 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:39.450 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.450 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.450 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:39.450 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:39.450 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:39.450 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:39.450 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:39.450 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:39.450 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:39.450 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:39.711 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:39.711 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.711 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.711 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:39.711 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.711 10:46:59 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.711 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:39.711 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.711 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.711 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:39.711 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.711 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.711 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:39.711 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.711 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.711 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:39.711 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.711 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.711 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:39.711 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.711 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.711 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:39.711 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.711 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.711 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:39.711 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:39.711 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:39.712 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:39.973 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:39.973 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:39.973 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:39.973 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:39.973 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:39.973 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.973 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.973 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:39.973 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.973 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.973 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:39.973 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.973 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.973 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:39.973 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.973 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.973 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:39.973 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.973 10:46:59 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.973 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:39.973 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.973 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.973 10:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:40.234 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.234 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.234 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:40.234 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:40.234 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.234 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.234 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:40.234 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:40.234 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:40.234 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:40.234 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:40.234 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:40.234 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:40.234 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:40.234 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.234 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.234 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:40.496 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.496 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.496 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:40.496 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.496 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.496 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:40.496 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.496 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.496 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:40.496 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.496 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.496 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:40.496 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.496 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.496 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:40.496 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.496 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.496 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:40.496 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.496 10:47:00 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.496 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:40.496 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:40.496 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:40.496 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:40.496 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:40.496 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:40.757 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:40.757 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:40.757 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:40.757 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.757 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.757 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:40.757 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.757 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.757 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:40.757 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.757 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.757 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:40.757 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.757 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.757 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:40.758 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.758 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.758 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:40.758 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.758 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.758 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:40.758 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.758 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.758 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:40.758 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:40.758 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.758 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.758 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:41.019 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:41.019 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:41.019 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:41.019 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:41.019 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:41.019 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.019 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.019 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:41.019 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.019 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.019 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:41.019 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:41.019 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:41.019 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.019 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.019 10:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:41.281 10:47:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.281 10:47:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.281 10:47:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:41.281 10:47:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.281 10:47:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.281 10:47:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:41.281 10:47:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.281 10:47:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.281 10:47:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:41.281 10:47:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:41.281 10:47:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:41.281 10:47:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.281 10:47:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.281 10:47:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:41.281 10:47:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.281 10:47:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.281 10:47:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:41.281 10:47:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:41.281 10:47:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:41.281 10:47:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:41.281 10:47:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:41.281 10:47:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.281 10:47:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.281 10:47:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:41.542 10:47:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.542 10:47:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.542 10:47:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:41.542 10:47:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:06:41.542 10:47:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.542 10:47:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.542 10:47:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:41.542 10:47:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:41.542 10:47:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.542 10:47:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.542 10:47:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:41.542 10:47:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.542 10:47:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.542 10:47:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:41.542 10:47:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:41.542 10:47:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:41.542 10:47:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.542 10:47:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.543 10:47:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:41.543 10:47:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.543 10:47:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.543 10:47:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:41.543 10:47:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:41.804 10:47:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.804 10:47:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.804 10:47:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:41.804 10:47:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:41.804 10:47:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.804 10:47:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.804 10:47:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:41.804 10:47:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:41.804 10:47:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.804 10:47:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.804 10:47:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:41.804 10:47:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:41.804 10:47:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.804 10:47:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.804 10:47:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:41.804 10:47:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:41.804 10:47:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.804 10:47:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.804 10:47:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:41.804 10:47:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.804 10:47:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.804 10:47:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 
5 nqn.2016-06.io.spdk:cnode1 null4 00:06:41.804 10:47:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:41.804 10:47:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:41.804 10:47:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:42.066 10:47:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.066 10:47:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.066 10:47:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:42.066 10:47:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:42.066 10:47:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.066 10:47:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.066 10:47:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:42.066 10:47:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:42.066 10:47:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.066 10:47:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.066 10:47:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:42.066 10:47:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.066 10:47:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.066 10:47:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:42.066 10:47:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.066 10:47:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.066 10:47:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 
nqn.2016-06.io.spdk:cnode1 null1 00:06:42.066 10:47:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:42.066 10:47:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:42.066 10:47:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.066 10:47:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.066 10:47:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:42.066 10:47:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:42.327 10:47:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.327 10:47:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.327 10:47:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:42.327 10:47:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:42.327 10:47:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:42.327 10:47:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:42.327 10:47:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.327 10:47:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.327 10:47:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:42.327 10:47:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.327 10:47:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.327 10:47:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:42.327 10:47:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 
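The churn above is the heart of target/ns_hotplug_stress.sh: lines 16-18 loop ten times, re-attaching the null bdevs null0..null7 to nqn.2016-06.io.spdk:cnode1 as namespaces 1-8 and detaching them again while the target is live. A minimal bash sketch of that loop, reconstructed from the @16-@18 trace lines rather than copied from the script; backgrounding the RPCs is an assumption made to match the out-of-order completions seen in the trace:

    #!/usr/bin/env bash
    # Sketch of the namespace hotplug stress loop (not the verbatim script).
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    i=0
    while (( i < 10 )); do
        for n in {1..8}; do
            # Attach null bdev null(n-1) as namespace n; may race with removals.
            "$rpc" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))" &
        done
        for n in {1..8}; do
            "$rpc" nvmf_subsystem_remove_ns "$nqn" "$n" &
        done
        wait    # drain this pass before starting the next
        (( ++i ))
    done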
00:06:42.327 10:47:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.327 10:47:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.327 10:47:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:42.327 10:47:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:42.588 10:47:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.588 10:47:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.588 10:47:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.588 10:47:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.588 10:47:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.588 10:47:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.588 10:47:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:42.588 10:47:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:42.588 10:47:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:42.588 10:47:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:42.588 10:47:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.588 10:47:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.588 10:47:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.588 10:47:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.588 10:47:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:42.588 10:47:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.588 10:47:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.588 10:47:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.588 10:47:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 
10 ))
00:06:42.588 10:47:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:42.588 10:47:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:42.849 10:47:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:42.849 10:47:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:42.849 10:47:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:06:42.849 10:47:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini
00:06:42.849 10:47:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # nvmfcleanup
00:06:42.849 10:47:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync
00:06:42.849 10:47:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:06:42.849 10:47:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e
00:06:42.849 10:47:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20}
00:06:42.849 10:47:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:06:42.849 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:06:42.849 10:47:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:06:42.849 10:47:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e
00:06:42.849 10:47:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0
00:06:42.849 10:47:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@515 -- # '[' -n 1621782 ']'
00:06:42.849 10:47:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # killprocess 1621782
00:06:42.849 10:47:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 1621782 ']'
00:06:42.849 10:47:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 1621782
00:06:42.850 10:47:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname
00:06:42.850 10:47:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:06:42.850 10:47:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1621782
00:06:42.850 10:47:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:06:42.850 10:47:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:06:42.850 10:47:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1621782'
killing process with pid 1621782
00:06:42.850 10:47:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 1621782
00:06:42.850 10:47:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 1621782
00:06:43.141 10:47:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']'
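nvmftestfini, traced above, unwinds the run in two stages: nvmfcleanup retries unloading the kernel transport modules until their reference counts drain, then killprocess stops the target reactor (pid 1621782 in this run). A condensed sketch of the same pattern, paraphrased from the @121-@129 and @950-@974 trace lines; this is a simplification, not the verbatim common.sh helpers:

    # Stage 1: unload NVMe-oF kernel modules. modprobe -r can fail while
    # connections are still draining, so retry up to 20 times.
    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break
    done
    modprobe -v -r nvme-fabrics
    set -e

    # Stage 2: kill the target app, but only if it is alive and not a sudo wrapper.
    pid=1621782
    if kill -0 "$pid" 2>/dev/null; then
        process_name=$(ps --no-headers -o comm= "$pid")
        if [ "$process_name" != sudo ]; then
            echo "killing process with pid $pid"
            kill "$pid"
            wait "$pid"   # valid here because nvmf_tgt was launched by this shell
        fi
    fi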
00:06:43.141 10:47:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:06:43.141 10:47:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:06:43.141 10:47:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr
00:06:43.141 10:47:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-save
00:06:43.141 10:47:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:06:43.141 10:47:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-restore
00:06:43.141 10:47:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:06:43.141 10:47:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns
00:06:43.141 10:47:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:06:43.141 10:47:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:06:43.141 10:47:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:06:45.085 10:47:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:06:45.085
00:06:45.085 real 0m49.145s
00:06:45.085 user 3m20.104s
00:06:45.086 sys 0m16.996s
00:06:45.086 10:47:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:45.086 10:47:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:06:45.086 ************************************
00:06:45.086 END TEST nvmf_ns_hotplug_stress
00:06:45.086 ************************************
00:06:45.086 10:47:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:06:45.086 10:47:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:06:45.086 10:47:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:45.086 10:47:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:06:45.086 ************************************
00:06:45.086 START TEST nvmf_delete_subsystem
00:06:45.086 ************************************
00:06:45.086 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:06:45.347 * Looking for test storage...
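Before the next test starts, iptr (the three @789 trace lines above) scrubs the firewall rules the harness installed. Every rule it adds carries an SPDK_NVMF comment, so cleanup reduces to filtering the saved ruleset:

    # Reload the ruleset minus every rule tagged with an SPDK_NVMF comment.
    iptables-save | grep -v SPDK_NVMF | iptables-restore

The tag is applied at insertion time: the ipts wrapper seen later in this log appends -m comment --comment 'SPDK_NVMF:...' to each rule it installs, which is what makes the grep safe.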
00:06:45.347 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:06:45.347 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:06:45.347 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version
00:06:45.347 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:06:45.347 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:06:45.347 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:45.347 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:45.347 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:45.347 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-:
00:06:45.347 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1
00:06:45.347 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-:
00:06:45.347 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2
00:06:45.347 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<'
00:06:45.347 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2
00:06:45.347 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1
00:06:45.347 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:45.347 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in
00:06:45.347 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1
00:06:45.347 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:45.347 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) )) 00:06:45.347 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:06:45.347 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:06:45.347 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:45.347 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:06:45.347 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:06:45.347 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:06:45.347 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:06:45.347 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:45.347 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:06:45.347 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:06:45.347 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:45.347 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:45.347 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:06:45.347 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:45.347 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:45.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.347 --rc genhtml_branch_coverage=1 00:06:45.347 --rc genhtml_function_coverage=1 00:06:45.347 --rc genhtml_legend=1 00:06:45.347 --rc geninfo_all_blocks=1 00:06:45.347 --rc geninfo_unexecuted_blocks=1 00:06:45.347 00:06:45.347 ' 00:06:45.347 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:45.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.347 --rc genhtml_branch_coverage=1 00:06:45.347 --rc genhtml_function_coverage=1 00:06:45.347 --rc genhtml_legend=1 00:06:45.347 --rc geninfo_all_blocks=1 00:06:45.347 --rc geninfo_unexecuted_blocks=1 00:06:45.347 00:06:45.347 ' 00:06:45.348 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:45.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.348 --rc genhtml_branch_coverage=1 00:06:45.348 --rc genhtml_function_coverage=1 00:06:45.348 --rc genhtml_legend=1 00:06:45.348 --rc geninfo_all_blocks=1 00:06:45.348 --rc geninfo_unexecuted_blocks=1 00:06:45.348 00:06:45.348 ' 00:06:45.348 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:45.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.348 --rc genhtml_branch_coverage=1 00:06:45.348 --rc genhtml_function_coverage=1 00:06:45.348 --rc genhtml_legend=1 00:06:45.348 --rc geninfo_all_blocks=1 00:06:45.348 --rc geninfo_unexecuted_blocks=1 00:06:45.348 00:06:45.348 ' 00:06:45.348 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:45.348 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:06:45.348 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:45.348 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:45.348 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:45.348 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:45.348 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:45.348 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:45.348 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:45.348 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:45.348 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:45.348 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:45.348 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:45.348 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:45.348 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:45.348 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:45.348 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:45.348 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:45.348 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:45.348 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:06:45.348 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:45.348 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:45.348 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:45.348 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.348 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.348 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.348 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:06:45.348 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.348 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:06:45.348 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:45.348 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:45.348 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:45.348 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:45.348 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:45.348 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:45.348 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:45.348 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:45.348 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:45.348 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:45.348 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:06:45.348 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:06:45.348 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:45.348 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:06:45.348 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:06:45.348 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:06:45.348 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:45.348 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:45.348 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:45.348 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:06:45.348 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:06:45.348 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:06:45.348 10:47:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:53.492 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:53.492 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:06:53.492 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:53.492 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:53.492 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:53.492 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:53.492 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:53.492 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:06:53.492 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:53.492 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:06:53.492 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:06:53.492 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:06:53.492 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:06:53.492 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:06:53.492 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:06:53.492 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:53.492 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:53.492 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:53.492 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:53.492 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:53.492 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:53.492 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:53.492 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:53.492 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:53.492 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:53.492 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:53.492 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:53.492 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:53.492 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:53.492 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:53.492 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:53.492 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:53.492 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:53.492 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:53.492 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:06:53.492 Found 0000:31:00.0 (0x8086 - 0x159b) 00:06:53.492 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:53.492 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:53.492 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:53.492 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:53.492 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:53.492 
10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:53.493 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:06:53.493 Found 0000:31:00.1 (0x8086 - 0x159b) 00:06:53.493 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:53.493 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:53.493 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:53.493 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:53.493 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:53.493 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:53.493 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:53.493 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:53.493 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:06:53.493 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:53.493 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:06:53.493 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:53.493 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:06:53.493 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:06:53.493 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:53.493 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:06:53.493 Found net devices under 0000:31:00.0: cvl_0_0 00:06:53.493 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:06:53.493 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:06:53.493 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:53.493 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:06:53.493 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:53.493 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:06:53.493 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:06:53.493 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:53.493 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:06:53.493 Found net devices under 0000:31:00.1: cvl_0_1 
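The 'Found net devices under ...' lines come from the NIC discovery pass in nvmf/common.sh: for each supported PCI function it globs the sysfs net/ directory to find the bound kernel interfaces. A minimal sketch of that lookup, using the two E810 ports reported above; the real code, per the @416 trace lines, also skips interfaces that are not up:

    # Map each PCI function to its kernel net devices via sysfs.
    for pci in 0000:31:00.0 0000:31:00.1; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
        pci_net_devs=("${pci_net_devs[@]##*/}")            # strip paths, keep names
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done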
00:06:53.493 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:06:53.493 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:06:53.493 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # is_hw=yes 00:06:53.493 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:06:53.493 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:06:53.493 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:06:53.493 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:53.493 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:53.493 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:53.493 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:53.493 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:53.493 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:53.493 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:53.493 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:53.493 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:53.493 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:53.493 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:53.493 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:53.493 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:53.493 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:53.493 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:53.493 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:53.493 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:53.493 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:53.493 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:53.493 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:53.493 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:53.493 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:53.493 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:53.493 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:53.493 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.607 ms 00:06:53.493 00:06:53.493 --- 10.0.0.2 ping statistics --- 00:06:53.493 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:53.493 rtt min/avg/max/mdev = 0.607/0.607/0.607/0.000 ms 00:06:53.493 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:53.493 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:53.493 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:06:53.493 00:06:53.493 --- 10.0.0.1 ping statistics --- 00:06:53.493 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:53.493 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:06:53.493 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:53.493 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # return 0 00:06:53.493 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:06:53.493 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:53.493 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:06:53.493 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:06:53.493 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:53.493 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:06:53.493 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:06:53.493 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:06:53.493 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:06:53.493 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:53.493 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:53.493 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # nvmfpid=1634342 00:06:53.493 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # waitforlisten 1634342 00:06:53.493 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:06:53.493 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 1634342 ']' 00:06:53.493 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.493 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:53.493 10:47:12 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.493 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:53.493 10:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:53.493 [2024-10-09 10:47:12.955344] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:06:53.493 [2024-10-09 10:47:12.955394] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:53.493 [2024-10-09 10:47:13.091847] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:53.493 [2024-10-09 10:47:13.123339] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:53.493 [2024-10-09 10:47:13.140229] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:53.493 [2024-10-09 10:47:13.140260] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:53.493 [2024-10-09 10:47:13.140268] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:53.493 [2024-10-09 10:47:13.140275] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:53.493 [2024-10-09 10:47:13.140281] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
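nvmf_tcp_init above turns the two ports into a self-contained point-to-point link: the target port is moved into a fresh network namespace while the initiator port stays in the root namespace, so both ends of the NVMe/TCP connection run on one machine with real hardware in the path. Condensed from the trace (the address flushes and the iptables comment tag are omitted):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
ping -c 1 10.0.0.2                                             # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target ns -> root ns

This is also why nvmf_tgt is launched through ip netns exec cvl_0_0_ns_spdk (pid 1634342 above): the target listens on 10.0.0.2:4420 inside the namespace, and every initiator-side tool connects to it from the root namespace.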
00:06:53.493 [2024-10-09 10:47:13.141455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:53.493 [2024-10-09 10:47:13.141456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.754 10:47:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:53.754 10:47:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:06:53.754 10:47:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:06:53.754 10:47:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:53.754 10:47:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:54.015 10:47:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:54.015 10:47:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:54.015 10:47:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.015 10:47:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:54.015 [2024-10-09 10:47:13.782350] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:54.015 10:47:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.015 10:47:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:54.015 10:47:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.015 10:47:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:54.015 10:47:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.015 10:47:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:54.015 10:47:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.015 10:47:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:54.015 [2024-10-09 10:47:13.798497] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:54.015 10:47:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.015 10:47:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:06:54.015 10:47:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.015 10:47:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:54.015 NULL1 00:06:54.015 10:47:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.015 10:47:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 
-w 1000000 -n 1000000 00:06:54.015 10:47:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.015 10:47:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:54.015 Delay0 00:06:54.015 10:47:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.015 10:47:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:54.015 10:47:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.015 10:47:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:54.015 10:47:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.015 10:47:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1634462 00:06:54.015 10:47:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:06:54.015 10:47:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:54.015 [2024-10-09 10:47:13.983120] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:06:55.931 10:47:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:55.931 10:47:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.931 10:47:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:56.192 Write completed with error (sct=0, sc=8) 00:06:56.192 Read completed with error (sct=0, sc=8) 00:06:56.192 Write completed with error (sct=0, sc=8) 00:06:56.192 Read completed with error (sct=0, sc=8) 00:06:56.192 starting I/O failed: -6 00:06:56.192 Read completed with error (sct=0, sc=8) 00:06:56.192 Write completed with error (sct=0, sc=8) 00:06:56.192 Read completed with error (sct=0, sc=8) 00:06:56.192 Read completed with error (sct=0, sc=8) 00:06:56.192 starting I/O failed: -6 00:06:56.192 Write completed with error (sct=0, sc=8) 00:06:56.192 Read completed with error (sct=0, sc=8) 00:06:56.192 Read completed with error (sct=0, sc=8) 00:06:56.192 Write completed with error (sct=0, sc=8) 00:06:56.192 starting I/O failed: -6 00:06:56.192 Read completed with error (sct=0, sc=8) 00:06:56.192 Read completed with error (sct=0, sc=8) 00:06:56.192 Write completed with error (sct=0, sc=8) 00:06:56.192 Read completed with error (sct=0, sc=8) 00:06:56.192 starting I/O failed: -6 00:06:56.192 Read completed with error (sct=0, sc=8) 00:06:56.192 Read completed with error (sct=0, sc=8) 00:06:56.192 Read completed with error (sct=0, sc=8) 00:06:56.192 Read completed with error (sct=0, sc=8) 00:06:56.192 starting I/O failed: -6 00:06:56.192 Write completed with error (sct=0, sc=8) 00:06:56.192 Read completed with error (sct=0, sc=8) 00:06:56.192 Read 
completed with error (sct=0, sc=8) 00:06:56.192 Read completed with error (sct=0, sc=8) 00:06:56.192 starting I/O failed: -6 00:06:56.192 Read completed with error (sct=0, sc=8) 00:06:56.192 Write completed with error (sct=0, sc=8) 00:06:56.192 Read completed with error (sct=0, sc=8) 00:06:56.192 Write completed with error (sct=0, sc=8) 00:06:56.192 starting I/O failed: -6 00:06:56.192 Read completed with error (sct=0, sc=8) 00:06:56.192 Read completed with error (sct=0, sc=8) 00:06:56.192 Read completed with error (sct=0, sc=8) 00:06:56.192 Read completed with error (sct=0, sc=8) 00:06:56.192 starting I/O failed: -6 00:06:56.192 Write completed with error (sct=0, sc=8) 00:06:56.192 Read completed with error (sct=0, sc=8) 00:06:56.192 Read completed with error (sct=0, sc=8) 00:06:56.192 Read completed with error (sct=0, sc=8) 00:06:56.192 starting I/O failed: -6 00:06:56.192 Read completed with error (sct=0, sc=8) 00:06:56.192 Write completed with error (sct=0, sc=8) 00:06:56.192 Read completed with error (sct=0, sc=8) 00:06:56.192 Write completed with error (sct=0, sc=8) 00:06:56.192 starting I/O failed: -6 00:06:56.192 Read completed with error (sct=0, sc=8) 00:06:56.192 Read completed with error (sct=0, sc=8) 00:06:56.192 Read completed with error (sct=0, sc=8) 00:06:56.192 Write completed with error (sct=0, sc=8) 00:06:56.192 starting I/O failed: -6 00:06:56.192 [2024-10-09 10:47:16.061176] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeaed30 is same with the state(6) to be set 00:06:56.192 starting I/O failed: -6 00:06:56.192 Read completed with error (sct=0, sc=8) 00:06:56.192 Read completed with error (sct=0, sc=8) 00:06:56.192 Write completed with error (sct=0, sc=8) 00:06:56.192 Read completed with error (sct=0, sc=8) 00:06:56.192 Read completed with error (sct=0, sc=8) 00:06:56.192 Write completed with error (sct=0, sc=8) 00:06:56.192 Read completed with error (sct=0, sc=8) 00:06:56.192 Write completed with error (sct=0, sc=8) 00:06:56.192 Write completed with error (sct=0, sc=8) 00:06:56.192 Read completed with error (sct=0, sc=8) 00:06:56.192 Read completed with error (sct=0, sc=8) 00:06:56.192 Write completed with error (sct=0, sc=8) 00:06:56.192 Read completed with error (sct=0, sc=8) 00:06:56.192 Read completed with error (sct=0, sc=8) 00:06:56.192 Read completed with error (sct=0, sc=8) 00:06:56.192 Write completed with error (sct=0, sc=8) 00:06:56.192 Read completed with error (sct=0, sc=8) 00:06:56.192 Read completed with error (sct=0, sc=8) 00:06:56.192 Write completed with error (sct=0, sc=8) 00:06:56.192 Write completed with error (sct=0, sc=8) 00:06:56.192 Read completed with error (sct=0, sc=8) 00:06:56.192 Read completed with error (sct=0, sc=8) 00:06:56.192 Read completed with error (sct=0, sc=8) 00:06:56.192 Write completed with error (sct=0, sc=8) 00:06:56.192 Read completed with error (sct=0, sc=8) 00:06:56.192 Read completed with error (sct=0, sc=8) 00:06:56.192 Write completed with error (sct=0, sc=8) 00:06:56.192 Read completed with error (sct=0, sc=8) 00:06:56.192 Write completed with error (sct=0, sc=8) 00:06:56.192 Write completed with error (sct=0, sc=8) 00:06:56.192 Write completed with error (sct=0, sc=8) 00:06:56.192 Read completed with error (sct=0, sc=8) 00:06:56.192 Read completed with error (sct=0, sc=8) 00:06:56.192 Read completed with error (sct=0, sc=8) 00:06:56.192 Read completed with error (sct=0, sc=8) 00:06:56.192 Read completed with error (sct=0, sc=8) 00:06:56.192 Read completed with error (sct=0, sc=8) 
00:06:56.192 Read completed with error (sct=0, sc=8) 00:06:56.192 Read completed with error (sct=0, sc=8) 00:06:56.192 Read completed with error (sct=0, sc=8) 00:06:56.192 Write completed with error (sct=0, sc=8) 00:06:56.192 Write completed with error (sct=0, sc=8) 00:06:56.192 Read completed with error (sct=0, sc=8) 00:06:56.192 Read completed with error (sct=0, sc=8) 00:06:56.192 Read completed with error (sct=0, sc=8) 00:06:56.192 Write completed with error (sct=0, sc=8) 00:06:56.192 Write completed with error (sct=0, sc=8) 00:06:56.192 Read completed with error (sct=0, sc=8) 00:06:56.192 Write completed with error (sct=0, sc=8) 00:06:56.192 Read completed with error (sct=0, sc=8) 00:06:56.192 Write completed with error (sct=0, sc=8) 00:06:56.192 Read completed with error (sct=0, sc=8) 00:06:56.192 Read completed with error (sct=0, sc=8) 00:06:56.192 Read completed with error (sct=0, sc=8) 00:06:56.192 Read completed with error (sct=0, sc=8) 00:06:56.192 Read completed with error (sct=0, sc=8) 00:06:56.192 Read completed with error (sct=0, sc=8) 00:06:56.192 Write completed with error (sct=0, sc=8) 00:06:56.192 Read completed with error (sct=0, sc=8) 00:06:56.192 Read completed with error (sct=0, sc=8) 00:06:56.192 starting I/O failed: -6 00:06:56.192 Read completed with error (sct=0, sc=8) 00:06:56.192 Read completed with error (sct=0, sc=8) 00:06:56.192 Read completed with error (sct=0, sc=8) 00:06:56.192 Read completed with error (sct=0, sc=8) 00:06:56.192 starting I/O failed: -6 00:06:56.192 Read completed with error (sct=0, sc=8) 00:06:56.192 Read completed with error (sct=0, sc=8) 00:06:56.192 Read completed with error (sct=0, sc=8) 00:06:56.192 Read completed with error (sct=0, sc=8) 00:06:56.192 starting I/O failed: -6 00:06:56.192 Read completed with error (sct=0, sc=8) 00:06:56.192 Write completed with error (sct=0, sc=8) 00:06:56.192 Read completed with error (sct=0, sc=8) 00:06:56.192 Read completed with error (sct=0, sc=8) 00:06:56.192 starting I/O failed: -6 00:06:56.192 Read completed with error (sct=0, sc=8) 00:06:56.192 Write completed with error (sct=0, sc=8) 00:06:56.192 Read completed with error (sct=0, sc=8) 00:06:56.192 Read completed with error (sct=0, sc=8) 00:06:56.192 starting I/O failed: -6 00:06:56.192 Write completed with error (sct=0, sc=8) 00:06:56.192 Read completed with error (sct=0, sc=8) 00:06:56.192 Read completed with error (sct=0, sc=8) 00:06:56.192 Read completed with error (sct=0, sc=8) 00:06:56.192 starting I/O failed: -6 00:06:56.192 Write completed with error (sct=0, sc=8) 00:06:56.193 Write completed with error (sct=0, sc=8) 00:06:56.193 Read completed with error (sct=0, sc=8) 00:06:56.193 Read completed with error (sct=0, sc=8) 00:06:56.193 starting I/O failed: -6 00:06:56.193 Read completed with error (sct=0, sc=8) 00:06:56.193 Read completed with error (sct=0, sc=8) 00:06:56.193 Read completed with error (sct=0, sc=8) 00:06:56.193 Read completed with error (sct=0, sc=8) 00:06:56.193 starting I/O failed: -6 00:06:56.193 Read completed with error (sct=0, sc=8) 00:06:56.193 Write completed with error (sct=0, sc=8) 00:06:56.193 Read completed with error (sct=0, sc=8) 00:06:56.193 Read completed with error (sct=0, sc=8) 00:06:56.193 starting I/O failed: -6 00:06:56.193 Write completed with error (sct=0, sc=8) 00:06:56.193 Read completed with error (sct=0, sc=8) 00:06:56.193 Write completed with error (sct=0, sc=8) 00:06:56.193 Read completed with error (sct=0, sc=8) 00:06:56.193 starting I/O failed: -6 00:06:56.193 Read completed with error 
(sct=0, sc=8) 00:06:56.193 Read completed with error (sct=0, sc=8) 00:06:56.193 Read completed with error (sct=0, sc=8) 00:06:56.193 [2024-10-09 10:47:16.067370] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7faa98000c00 is same with the state(6) to be set 00:06:56.193 Read completed with error (sct=0, sc=8) 00:06:56.193 Write completed with error (sct=0, sc=8) 00:06:56.193 Read completed with error (sct=0, sc=8) 00:06:56.193 Write completed with error (sct=0, sc=8) 00:06:56.193 Write completed with error (sct=0, sc=8) 00:06:56.193 Read completed with error (sct=0, sc=8) 00:06:56.193 Write completed with error (sct=0, sc=8) 00:06:56.193 Read completed with error (sct=0, sc=8) 00:06:56.193 Read completed with error (sct=0, sc=8) 00:06:56.193 Read completed with error (sct=0, sc=8) 00:06:56.193 Read completed with error (sct=0, sc=8) 00:06:56.193 Write completed with error (sct=0, sc=8) 00:06:56.193 Write completed with error (sct=0, sc=8) 00:06:56.193 Write completed with error (sct=0, sc=8) 00:06:56.193 Read completed with error (sct=0, sc=8) 00:06:56.193 Read completed with error (sct=0, sc=8) 00:06:56.193 Read completed with error (sct=0, sc=8) 00:06:56.193 Read completed with error (sct=0, sc=8) 00:06:56.193 Read completed with error (sct=0, sc=8) 00:06:56.193 Read completed with error (sct=0, sc=8) 00:06:56.193 Write completed with error (sct=0, sc=8) 00:06:56.193 Read completed with error (sct=0, sc=8) 00:06:56.193 Read completed with error (sct=0, sc=8) 00:06:56.193 Read completed with error (sct=0, sc=8) 00:06:56.193 Read completed with error (sct=0, sc=8) 00:06:56.193 Write completed with error (sct=0, sc=8) 00:06:56.193 Read completed with error (sct=0, sc=8) 00:06:56.193 Read completed with error (sct=0, sc=8) 00:06:56.193 Read completed with error (sct=0, sc=8) 00:06:56.193 Read completed with error (sct=0, sc=8) 00:06:56.193 Read completed with error (sct=0, sc=8) 00:06:56.193 Read completed with error (sct=0, sc=8) 00:06:56.193 Read completed with error (sct=0, sc=8) 00:06:56.193 Read completed with error (sct=0, sc=8) 00:06:56.193 Read completed with error (sct=0, sc=8) 00:06:56.193 Read completed with error (sct=0, sc=8) 00:06:56.193 Read completed with error (sct=0, sc=8) 00:06:56.193 Write completed with error (sct=0, sc=8) 00:06:56.193 Read completed with error (sct=0, sc=8) 00:06:56.193 Write completed with error (sct=0, sc=8) 00:06:56.193 Write completed with error (sct=0, sc=8) 00:06:56.193 Read completed with error (sct=0, sc=8) 00:06:56.193 Write completed with error (sct=0, sc=8) 00:06:56.193 Write completed with error (sct=0, sc=8) 00:06:56.193 Write completed with error (sct=0, sc=8) 00:06:56.193 Write completed with error (sct=0, sc=8) 00:06:56.193 Read completed with error (sct=0, sc=8) 00:06:56.193 Read completed with error (sct=0, sc=8) 00:06:56.193 Read completed with error (sct=0, sc=8) 00:06:56.193 Read completed with error (sct=0, sc=8) 00:06:56.193 Read completed with error (sct=0, sc=8) 00:06:56.193 Read completed with error (sct=0, sc=8) 00:06:56.193 Read completed with error (sct=0, sc=8) 00:06:57.134 [2024-10-09 10:47:17.039367] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3e20 is same with the state(6) to be set 00:06:57.134 Read completed with error (sct=0, sc=8) 00:06:57.134 Read completed with error (sct=0, sc=8) 00:06:57.134 Read completed with error (sct=0, sc=8) 00:06:57.134 Read completed with error (sct=0, sc=8) 00:06:57.134 Read completed with error (sct=0, sc=8) 
00:06:57.134 Read completed with error (sct=0, sc=8) 00:06:57.134 Read completed with error (sct=0, sc=8) 00:06:57.134 Read completed with error (sct=0, sc=8) 00:06:57.134 Read completed with error (sct=0, sc=8) 00:06:57.134 Read completed with error (sct=0, sc=8) 00:06:57.134 Read completed with error (sct=0, sc=8) 00:06:57.134 Read completed with error (sct=0, sc=8) 00:06:57.134 Read completed with error (sct=0, sc=8) 00:06:57.134 Read completed with error (sct=0, sc=8) 00:06:57.134 Read completed with error (sct=0, sc=8) 00:06:57.134 Read completed with error (sct=0, sc=8) 00:06:57.134 Write completed with error (sct=0, sc=8) 00:06:57.134 Read completed with error (sct=0, sc=8) 00:06:57.134 Read completed with error (sct=0, sc=8) 00:06:57.134 Read completed with error (sct=0, sc=8) 00:06:57.134 Read completed with error (sct=0, sc=8) 00:06:57.134 Read completed with error (sct=0, sc=8) 00:06:57.134 Read completed with error (sct=0, sc=8) 00:06:57.134 Write completed with error (sct=0, sc=8) 00:06:57.134 [2024-10-09 10:47:17.062157] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeaef10 is same with the state(6) to be set 00:06:57.134 Read completed with error (sct=0, sc=8) 00:06:57.134 Write completed with error (sct=0, sc=8) 00:06:57.134 Read completed with error (sct=0, sc=8) 00:06:57.134 Read completed with error (sct=0, sc=8) 00:06:57.134 Read completed with error (sct=0, sc=8) 00:06:57.134 Read completed with error (sct=0, sc=8) 00:06:57.134 Read completed with error (sct=0, sc=8) 00:06:57.134 Read completed with error (sct=0, sc=8) 00:06:57.134 Write completed with error (sct=0, sc=8) 00:06:57.134 Write completed with error (sct=0, sc=8) 00:06:57.134 Write completed with error (sct=0, sc=8) 00:06:57.134 Write completed with error (sct=0, sc=8) 00:06:57.134 Write completed with error (sct=0, sc=8) 00:06:57.134 Read completed with error (sct=0, sc=8) 00:06:57.134 Read completed with error (sct=0, sc=8) 00:06:57.134 Read completed with error (sct=0, sc=8) 00:06:57.134 Read completed with error (sct=0, sc=8) 00:06:57.134 Write completed with error (sct=0, sc=8) 00:06:57.134 Read completed with error (sct=0, sc=8) 00:06:57.134 Write completed with error (sct=0, sc=8) 00:06:57.135 Write completed with error (sct=0, sc=8) 00:06:57.135 Write completed with error (sct=0, sc=8) 00:06:57.135 Write completed with error (sct=0, sc=8) 00:06:57.135 [2024-10-09 10:47:17.062457] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeaf2d0 is same with the state(6) to be set 00:06:57.135 Read completed with error (sct=0, sc=8) 00:06:57.135 Write completed with error (sct=0, sc=8) 00:06:57.135 Write completed with error (sct=0, sc=8) 00:06:57.135 Read completed with error (sct=0, sc=8) 00:06:57.135 Write completed with error (sct=0, sc=8) 00:06:57.135 Read completed with error (sct=0, sc=8) 00:06:57.135 Write completed with error (sct=0, sc=8) 00:06:57.135 Write completed with error (sct=0, sc=8) 00:06:57.135 Write completed with error (sct=0, sc=8) 00:06:57.135 Write completed with error (sct=0, sc=8) 00:06:57.135 Write completed with error (sct=0, sc=8) 00:06:57.135 Write completed with error (sct=0, sc=8) 00:06:57.135 Read completed with error (sct=0, sc=8) 00:06:57.135 Read completed with error (sct=0, sc=8) 00:06:57.135 Read completed with error (sct=0, sc=8) 00:06:57.135 Write completed with error (sct=0, sc=8) 00:06:57.135 Write completed with error (sct=0, sc=8) 00:06:57.135 Read completed with error (sct=0, sc=8) 00:06:57.135 
Read completed with error (sct=0, sc=8) 00:06:57.135 Read completed with error (sct=0, sc=8) 00:06:57.135 Read completed with error (sct=0, sc=8) 00:06:57.135 Read completed with error (sct=0, sc=8) 00:06:57.135 Write completed with error (sct=0, sc=8) 00:06:57.135 Read completed with error (sct=0, sc=8) 00:06:57.135 Write completed with error (sct=0, sc=8) 00:06:57.135 Read completed with error (sct=0, sc=8) 00:06:57.135 [2024-10-09 10:47:17.066683] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7faa9800d640 is same with the state(6) to be set 00:06:57.135 Read completed with error (sct=0, sc=8) 00:06:57.135 Write completed with error (sct=0, sc=8) 00:06:57.135 Write completed with error (sct=0, sc=8) 00:06:57.135 Read completed with error (sct=0, sc=8) 00:06:57.135 Read completed with error (sct=0, sc=8) 00:06:57.135 Write completed with error (sct=0, sc=8) 00:06:57.135 Read completed with error (sct=0, sc=8) 00:06:57.135 Read completed with error (sct=0, sc=8) 00:06:57.135 Read completed with error (sct=0, sc=8) 00:06:57.135 Read completed with error (sct=0, sc=8) 00:06:57.135 Write completed with error (sct=0, sc=8) 00:06:57.135 Write completed with error (sct=0, sc=8) 00:06:57.135 Read completed with error (sct=0, sc=8) 00:06:57.135 Write completed with error (sct=0, sc=8) 00:06:57.135 Read completed with error (sct=0, sc=8) 00:06:57.135 Read completed with error (sct=0, sc=8) 00:06:57.135 Write completed with error (sct=0, sc=8) 00:06:57.135 Read completed with error (sct=0, sc=8) 00:06:57.135 Write completed with error (sct=0, sc=8) 00:06:57.135 Read completed with error (sct=0, sc=8) 00:06:57.135 Read completed with error (sct=0, sc=8) 00:06:57.135 Write completed with error (sct=0, sc=8) 00:06:57.135 Read completed with error (sct=0, sc=8) 00:06:57.135 Read completed with error (sct=0, sc=8) 00:06:57.135 Read completed with error (sct=0, sc=8) 00:06:57.135 Write completed with error (sct=0, sc=8) 00:06:57.135 Write completed with error (sct=0, sc=8) 00:06:57.135 Read completed with error (sct=0, sc=8) 00:06:57.135 Read completed with error (sct=0, sc=8) 00:06:57.135 [2024-10-09 10:47:17.067269] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7faa9800cfe0 is same with the state(6) to be set 00:06:57.135 Initializing NVMe Controllers 00:06:57.135 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:57.135 Controller IO queue size 128, less than required. 00:06:57.135 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:57.135 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:57.135 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:57.135 Initialization complete. Launching workers. 
00:06:57.135 ======================================================== 00:06:57.135 Latency(us) 00:06:57.135 Device Information : IOPS MiB/s Average min max 00:06:57.135 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 168.92 0.08 896556.84 214.70 1005562.48 00:06:57.135 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 164.44 0.08 947071.69 305.60 2002155.90 00:06:57.135 ======================================================== 00:06:57.135 Total : 333.36 0.16 921474.48 214.70 2002155.90 00:06:57.135 00:06:57.135 [2024-10-09 10:47:17.067692] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeb3e20 (9): Bad file descriptor 00:06:57.135 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:06:57.135 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.135 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:06:57.135 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1634462 00:06:57.135 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:06:57.706 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:06:57.706 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1634462 00:06:57.706 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1634462) - No such process 00:06:57.706 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1634462 00:06:57.706 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:06:57.706 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1634462 00:06:57.706 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:06:57.706 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:57.706 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:06:57.706 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:57.706 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 1634462 00:06:57.706 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:06:57.706 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:57.706 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:57.706 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:57.706 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:57.706 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.706 10:47:17 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:57.706 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.706 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:57.706 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.706 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:57.706 [2024-10-09 10:47:17.602185] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:57.706 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.706 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:57.706 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.706 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:57.706 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.706 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1635351 00:06:57.706 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:06:57.706 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:57.706 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1635351 00:06:57.706 10:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:57.967 [2024-10-09 10:47:17.775011] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
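The second pass now underway repeats the delete-under-load experiment with a shorter perf run (-t 3): only the subsystem is re-created around the surviving Delay0 bdev, perf is restarted, and the script polls kill -0 on the perf pid in a sleep loop while the subsystem is removed out from under it. In the first pass, every command still in flight when nvmf_delete_subsystem tore the queues down completed with (sct=0, sc=8) — NVMe generic status 0x08, "Command Aborted due to SQ Deletion" (SPDK's SPDK_NVME_SC_ABORTED_SQ_DELETION) — and further submissions failed with -6, i.e. -ENXIO, which is the outcome the test is checking for. For reference, the full target build-up from the first pass, condensed from the rpc_cmd traces into direct scripts/rpc.py calls (an equivalent rendering, not the rpc_cmd wrapper the suite actually uses):

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py bdev_null_create NULL1 1000 512     # 1000 MiB null bdev, 512 B blocks
scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
# start spdk_nvme_perf against 10.0.0.2:4420, then, while I/O is outstanding:
scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

The four 1000000 arguments to bdev_delay_create are the delay bdev's average/p99 read and write latencies in microseconds, so each I/O is held for roughly a second; with perf running at queue depth 128, that guarantees a full queue of outstanding commands whenever the delete lands.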
00:06:58.227 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:58.227 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1635351 00:06:58.227 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:58.798 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:58.798 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1635351 00:06:58.798 10:47:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:59.370 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:59.370 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1635351 00:06:59.370 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:59.941 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:59.941 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1635351 00:06:59.941 10:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:00.202 10:47:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:00.202 10:47:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1635351 00:07:00.202 10:47:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:00.774 10:47:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:00.774 10:47:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1635351 00:07:00.775 10:47:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:01.036 Initializing NVMe Controllers 00:07:01.036 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:01.036 Controller IO queue size 128, less than required. 00:07:01.036 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:01.036 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:01.036 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:01.036 Initialization complete. Launching workers. 
00:07:01.036 ======================================================== 00:07:01.036 Latency(us) 00:07:01.036 Device Information : IOPS MiB/s Average min max 00:07:01.036 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001856.85 1000037.10 1041277.63 00:07:01.036 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002653.14 999995.50 1008844.36 00:07:01.036 ======================================================== 00:07:01.036 Total : 256.00 0.12 1002255.00 999995.50 1041277.63 00:07:01.036 00:07:01.297 10:47:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:01.297 10:47:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1635351 00:07:01.297 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1635351) - No such process 00:07:01.297 10:47:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1635351 00:07:01.297 10:47:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:01.297 10:47:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:07:01.297 10:47:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # nvmfcleanup 00:07:01.297 10:47:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:07:01.297 10:47:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:01.297 10:47:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:07:01.297 10:47:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:01.297 10:47:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:01.297 rmmod nvme_tcp 00:07:01.297 rmmod nvme_fabrics 00:07:01.297 rmmod nvme_keyring 00:07:01.297 10:47:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:01.297 10:47:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:07:01.297 10:47:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:07:01.297 10:47:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@515 -- # '[' -n 1634342 ']' 00:07:01.297 10:47:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # killprocess 1634342 00:07:01.297 10:47:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 1634342 ']' 00:07:01.297 10:47:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 1634342 00:07:01.297 10:47:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:07:01.297 10:47:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:01.297 10:47:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1634342 00:07:01.557 10:47:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:01.557 10:47:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 
= sudo ']' 00:07:01.557 10:47:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1634342' 00:07:01.557 killing process with pid 1634342 00:07:01.557 10:47:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 1634342 00:07:01.557 10:47:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 1634342 00:07:01.557 10:47:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:07:01.557 10:47:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:07:01.557 10:47:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:07:01.557 10:47:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:07:01.557 10:47:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-save 00:07:01.557 10:47:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:07:01.557 10:47:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-restore 00:07:01.557 10:47:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:01.557 10:47:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:01.557 10:47:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:01.557 10:47:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:01.557 10:47:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:04.113 10:47:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:04.113 00:07:04.113 real 0m18.419s 00:07:04.113 user 0m30.492s 00:07:04.113 sys 0m6.935s 00:07:04.113 10:47:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:04.113 10:47:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:04.113 ************************************ 00:07:04.113 END TEST nvmf_delete_subsystem 00:07:04.113 ************************************ 00:07:04.113 10:47:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:04.113 10:47:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:04.113 10:47:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:04.113 10:47:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:04.113 ************************************ 00:07:04.113 START TEST nvmf_host_management 00:07:04.113 ************************************ 00:07:04.113 10:47:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:04.113 * Looking for test storage... 
00:07:04.113 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:04.113 10:47:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:04.113 10:47:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:07:04.113 10:47:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:04.113 10:47:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:04.113 10:47:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:04.113 10:47:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:04.113 10:47:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:04.113 10:47:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:07:04.113 10:47:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:07:04.113 10:47:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:07:04.113 10:47:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:07:04.113 10:47:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:07:04.113 10:47:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:07:04.113 10:47:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:07:04.113 10:47:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:04.114 10:47:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:07:04.114 10:47:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:07:04.114 10:47:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:04.114 10:47:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:04.114 10:47:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:07:04.114 10:47:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:07:04.114 10:47:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:04.114 10:47:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:07:04.114 10:47:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:07:04.114 10:47:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:07:04.114 10:47:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:07:04.114 10:47:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:04.114 10:47:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:07:04.114 10:47:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:07:04.114 10:47:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:04.114 10:47:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:04.114 10:47:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:07:04.114 10:47:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:04.114 10:47:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:04.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.114 --rc genhtml_branch_coverage=1 00:07:04.114 --rc genhtml_function_coverage=1 00:07:04.114 --rc genhtml_legend=1 00:07:04.114 --rc geninfo_all_blocks=1 00:07:04.114 --rc geninfo_unexecuted_blocks=1 00:07:04.114 00:07:04.114 ' 00:07:04.114 10:47:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:04.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.114 --rc genhtml_branch_coverage=1 00:07:04.114 --rc genhtml_function_coverage=1 00:07:04.114 --rc genhtml_legend=1 00:07:04.114 --rc geninfo_all_blocks=1 00:07:04.114 --rc geninfo_unexecuted_blocks=1 00:07:04.114 00:07:04.114 ' 00:07:04.114 10:47:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:04.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.114 --rc genhtml_branch_coverage=1 00:07:04.114 --rc genhtml_function_coverage=1 00:07:04.114 --rc genhtml_legend=1 00:07:04.114 --rc geninfo_all_blocks=1 00:07:04.114 --rc geninfo_unexecuted_blocks=1 00:07:04.114 00:07:04.114 ' 00:07:04.114 10:47:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:04.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.114 --rc genhtml_branch_coverage=1 00:07:04.114 --rc genhtml_function_coverage=1 00:07:04.114 --rc genhtml_legend=1 00:07:04.114 --rc geninfo_all_blocks=1 00:07:04.114 --rc geninfo_unexecuted_blocks=1 00:07:04.114 00:07:04.114 ' 00:07:04.114 10:47:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:04.114 10:47:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:04.114 10:47:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:04.114 10:47:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:04.114 10:47:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:04.114 10:47:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:04.114 10:47:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:04.114 10:47:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:04.114 10:47:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:04.114 10:47:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:04.114 10:47:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:04.114 10:47:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:04.114 10:47:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:04.114 10:47:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:04.114 10:47:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:04.114 10:47:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:04.114 10:47:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:04.114 10:47:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:04.114 10:47:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:04.114 10:47:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:07:04.114 10:47:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:04.114 10:47:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:04.114 10:47:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:04.114 10:47:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.114 10:47:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.114 10:47:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.114 10:47:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:04.114 10:47:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.114 10:47:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:07:04.114 10:47:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:04.114 10:47:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:04.114 10:47:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:04.114 10:47:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:04.114 10:47:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:04.114 10:47:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:07:04.114 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:04.114 10:47:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:04.114 10:47:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:04.114 10:47:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:04.114 10:47:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:04.114 10:47:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:04.114 10:47:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:04.114 10:47:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:07:04.114 10:47:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:04.114 10:47:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # prepare_net_devs 00:07:04.114 10:47:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@436 -- # local -g is_hw=no 00:07:04.114 10:47:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # remove_spdk_ns 00:07:04.114 10:47:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:04.114 10:47:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:04.114 10:47:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:04.114 10:47:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:07:04.114 10:47:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:07:04.115 10:47:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:07:04.115 10:47:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:12.363 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:12.363 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:07:12.363 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:12.363 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:12.363 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:12.363 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:12.363 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:12.363 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:07:12.363 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:12.363 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:07:12.363 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:07:12.363 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:07:12.363 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:07:12.363 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:07:12.363 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:07:12.363 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:12.363 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:12.363 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:12.363 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:12.363 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:12.363 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:12.363 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:12.363 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:12.363 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:12.363 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:12.363 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:12.364 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:12.364 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:12.364 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:12.364 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:12.364 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:12.364 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:12.364 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:12.364 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:12.364 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:12.364 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:12.364 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:12.364 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:12.364 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:12.364 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:12.364 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:12.364 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:12.364 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:12.364 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:12.364 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:12.364 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:12.364 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:12.364 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:12.364 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:12.364 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:12.364 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:12.364 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:12.364 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:12.364 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:12.364 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:12.364 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:12.364 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:12.364 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:12.364 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:12.364 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:12.364 Found net devices under 0000:31:00.0: cvl_0_0 00:07:12.364 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:12.364 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:12.364 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:12.364 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:12.364 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:12.364 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:12.364 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:12.364 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:12.364 10:47:31 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:12.364 Found net devices under 0000:31:00.1: cvl_0_1 00:07:12.364 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:12.364 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:07:12.364 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # is_hw=yes 00:07:12.364 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:07:12.364 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:07:12.364 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:07:12.364 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:12.364 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:12.364 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:12.364 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:12.364 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:12.364 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:12.364 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:12.364 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:12.364 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:12.364 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:12.364 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:12.364 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:12.364 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:12.364 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:12.364 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:12.364 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:12.364 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:12.364 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:12.364 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:12.364 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:12.364 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:12.364 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:12.364 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:12.364 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:12.364 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.615 ms 00:07:12.364 00:07:12.364 --- 10.0.0.2 ping statistics --- 00:07:12.364 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:12.364 rtt min/avg/max/mdev = 0.615/0.615/0.615/0.000 ms 00:07:12.364 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:12.364 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:12.364 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.291 ms 00:07:12.364 00:07:12.364 --- 10.0.0.1 ping statistics --- 00:07:12.364 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:12.364 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:07:12.364 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:12.364 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # return 0 00:07:12.364 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:07:12.364 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:12.364 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:07:12.364 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:07:12.364 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:12.364 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:07:12.364 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:07:12.364 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:12.364 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:12.364 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:12.364 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:07:12.364 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:12.364 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:12.364 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # nvmfpid=1640458 00:07:12.364 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # waitforlisten 1640458 00:07:12.364 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:12.364 10:47:31 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 1640458 ']' 00:07:12.364 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:12.364 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:12.364 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:12.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:12.364 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:12.364 10:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:12.364 [2024-10-09 10:47:31.503955] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:07:12.364 [2024-10-09 10:47:31.504024] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:12.364 [2024-10-09 10:47:31.644914] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:12.364 [2024-10-09 10:47:31.692374] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:12.365 [2024-10-09 10:47:31.712220] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:12.365 [2024-10-09 10:47:31.712252] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:12.365 [2024-10-09 10:47:31.712260] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:12.365 [2024-10-09 10:47:31.712266] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:12.365 [2024-10-09 10:47:31.712272] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
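Before this point the nvmftestinit helper isolated the target NIC port in its own network namespace, so initiator and target traffic cross a real link even on a single host. A minimal sketch of that plumbing, using the interface names (cvl_0_0, cvl_0_1) and 10.0.0.0/24 addresses taken from this run (they differ on other rigs):

    ip netns add cvl_0_0_ns_spdk                                   # namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator port stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP listener port
    ping -c 1 10.0.0.2                                             # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target ns -> root ns

nvmf_tgt is then started inside the namespace (the 'ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0x1E' invocation above), so its TCP listener binds in the target namespace while bdevperf later connects from the root namespace.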
00:07:12.365 [2024-10-09 10:47:31.714087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:12.365 [2024-10-09 10:47:31.714244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:12.365 [2024-10-09 10:47:31.714391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:12.365 [2024-10-09 10:47:31.714391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:12.365 10:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:12.365 10:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:07:12.365 10:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:07:12.365 10:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:12.365 10:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:12.365 10:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:12.365 10:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:12.365 10:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.365 10:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:12.365 [2024-10-09 10:47:32.356512] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:12.625 10:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.625 10:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:12.625 10:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:12.625 10:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:12.625 10:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:12.625 10:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:12.625 10:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:12.625 10:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.625 10:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:12.625 Malloc0 00:07:12.625 [2024-10-09 10:47:32.427683] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:12.625 10:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.625 10:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:12.625 10:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:12.625 10:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:12.625 10:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=1640524 00:07:12.625 10:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1640524 /var/tmp/bdevperf.sock 00:07:12.625 10:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 1640524 ']' 00:07:12.625 10:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:12.625 10:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:12.625 10:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:12.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:12.625 10:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:12.625 10:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:12.625 10:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:12.625 10:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:12.625 10:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:07:12.625 10:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:07:12.625 10:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:07:12.625 10:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:07:12.625 { 00:07:12.625 "params": { 00:07:12.625 "name": "Nvme$subsystem", 00:07:12.625 "trtype": "$TEST_TRANSPORT", 00:07:12.625 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:12.625 "adrfam": "ipv4", 00:07:12.625 "trsvcid": "$NVMF_PORT", 00:07:12.625 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:12.625 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:12.625 "hdgst": ${hdgst:-false}, 00:07:12.625 "ddgst": ${ddgst:-false} 00:07:12.626 }, 00:07:12.626 "method": "bdev_nvme_attach_controller" 00:07:12.626 } 00:07:12.626 EOF 00:07:12.626 )") 00:07:12.626 10:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:07:12.626 10:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:07:12.626 10:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:07:12.626 10:47:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:07:12.626 "params": { 00:07:12.626 "name": "Nvme0", 00:07:12.626 "trtype": "tcp", 00:07:12.626 "traddr": "10.0.0.2", 00:07:12.626 "adrfam": "ipv4", 00:07:12.626 "trsvcid": "4420", 00:07:12.626 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:12.626 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:12.626 "hdgst": false, 00:07:12.626 "ddgst": false 00:07:12.626 }, 00:07:12.626 "method": "bdev_nvme_attach_controller" 00:07:12.626 }' 00:07:12.626 [2024-10-09 10:47:32.530825] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 
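The --json /dev/fd/63 argument feeds bdevperf the config rendered by gen_nvmf_target_json, whose params block is printed just above. Wrapped in SPDK's usual subsystems layout (the outer wrapper below is reconstructed from that convention; only the inner object is printed in this log), the config bdevperf consumes looks roughly like:

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              },
              "method": "bdev_nvme_attach_controller"
            }
          ]
        }
      ]
    }

This attaches controller Nvme0 over NVMe/TCP at startup, exposing namespace 1 as bdev Nvme0n1, the device the verify workload (-q 64 -o 65536 -w verify -t 10) then exercises.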
00:07:12.626 [2024-10-09 10:47:32.530878] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1640524 ] 00:07:12.887 [2024-10-09 10:47:32.661654] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:12.887 [2024-10-09 10:47:32.693558] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.887 [2024-10-09 10:47:32.711863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.149 Running I/O for 10 seconds... 00:07:13.411 10:47:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:13.411 10:47:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:07:13.411 10:47:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:13.411 10:47:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.411 10:47:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:13.411 10:47:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.411 10:47:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:13.411 10:47:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:13.411 10:47:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:13.411 10:47:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:13.411 10:47:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:13.411 10:47:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:13.411 10:47:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:13.411 10:47:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:13.411 10:47:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:13.411 10:47:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:13.411 10:47:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.411 10:47:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:13.411 10:47:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.411 10:47:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=591 00:07:13.411 10:47:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 591 -ge 100 ']' 00:07:13.411 10:47:33 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:13.411 10:47:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:13.411 10:47:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:13.411 10:47:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:13.411 10:47:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.411 10:47:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:13.411
[2024-10-09 10:47:33.400560] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2448240 is same with the state(6) to be set
[... identical tcp.c:1773 recv-state error repeated ~50 more times, last at 10:47:33.400930, while the host's qpair was being disconnected ...]
[2024-10-09 10:47:33.403080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-10-09 10:47:33.403117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 63 further command/completion pairs in the same pattern, 10:47:33.403134 through 10:47:33.404202: READ sqid:1 cid:15-63 (lba 83840-89984) and WRITE sqid:1 cid:0-13 (lba 90112-91776), each completed ABORTED - SQ DELETION (00/08) ...]
[2024-10-09 10:47:33.404260]
bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xfb09a0 was disconnected and freed. reset controller. 00:07:13.414 [2024-10-09 10:47:33.405481] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:07:13.414 10:47:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.414 10:47:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:13.414 10:47:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.414 10:47:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:13.414 task offset: 83712 on job bdev=Nvme0n1 fails 00:07:13.414 00:07:13.414 Latency(us) 00:07:13.414 [2024-10-09T08:47:33.416Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:13.414 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:13.414 Job: Nvme0n1 ended in about 0.43 seconds with error 00:07:13.414 Verification LBA range: start 0x0 length 0x400 00:07:13.414 Nvme0n1 : 0.43 1515.66 94.73 148.32 0.00 37301.76 1566.96 34815.32 00:07:13.414 [2024-10-09T08:47:33.416Z] =================================================================================================================== 00:07:13.414 [2024-10-09T08:47:33.416Z] Total : 1515.66 94.73 148.32 0.00 37301.76 1566.96 34815.32 00:07:13.414 [2024-10-09 10:47:33.407515] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:13.414 [2024-10-09 10:47:33.407540] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd97c40 (9): Bad file descriptor 00:07:13.675 10:47:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.675 10:47:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:13.675 [2024-10-09 10:47:33.551691] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
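The aborted READ/WRITE completions above are the expected fallout of the host-management test restarting the target mid-I/O: deleting the submission queue completes every queued command with ABORTED - SQ DELETION, bdev_nvme then frees the qpair and resets the controller, and the script re-authorizes the host on the subsystem. A minimal standalone equivalent of that rpc_cmd call, assuming the default RPC socket /var/tmp/spdk.sock:

    # re-allow host0 on cnode0 once the target is back up (default RPC socket assumed)
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0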
00:07:14.617 10:47:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1640524 00:07:14.617 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1640524) - No such process 00:07:14.617 10:47:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:14.617 10:47:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:14.617 10:47:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:14.617 10:47:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:14.617 10:47:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:07:14.617 10:47:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:07:14.617 10:47:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:07:14.617 10:47:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:07:14.617 { 00:07:14.617 "params": { 00:07:14.617 "name": "Nvme$subsystem", 00:07:14.617 "trtype": "$TEST_TRANSPORT", 00:07:14.617 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:14.617 "adrfam": "ipv4", 00:07:14.617 "trsvcid": "$NVMF_PORT", 00:07:14.617 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:14.617 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:14.617 "hdgst": ${hdgst:-false}, 00:07:14.617 "ddgst": ${ddgst:-false} 00:07:14.617 }, 00:07:14.617 "method": "bdev_nvme_attach_controller" 00:07:14.617 } 00:07:14.617 EOF 00:07:14.617 )") 00:07:14.617 10:47:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:07:14.617 10:47:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:07:14.617 10:47:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:07:14.617 10:47:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:07:14.617 "params": { 00:07:14.617 "name": "Nvme0", 00:07:14.617 "trtype": "tcp", 00:07:14.617 "traddr": "10.0.0.2", 00:07:14.617 "adrfam": "ipv4", 00:07:14.617 "trsvcid": "4420", 00:07:14.617 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:14.617 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:14.617 "hdgst": false, 00:07:14.617 "ddgst": false 00:07:14.617 }, 00:07:14.617 "method": "bdev_nvme_attach_controller" 00:07:14.617 }' 00:07:14.617 [2024-10-09 10:47:34.485796] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:07:14.617 [2024-10-09 10:47:34.485861] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1640992 ] 00:07:14.617 [2024-10-09 10:47:34.617547] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
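gen_nvmf_target_json renders the bdev_nvme_attach_controller entry printed above, and bdevperf consumes it through the /dev/fd/62 descriptor. A sketch of the same run driven from an ordinary file, assuming the generated entry sits inside the standard SPDK "subsystems"/"config" JSON wrapper (the /tmp path is hypothetical; queue depth, IO size, workload and runtime match the logged invocation):

    # write the config bdevperf would otherwise read from /dev/fd/62
    cat > /tmp/bdevperf.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    build/examples/bdevperf --json /tmp/bdevperf.json -q 64 -o 65536 -w verify -t 1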
00:07:14.877 [2024-10-09 10:47:34.649178] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.877 [2024-10-09 10:47:34.666363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.877 Running I/O for 1 seconds... 00:07:16.258 1545.00 IOPS, 96.56 MiB/s 00:07:16.258 Latency(us) 00:07:16.258 [2024-10-09T08:47:36.260Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:16.258 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:16.258 Verification LBA range: start 0x0 length 0x400 00:07:16.258 Nvme0n1 : 1.04 1600.00 100.00 0.00 0.00 39287.41 6377.33 33720.49 00:07:16.258 [2024-10-09T08:47:36.260Z] =================================================================================================================== 00:07:16.258 [2024-10-09T08:47:36.260Z] Total : 1600.00 100.00 0.00 0.00 39287.41 6377.33 33720.49 00:07:16.258 10:47:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:16.258 10:47:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:16.258 10:47:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:07:16.258 10:47:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:16.258 10:47:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:16.258 10:47:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@514 -- # nvmfcleanup 00:07:16.258 10:47:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:07:16.258 10:47:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:16.258 10:47:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:07:16.258 10:47:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:16.258 10:47:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:16.258 rmmod nvme_tcp 00:07:16.258 rmmod nvme_fabrics 00:07:16.258 rmmod nvme_keyring 00:07:16.258 10:47:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:16.258 10:47:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:07:16.258 10:47:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:07:16.258 10:47:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@515 -- # '[' -n 1640458 ']' 00:07:16.258 10:47:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # killprocess 1640458 00:07:16.258 10:47:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 1640458 ']' 00:07:16.258 10:47:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 1640458 00:07:16.258 10:47:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:07:16.258 10:47:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:16.258 10:47:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1640458 00:07:16.258 10:47:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:16.258 10:47:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:16.258 10:47:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1640458' 00:07:16.258 killing process with pid 1640458 00:07:16.258 10:47:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 1640458 00:07:16.258 10:47:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 1640458 00:07:16.258 [2024-10-09 10:47:36.188774] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:16.258 10:47:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:07:16.258 10:47:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:07:16.258 10:47:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:07:16.258 10:47:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:07:16.258 10:47:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:07:16.258 10:47:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-save 00:07:16.258 10:47:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-restore 00:07:16.258 10:47:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:16.258 10:47:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:16.258 10:47:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:16.258 10:47:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:16.258 10:47:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:18.803 10:47:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:18.803 10:47:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:18.803 00:07:18.803 real 0m14.716s 00:07:18.803 user 0m22.620s 00:07:18.803 sys 0m6.738s 00:07:18.803 10:47:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:18.803 10:47:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:18.803 ************************************ 00:07:18.803 END TEST nvmf_host_management 00:07:18.803 ************************************ 00:07:18.803 10:47:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:18.803 10:47:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:18.803 10:47:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:18.803 10:47:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set 
+x 00:07:18.803 ************************************ 00:07:18.803 START TEST nvmf_lvol 00:07:18.803 ************************************ 00:07:18.803 10:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:18.803 * Looking for test storage... 00:07:18.803 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:18.803 10:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:18.803 10:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:07:18.803 10:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:18.803 10:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:18.803 10:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:18.803 10:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:18.803 10:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:18.803 10:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:07:18.803 10:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:07:18.803 10:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:07:18.803 10:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:07:18.803 10:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:07:18.803 10:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:07:18.803 10:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:07:18.803 10:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:18.803 10:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:07:18.803 10:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:07:18.803 10:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:18.803 10:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:18.803 10:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:07:18.803 10:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:07:18.803 10:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:18.803 10:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:07:18.803 10:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:07:18.803 10:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:07:18.803 10:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:07:18.803 10:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:18.803 10:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:07:18.803 10:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:07:18.803 10:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:18.803 10:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:18.803 10:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:07:18.803 10:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:18.803 10:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:18.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.803 --rc genhtml_branch_coverage=1 00:07:18.803 --rc genhtml_function_coverage=1 00:07:18.803 --rc genhtml_legend=1 00:07:18.803 --rc geninfo_all_blocks=1 00:07:18.803 --rc geninfo_unexecuted_blocks=1 00:07:18.803 00:07:18.803 ' 00:07:18.803 10:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:18.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.803 --rc genhtml_branch_coverage=1 00:07:18.803 --rc genhtml_function_coverage=1 00:07:18.803 --rc genhtml_legend=1 00:07:18.803 --rc geninfo_all_blocks=1 00:07:18.803 --rc geninfo_unexecuted_blocks=1 00:07:18.803 00:07:18.803 ' 00:07:18.803 10:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:18.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.803 --rc genhtml_branch_coverage=1 00:07:18.803 --rc genhtml_function_coverage=1 00:07:18.803 --rc genhtml_legend=1 00:07:18.803 --rc geninfo_all_blocks=1 00:07:18.803 --rc geninfo_unexecuted_blocks=1 00:07:18.803 00:07:18.803 ' 00:07:18.803 10:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:18.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.803 --rc genhtml_branch_coverage=1 00:07:18.803 --rc genhtml_function_coverage=1 00:07:18.803 --rc genhtml_legend=1 00:07:18.803 --rc geninfo_all_blocks=1 00:07:18.803 --rc geninfo_unexecuted_blocks=1 00:07:18.804 00:07:18.804 ' 00:07:18.804 10:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:18.804 10:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:18.804 10:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
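The cmp_versions xtrace above is the coverage harness checking whether the installed lcov predates 2.x (lt 1.15 2): both version strings are split on the '.-:' IFS set and compared numerically field by field, with a missing field counting as zero. A minimal re-implementation under those assumptions (purely numeric fields; the helper name is hypothetical):

    # version_lt A B -> exit 0 when A < B; fields split on '.', '-' and ':', missing fields read as 0
    version_lt() {
        local IFS='.-:' i
        local -a a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "lcov is older than 2.x"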
00:07:18.804 10:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:18.804 10:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:18.804 10:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:18.804 10:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:18.804 10:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:18.804 10:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:18.804 10:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:18.804 10:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:18.804 10:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:18.804 10:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:18.804 10:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:18.804 10:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:18.804 10:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:18.804 10:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:18.804 10:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:18.804 10:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:18.804 10:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:07:18.804 10:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:18.804 10:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:18.804 10:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:18.804 10:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.804 10:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.804 10:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.804 10:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:18.804 10:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.804 10:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:07:18.804 10:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:18.804 10:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:18.804 10:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:18.804 10:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:18.804 10:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:18.804 10:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:18.804 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:18.804 10:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:18.804 10:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:18.804 10:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:18.804 10:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:18.804 10:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:18.804 10:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:07:18.804 10:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:18.804 10:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:18.804 10:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:18.804 10:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:07:18.804 10:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:18.804 10:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # prepare_net_devs 00:07:18.804 10:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@436 -- # local -g is_hw=no 00:07:18.804 10:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # remove_spdk_ns 00:07:18.804 10:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:18.804 10:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:18.804 10:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:18.804 10:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:07:18.804 10:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:07:18.804 10:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:07:18.804 10:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:27.077 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:27.077 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:07:27.077 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:27.077 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:27.077 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:27.078 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:27.078 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:27.078 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:07:27.078 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:27.078 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:07:27.078 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:07:27.078 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:07:27.078 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:07:27.078 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:07:27.078 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:07:27.078 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:27.078 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:27.078 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:27.078 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:27.078 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:27.078 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:27.078 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:27.078 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:27.078 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:27.078 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:27.078 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:27.078 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:27.078 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:27.078 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:27.078 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:27.078 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:27.078 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:27.078 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:27.078 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:27.078 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:27.078 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:27.078 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:27.078 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:27.078 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:27.078 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:27.078 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:27.078 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:27.078 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:27.078 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:27.078 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:27.078 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:27.078 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:27.078 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:27.078 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:27.078 10:47:45 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:27.078 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:27.078 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:27.078 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:27.078 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:27.078 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:27.078 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:27.078 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:27.078 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:27.078 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:27.078 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:27.078 Found net devices under 0000:31:00.0: cvl_0_0 00:07:27.078 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:27.078 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:27.078 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:27.078 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:27.078 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:27.078 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:27.078 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:27.078 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:27.078 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:27.078 Found net devices under 0000:31:00.1: cvl_0_1 00:07:27.078 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:27.078 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:07:27.078 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # is_hw=yes 00:07:27.078 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:07:27.078 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:07:27.078 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:07:27.078 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:27.078 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:27.078 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:27.078 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:27.078 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:07:27.078 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:27.078 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:27.078 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:27.078 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:27.078 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:27.078 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:27.078 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:27.078 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:27.078 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:27.078 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:27.078 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:27.078 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:27.078 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:27.078 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:27.078 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:27.078 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:27.078 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:27.078 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:27.078 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:27.078 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.573 ms 00:07:27.078 00:07:27.078 --- 10.0.0.2 ping statistics --- 00:07:27.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:27.078 rtt min/avg/max/mdev = 0.573/0.573/0.573/0.000 ms 00:07:27.078 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:27.079 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:27.079 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms 00:07:27.079 00:07:27.079 --- 10.0.0.1 ping statistics --- 00:07:27.079 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:27.079 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:07:27.079 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:27.079 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # return 0 00:07:27.079 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:07:27.079 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:27.079 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:07:27.079 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:07:27.079 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:27.079 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:07:27.079 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:07:27.079 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:27.079 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:07:27.079 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:27.079 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:27.079 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # nvmfpid=1645620 00:07:27.079 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # waitforlisten 1645620 00:07:27.079 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:27.079 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 1645620 ']' 00:07:27.079 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:27.079 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:27.079 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:27.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:27.079 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:27.079 10:47:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:27.079 [2024-10-09 10:47:46.013591] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:07:27.079 [2024-10-09 10:47:46.013641] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:27.079 [2024-10-09 10:47:46.150479] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
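With the ping verified in both directions, nvmftestinit has finished splitting the two e810 ports: cvl_0_0 (10.0.0.2) now lives in the cvl_0_0_ns_spdk namespace as the target-side interface, cvl_0_1 (10.0.0.1) stays in the root namespace for the initiator, and TCP/4420 is opened in iptables before nvmf_tgt is launched inside the namespace. Condensed from the commands logged above (interface names are whatever this machine's NIC discovery produced):

    # target side lives in its own netns; initiator stays in the root namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> root ns
    # every later target-side command runs as: ip netns exec cvl_0_0_ns_spdk ...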
00:07:27.079 [2024-10-09 10:47:46.181871] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:27.079 [2024-10-09 10:47:46.198924] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:27.079 [2024-10-09 10:47:46.198953] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:27.079 [2024-10-09 10:47:46.198961] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:27.079 [2024-10-09 10:47:46.198968] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:27.079 [2024-10-09 10:47:46.198974] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:27.079 [2024-10-09 10:47:46.200289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:27.079 [2024-10-09 10:47:46.200402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:27.079 [2024-10-09 10:47:46.200404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.079 10:47:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:27.079 10:47:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:07:27.079 10:47:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:07:27.079 10:47:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:27.079 10:47:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:27.079 10:47:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:27.079 10:47:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:27.079 [2024-10-09 10:47:47.005687] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:27.079 10:47:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:27.339 10:47:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:27.339 10:47:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:27.599 10:47:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:27.599 10:47:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:27.860 10:47:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:27.860 10:47:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=541ed5c7-ff86-4030-8081-544303fa6144 00:07:27.860 10:47:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 541ed5c7-ff86-4030-8081-544303fa6144 lvol 20 00:07:28.121 10:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # 
lvol=138f9b48-61bc-45f3-93c1-e701e2f6f1a0 00:07:28.121 10:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:28.381 10:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 138f9b48-61bc-45f3-93c1-e701e2f6f1a0 00:07:28.382 10:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:28.641 [2024-10-09 10:47:48.481709] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:28.641 10:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:28.901 10:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1646321 00:07:28.901 10:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:28.901 10:47:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:29.839 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 138f9b48-61bc-45f3-93c1-e701e2f6f1a0 MY_SNAPSHOT 00:07:30.099 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=9defecbb-feba-4c3a-b62e-74b355cc5b51 00:07:30.099 10:47:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 138f9b48-61bc-45f3-93c1-e701e2f6f1a0 30 00:07:30.359 10:47:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 9defecbb-feba-4c3a-b62e-74b355cc5b51 MY_CLONE 00:07:30.619 10:47:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=7f37b042-626a-4925-a5e7-5fc937b21cc5 00:07:30.619 10:47:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 7f37b042-626a-4925-a5e7-5fc937b21cc5 00:07:30.880 10:47:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1646321 00:07:40.882 Initializing NVMe Controllers 00:07:40.882 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:40.882 Controller IO queue size 128, less than required. 00:07:40.882 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:40.882 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:40.882 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:40.882 Initialization complete. Launching workers. 
00:07:40.882 ======================================================== 00:07:40.882 Latency(us) 00:07:40.882 Device Information : IOPS MiB/s Average min max 00:07:40.882 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12184.20 47.59 10509.07 1610.73 64183.90 00:07:40.882 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 17374.90 67.87 7367.18 458.43 55394.82 00:07:40.882 ======================================================== 00:07:40.882 Total : 29559.10 115.47 8662.26 458.43 64183.90 00:07:40.882 00:07:40.882 10:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:40.882 10:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 138f9b48-61bc-45f3-93c1-e701e2f6f1a0 00:07:40.882 10:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 541ed5c7-ff86-4030-8081-544303fa6144 00:07:40.882 10:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:40.882 10:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:40.882 10:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:40.882 10:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@514 -- # nvmfcleanup 00:07:40.882 10:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:07:40.882 10:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:40.882 10:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:07:40.882 10:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:40.882 10:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:40.882 rmmod nvme_tcp 00:07:40.882 rmmod nvme_fabrics 00:07:40.882 rmmod nvme_keyring 00:07:40.882 10:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:40.882 10:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:07:40.882 10:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:07:40.882 10:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@515 -- # '[' -n 1645620 ']' 00:07:40.882 10:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # killprocess 1645620 00:07:40.882 10:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 1645620 ']' 00:07:40.882 10:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 1645620 00:07:40.882 10:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:07:40.882 10:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:40.882 10:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1645620 00:07:40.882 10:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:40.882 10:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:40.882 10:47:59 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1645620' 00:07:40.882 killing process with pid 1645620 00:07:40.882 10:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 1645620 00:07:40.882 10:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 1645620 00:07:40.882 10:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:07:40.882 10:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:07:40.882 10:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:07:40.882 10:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:07:40.882 10:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:07:40.882 10:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-save 00:07:40.882 10:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-restore 00:07:40.882 10:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:40.882 10:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:40.882 10:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:40.882 10:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:40.882 10:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:42.296 10:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:42.296 00:07:42.296 real 0m23.660s 00:07:42.296 user 1m4.272s 00:07:42.296 sys 0m8.221s 00:07:42.296 10:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:42.296 10:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:42.296 ************************************ 00:07:42.296 END TEST nvmf_lvol 00:07:42.296 ************************************ 00:07:42.296 10:48:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:42.296 10:48:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:42.296 10:48:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:42.296 10:48:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:42.296 ************************************ 00:07:42.296 START TEST nvmf_lvs_grow 00:07:42.296 ************************************ 00:07:42.296 10:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:42.296 * Looking for test storage... 
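(Annotation: the nvmf_lvol test that just finished exercised the logical-volume snapshot/clone path while the perf job above was still writing. A minimal sketch of the rpc.py sequence visible in the trace, assuming the SPDK target is already up; $LVOL, $SNAP and $CLONE are hypothetical placeholders for the UUIDs the harness captured, and the rpc.py path is shortened:)

    RPC=./scripts/rpc.py
    SNAP=$($RPC bdev_lvol_snapshot "$LVOL" MY_SNAPSHOT)   # read-only point-in-time copy
    $RPC bdev_lvol_resize "$LVOL" 30                      # grow the live origin under I/O
    CLONE=$($RPC bdev_lvol_clone "$SNAP" MY_CLONE)        # thin clone backed by the snapshot
    $RPC bdev_lvol_inflate "$CLONE"                       # allocate every cluster so the clone stands alone

The teardown at nvmf_lvol.sh@56-@58 then removes the subsystem, the lvol and the lvstore in that order, the reverse of how they were created.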
00:07:42.296 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:42.296 10:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:42.296 10:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:07:42.296 10:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:42.559 10:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:42.559 10:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:42.559 10:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:42.559 10:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:42.559 10:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:07:42.559 10:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:07:42.559 10:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:07:42.559 10:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:07:42.559 10:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:07:42.559 10:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:07:42.559 10:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:07:42.559 10:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:42.559 10:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:07:42.559 10:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:07:42.559 10:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:42.559 10:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:42.559 10:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:07:42.559 10:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:07:42.559 10:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:42.559 10:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:07:42.559 10:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:07:42.559 10:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:07:42.559 10:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:07:42.559 10:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:42.559 10:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:07:42.559 10:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:07:42.559 10:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:42.559 10:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:42.559 10:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:07:42.559 10:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:42.559 10:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:42.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.559 --rc genhtml_branch_coverage=1 00:07:42.559 --rc genhtml_function_coverage=1 00:07:42.559 --rc genhtml_legend=1 00:07:42.559 --rc geninfo_all_blocks=1 00:07:42.559 --rc geninfo_unexecuted_blocks=1 00:07:42.559 00:07:42.559 ' 00:07:42.559 10:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:42.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.559 --rc genhtml_branch_coverage=1 00:07:42.559 --rc genhtml_function_coverage=1 00:07:42.559 --rc genhtml_legend=1 00:07:42.559 --rc geninfo_all_blocks=1 00:07:42.559 --rc geninfo_unexecuted_blocks=1 00:07:42.559 00:07:42.559 ' 00:07:42.559 10:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:42.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.559 --rc genhtml_branch_coverage=1 00:07:42.559 --rc genhtml_function_coverage=1 00:07:42.559 --rc genhtml_legend=1 00:07:42.559 --rc geninfo_all_blocks=1 00:07:42.559 --rc geninfo_unexecuted_blocks=1 00:07:42.559 00:07:42.559 ' 00:07:42.559 10:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:42.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.559 --rc genhtml_branch_coverage=1 00:07:42.559 --rc genhtml_function_coverage=1 00:07:42.559 --rc genhtml_legend=1 00:07:42.559 --rc geninfo_all_blocks=1 00:07:42.559 --rc geninfo_unexecuted_blocks=1 00:07:42.559 00:07:42.559 ' 00:07:42.559 10:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:42.559 10:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:42.559 10:48:02 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:42.559 10:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:42.559 10:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:42.559 10:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:42.559 10:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:42.559 10:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:42.559 10:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:42.559 10:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:42.559 10:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:42.559 10:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:42.559 10:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:42.559 10:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:42.559 10:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:42.559 10:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:42.559 10:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:42.559 10:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:42.559 10:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:42.559 10:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:07:42.560 10:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:42.560 10:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:42.560 10:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:42.560 10:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.560 10:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.560 10:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.560 10:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:42.560 10:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.560 10:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:07:42.560 10:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:42.560 10:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:42.560 10:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:42.560 10:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:42.560 10:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:42.560 10:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:42.560 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:42.560 10:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:42.560 10:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:42.560 10:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:42.560 10:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:42.560 10:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:42.560 10:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:42.560 10:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:07:42.560 10:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:42.560 10:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # prepare_net_devs 00:07:42.560 10:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@436 -- # local -g is_hw=no 00:07:42.560 10:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # remove_spdk_ns 00:07:42.560 10:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:42.560 10:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:42.560 10:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:42.560 10:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:07:42.560 10:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:07:42.560 10:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:07:42.560 10:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:50.698 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:50.698 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:07:50.698 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:50.698 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:50.698 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:50.698 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:50.698 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:50.698 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:07:50.698 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:50.698 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:07:50.698 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:07:50.698 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:07:50.698 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:07:50.698 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:07:50.698 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:07:50.698 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:50.698 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:50.698 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:50.698 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:50.698 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:50.698 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:50.698 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:50.698 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:50.698 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:50.698 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:50.698 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:50.698 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:50.698 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:50.698 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:50.698 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:50.698 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:50.698 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:50.698 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:50.698 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:50.698 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:50.698 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:50.698 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:50.698 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:50.698 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:50.698 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:50.698 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:50.698 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:50.698 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:50.698 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:50.698 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:50.698 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:50.698 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:50.698 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:50.698 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:50.698 10:48:09 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:50.698 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:50.698 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:50.698 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:50.698 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:50.698 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:50.698 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:50.698 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:50.698 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:50.698 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:50.698 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:50.698 Found net devices under 0000:31:00.0: cvl_0_0 00:07:50.698 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:50.698 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:50.698 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:50.698 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:50.698 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:50.698 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:50.698 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:50.698 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:50.698 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:50.698 Found net devices under 0000:31:00.1: cvl_0_1 00:07:50.698 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:50.698 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:07:50.698 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # is_hw=yes 00:07:50.698 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:07:50.698 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:07:50.698 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:07:50.698 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:50.698 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:50.698 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:50.698 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:50.698 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:50.698 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:50.698 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:50.698 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:50.699 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:50.699 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:50.699 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:50.699 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:50.699 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:50.699 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:50.699 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:50.699 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:50.699 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:50.699 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:50.699 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:50.699 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:50.699 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:50.699 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:50.699 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:50.699 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:50.699 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.609 ms 00:07:50.699 00:07:50.699 --- 10.0.0.2 ping statistics --- 00:07:50.699 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:50.699 rtt min/avg/max/mdev = 0.609/0.609/0.609/0.000 ms 00:07:50.699 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:50.699 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:50.699 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.306 ms 00:07:50.699 00:07:50.699 --- 10.0.0.1 ping statistics --- 00:07:50.699 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:50.699 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:07:50.699 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:50.699 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # return 0 00:07:50.699 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:07:50.699 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:50.699 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:07:50.699 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:07:50.699 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:50.699 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:07:50.699 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:07:50.699 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:07:50.699 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:07:50.699 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:50.699 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:50.699 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # nvmfpid=1653194 00:07:50.699 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # waitforlisten 1653194 00:07:50.699 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:50.699 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 1653194 ']' 00:07:50.699 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:50.699 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:50.699 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:50.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:50.699 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:50.699 10:48:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:50.699 [2024-10-09 10:48:09.681019] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 
00:07:50.699 [2024-10-09 10:48:09.681072] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:50.699 [2024-10-09 10:48:09.819554] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:50.699 [2024-10-09 10:48:09.852272] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.699 [2024-10-09 10:48:09.874155] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:50.699 [2024-10-09 10:48:09.874196] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:50.699 [2024-10-09 10:48:09.874204] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:50.699 [2024-10-09 10:48:09.874211] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:50.699 [2024-10-09 10:48:09.874217] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:50.699 [2024-10-09 10:48:09.874825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.699 10:48:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:50.699 10:48:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:07:50.699 10:48:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:07:50.699 10:48:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:50.699 10:48:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:50.699 10:48:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:50.699 10:48:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:50.699 [2024-10-09 10:48:10.646897] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:50.699 10:48:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:50.699 10:48:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:50.699 10:48:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:50.699 10:48:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:50.699 ************************************ 00:07:50.699 START TEST lvs_grow_clean 00:07:50.699 ************************************ 00:07:50.699 10:48:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:07:50.699 10:48:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:50.699 10:48:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:50.699 10:48:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 
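(Annotation: the locals at nvmf_lvs_grow.sh@15-@20 above fix this test's geometry: a 200 MB file-backed AIO bdev, later grown to 400 MB, carrying a 150 MB lvol. A minimal sketch of the create-then-grow sequence assembled from the commands in the trace; /tmp/aio_file is a stand-in for the workspace path and $LVS a hypothetical placeholder for the lvstore UUID the harness captures:)

    truncate -s 200M /tmp/aio_file                      # sparse backing file
    ./scripts/rpc.py bdev_aio_create /tmp/aio_file aio_bdev 4096
    LVS=$(./scripts/rpc.py bdev_lvol_create_lvstore \
            --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
    truncate -s 400M /tmp/aio_file                      # enlarge the backing file
    ./scripts/rpc.py bdev_aio_rescan aio_bdev           # the AIO bdev picks up the new size
    # ... bdevperf starts writing to an lvol carved from the store ...
    ./scripts/rpc.py bdev_lvol_grow_lvstore -u "$LVS"   # mid-run: the store claims the new clusters

With 4 MiB clusters the store reports 49 data clusters before the grow and 99 after, which is what the data_clusters checks in this test assert.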
00:07:50.699 10:48:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:50.699 10:48:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:50.699 10:48:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:50.699 10:48:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:50.960 10:48:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:50.960 10:48:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:50.960 10:48:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:50.960 10:48:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:51.219 10:48:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=987a5ee2-62d6-4215-84c5-3554e19932af 00:07:51.219 10:48:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 987a5ee2-62d6-4215-84c5-3554e19932af 00:07:51.219 10:48:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:51.479 10:48:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:51.479 10:48:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:51.479 10:48:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 987a5ee2-62d6-4215-84c5-3554e19932af lvol 150 00:07:51.479 10:48:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=ea3eb369-3c4d-4e3c-a975-3a1197742e1b 00:07:51.479 10:48:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:51.479 10:48:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:51.740 [2024-10-09 10:48:11.558090] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:51.740 [2024-10-09 10:48:11.558139] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:51.740 true 00:07:51.740 10:48:11 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 987a5ee2-62d6-4215-84c5-3554e19932af 00:07:51.740 10:48:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:52.000 10:48:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:52.000 10:48:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:52.000 10:48:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ea3eb369-3c4d-4e3c-a975-3a1197742e1b 00:07:52.260 10:48:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:52.260 [2024-10-09 10:48:12.246569] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:52.260 10:48:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:52.520 10:48:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:52.520 10:48:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1653989 00:07:52.520 10:48:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:52.520 10:48:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1653989 /var/tmp/bdevperf.sock 00:07:52.520 10:48:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 1653989 ']' 00:07:52.520 10:48:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:52.520 10:48:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:52.520 10:48:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:52.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:52.520 10:48:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:52.520 10:48:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:52.520 [2024-10-09 10:48:12.460809] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 
00:07:52.520 [2024-10-09 10:48:12.460860] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1653989 ] 00:07:52.781 [2024-10-09 10:48:12.590948] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:52.781 [2024-10-09 10:48:12.638168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.781 [2024-10-09 10:48:12.656200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:53.350 10:48:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:53.350 10:48:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:07:53.350 10:48:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:53.610 Nvme0n1 00:07:53.610 10:48:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:53.871 [ 00:07:53.871 { 00:07:53.871 "name": "Nvme0n1", 00:07:53.871 "aliases": [ 00:07:53.871 "ea3eb369-3c4d-4e3c-a975-3a1197742e1b" 00:07:53.871 ], 00:07:53.871 "product_name": "NVMe disk", 00:07:53.871 "block_size": 4096, 00:07:53.871 "num_blocks": 38912, 00:07:53.871 "uuid": "ea3eb369-3c4d-4e3c-a975-3a1197742e1b", 00:07:53.871 "numa_id": 0, 00:07:53.871 "assigned_rate_limits": { 00:07:53.871 "rw_ios_per_sec": 0, 00:07:53.871 "rw_mbytes_per_sec": 0, 00:07:53.871 "r_mbytes_per_sec": 0, 00:07:53.871 "w_mbytes_per_sec": 0 00:07:53.871 }, 00:07:53.871 "claimed": false, 00:07:53.871 "zoned": false, 00:07:53.871 "supported_io_types": { 00:07:53.871 "read": true, 00:07:53.871 "write": true, 00:07:53.871 "unmap": true, 00:07:53.871 "flush": true, 00:07:53.871 "reset": true, 00:07:53.871 "nvme_admin": true, 00:07:53.871 "nvme_io": true, 00:07:53.871 "nvme_io_md": false, 00:07:53.871 "write_zeroes": true, 00:07:53.871 "zcopy": false, 00:07:53.871 "get_zone_info": false, 00:07:53.871 "zone_management": false, 00:07:53.871 "zone_append": false, 00:07:53.871 "compare": true, 00:07:53.871 "compare_and_write": true, 00:07:53.871 "abort": true, 00:07:53.871 "seek_hole": false, 00:07:53.871 "seek_data": false, 00:07:53.871 "copy": true, 00:07:53.871 "nvme_iov_md": false 00:07:53.871 }, 00:07:53.871 "memory_domains": [ 00:07:53.871 { 00:07:53.871 "dma_device_id": "system", 00:07:53.871 "dma_device_type": 1 00:07:53.871 } 00:07:53.871 ], 00:07:53.871 "driver_specific": { 00:07:53.871 "nvme": [ 00:07:53.871 { 00:07:53.871 "trid": { 00:07:53.871 "trtype": "TCP", 00:07:53.871 "adrfam": "IPv4", 00:07:53.871 "traddr": "10.0.0.2", 00:07:53.871 "trsvcid": "4420", 00:07:53.871 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:53.871 }, 00:07:53.871 "ctrlr_data": { 00:07:53.871 "cntlid": 1, 00:07:53.871 "vendor_id": "0x8086", 00:07:53.871 "model_number": "SPDK bdev Controller", 00:07:53.871 "serial_number": "SPDK0", 00:07:53.871 "firmware_revision": "25.01", 00:07:53.871 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:53.871 "oacs": 
{ 00:07:53.871 "security": 0, 00:07:53.871 "format": 0, 00:07:53.871 "firmware": 0, 00:07:53.871 "ns_manage": 0 00:07:53.871 }, 00:07:53.871 "multi_ctrlr": true, 00:07:53.871 "ana_reporting": false 00:07:53.871 }, 00:07:53.871 "vs": { 00:07:53.871 "nvme_version": "1.3" 00:07:53.871 }, 00:07:53.871 "ns_data": { 00:07:53.871 "id": 1, 00:07:53.871 "can_share": true 00:07:53.871 } 00:07:53.871 } 00:07:53.871 ], 00:07:53.871 "mp_policy": "active_passive" 00:07:53.871 } 00:07:53.871 } 00:07:53.871 ] 00:07:53.871 10:48:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1654130 00:07:53.871 10:48:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:53.871 10:48:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:53.871 Running I/O for 10 seconds... 00:07:54.821 Latency(us) 00:07:54.821 [2024-10-09T08:48:14.823Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:54.821 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:54.821 Nvme0n1 : 1.00 17868.00 69.80 0.00 0.00 0.00 0.00 0.00 00:07:54.821 [2024-10-09T08:48:14.823Z] =================================================================================================================== 00:07:54.821 [2024-10-09T08:48:14.823Z] Total : 17868.00 69.80 0.00 0.00 0.00 0.00 0.00 00:07:54.821 00:07:55.761 10:48:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 987a5ee2-62d6-4215-84c5-3554e19932af 00:07:56.020 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:56.020 Nvme0n1 : 2.00 17955.50 70.14 0.00 0.00 0.00 0.00 0.00 00:07:56.020 [2024-10-09T08:48:16.022Z] =================================================================================================================== 00:07:56.020 [2024-10-09T08:48:16.022Z] Total : 17955.50 70.14 0.00 0.00 0.00 0.00 0.00 00:07:56.020 00:07:56.020 true 00:07:56.020 10:48:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 987a5ee2-62d6-4215-84c5-3554e19932af 00:07:56.020 10:48:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:56.279 10:48:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:56.279 10:48:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:56.279 10:48:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1654130 00:07:56.848 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:56.848 Nvme0n1 : 3.00 17998.67 70.31 0.00 0.00 0.00 0.00 0.00 00:07:56.848 [2024-10-09T08:48:16.850Z] =================================================================================================================== 00:07:56.848 [2024-10-09T08:48:16.851Z] Total : 17998.67 70.31 0.00 0.00 0.00 0.00 0.00 00:07:56.849 00:07:58.229 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:58.229 
Nvme0n1 : 4.00 18036.00 70.45 0.00 0.00 0.00 0.00 0.00 00:07:58.229 [2024-10-09T08:48:18.231Z] =================================================================================================================== 00:07:58.229 [2024-10-09T08:48:18.231Z] Total : 18036.00 70.45 0.00 0.00 0.00 0.00 0.00 00:07:58.229 00:07:58.798 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:58.798 Nvme0n1 : 5.00 18057.80 70.54 0.00 0.00 0.00 0.00 0.00 00:07:58.798 [2024-10-09T08:48:18.800Z] =================================================================================================================== 00:07:58.798 [2024-10-09T08:48:18.800Z] Total : 18057.80 70.54 0.00 0.00 0.00 0.00 0.00 00:07:58.798 00:08:00.180 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:00.180 Nvme0n1 : 6.00 18070.00 70.59 0.00 0.00 0.00 0.00 0.00 00:08:00.180 [2024-10-09T08:48:20.182Z] =================================================================================================================== 00:08:00.180 [2024-10-09T08:48:20.182Z] Total : 18070.00 70.59 0.00 0.00 0.00 0.00 0.00 00:08:00.180 00:08:01.119 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:01.119 Nvme0n1 : 7.00 18080.57 70.63 0.00 0.00 0.00 0.00 0.00 00:08:01.119 [2024-10-09T08:48:21.121Z] =================================================================================================================== 00:08:01.119 [2024-10-09T08:48:21.121Z] Total : 18080.57 70.63 0.00 0.00 0.00 0.00 0.00 00:08:01.119 00:08:02.057 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:02.057 Nvme0n1 : 8.00 18096.25 70.69 0.00 0.00 0.00 0.00 0.00 00:08:02.057 [2024-10-09T08:48:22.059Z] =================================================================================================================== 00:08:02.057 [2024-10-09T08:48:22.059Z] Total : 18096.25 70.69 0.00 0.00 0.00 0.00 0.00 00:08:02.057 00:08:02.996 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:02.996 Nvme0n1 : 9.00 18101.78 70.71 0.00 0.00 0.00 0.00 0.00 00:08:02.996 [2024-10-09T08:48:22.998Z] =================================================================================================================== 00:08:02.996 [2024-10-09T08:48:22.998Z] Total : 18101.78 70.71 0.00 0.00 0.00 0.00 0.00 00:08:02.996 00:08:03.938 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:03.938 Nvme0n1 : 10.00 18105.10 70.72 0.00 0.00 0.00 0.00 0.00 00:08:03.938 [2024-10-09T08:48:23.940Z] =================================================================================================================== 00:08:03.938 [2024-10-09T08:48:23.940Z] Total : 18105.10 70.72 0.00 0.00 0.00 0.00 0.00 00:08:03.938 00:08:03.938 00:08:03.938 Latency(us) 00:08:03.938 [2024-10-09T08:48:23.940Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:03.938 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:03.938 Nvme0n1 : 10.00 18111.67 70.75 0.00 0.00 7064.35 4242.43 13958.97 00:08:03.938 [2024-10-09T08:48:23.940Z] =================================================================================================================== 00:08:03.938 [2024-10-09T08:48:23.940Z] Total : 18111.67 70.75 0.00 0.00 7064.35 4242.43 13958.97 00:08:03.938 { 00:08:03.938 "results": [ 00:08:03.938 { 00:08:03.938 "job": "Nvme0n1", 00:08:03.938 "core_mask": "0x2", 00:08:03.938 "workload": "randwrite", 00:08:03.938 
"status": "finished", 00:08:03.938 "queue_depth": 128, 00:08:03.938 "io_size": 4096, 00:08:03.938 "runtime": 10.003442, 00:08:03.938 "iops": 18111.665964574993, 00:08:03.938 "mibps": 70.74869517412107, 00:08:03.938 "io_failed": 0, 00:08:03.938 "io_timeout": 0, 00:08:03.938 "avg_latency_us": 7064.352936128181, 00:08:03.938 "min_latency_us": 4242.43234213164, 00:08:03.938 "max_latency_us": 13958.970932175074 00:08:03.938 } 00:08:03.938 ], 00:08:03.938 "core_count": 1 00:08:03.938 } 00:08:03.938 10:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1653989 00:08:03.938 10:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 1653989 ']' 00:08:03.938 10:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 1653989 00:08:03.938 10:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:08:03.938 10:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:03.938 10:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1653989 00:08:03.938 10:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:03.938 10:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:03.938 10:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1653989' 00:08:03.938 killing process with pid 1653989 00:08:03.938 10:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 1653989 00:08:03.938 Received shutdown signal, test time was about 10.000000 seconds 00:08:03.938 00:08:03.938 Latency(us) 00:08:03.938 [2024-10-09T08:48:23.940Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:03.938 [2024-10-09T08:48:23.940Z] =================================================================================================================== 00:08:03.938 [2024-10-09T08:48:23.940Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:03.938 10:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 1653989 00:08:04.199 10:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:04.199 10:48:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:04.459 10:48:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 987a5ee2-62d6-4215-84c5-3554e19932af 00:08:04.459 10:48:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:04.719 10:48:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:04.719 10:48:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:04.720 10:48:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:04.720 [2024-10-09 10:48:24.721215] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:04.980 10:48:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 987a5ee2-62d6-4215-84c5-3554e19932af 00:08:04.980 10:48:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:08:04.980 10:48:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 987a5ee2-62d6-4215-84c5-3554e19932af 00:08:04.980 10:48:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:04.980 10:48:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:04.980 10:48:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:04.980 10:48:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:04.980 10:48:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:04.980 10:48:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:04.980 10:48:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:04.980 10:48:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:04.980 10:48:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 987a5ee2-62d6-4215-84c5-3554e19932af 00:08:04.980 request: 00:08:04.980 { 00:08:04.980 "uuid": "987a5ee2-62d6-4215-84c5-3554e19932af", 00:08:04.980 "method": "bdev_lvol_get_lvstores", 00:08:04.980 "req_id": 1 00:08:04.980 } 00:08:04.980 Got JSON-RPC error response 00:08:04.980 response: 00:08:04.980 { 00:08:04.980 "code": -19, 00:08:04.980 "message": "No such device" 00:08:04.980 } 00:08:04.980 10:48:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:08:04.980 10:48:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:04.980 10:48:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:04.980 10:48:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:04.980 10:48:24 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:05.241 aio_bdev 00:08:05.241 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev ea3eb369-3c4d-4e3c-a975-3a1197742e1b 00:08:05.241 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=ea3eb369-3c4d-4e3c-a975-3a1197742e1b 00:08:05.241 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:05.241 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:08:05.241 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:05.241 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:05.241 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:05.501 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b ea3eb369-3c4d-4e3c-a975-3a1197742e1b -t 2000 00:08:05.502 [ 00:08:05.502 { 00:08:05.502 "name": "ea3eb369-3c4d-4e3c-a975-3a1197742e1b", 00:08:05.502 "aliases": [ 00:08:05.502 "lvs/lvol" 00:08:05.502 ], 00:08:05.502 "product_name": "Logical Volume", 00:08:05.502 "block_size": 4096, 00:08:05.502 "num_blocks": 38912, 00:08:05.502 "uuid": "ea3eb369-3c4d-4e3c-a975-3a1197742e1b", 00:08:05.502 "assigned_rate_limits": { 00:08:05.502 "rw_ios_per_sec": 0, 00:08:05.502 "rw_mbytes_per_sec": 0, 00:08:05.502 "r_mbytes_per_sec": 0, 00:08:05.502 "w_mbytes_per_sec": 0 00:08:05.502 }, 00:08:05.502 "claimed": false, 00:08:05.502 "zoned": false, 00:08:05.502 "supported_io_types": { 00:08:05.502 "read": true, 00:08:05.502 "write": true, 00:08:05.502 "unmap": true, 00:08:05.502 "flush": false, 00:08:05.502 "reset": true, 00:08:05.502 "nvme_admin": false, 00:08:05.502 "nvme_io": false, 00:08:05.502 "nvme_io_md": false, 00:08:05.502 "write_zeroes": true, 00:08:05.502 "zcopy": false, 00:08:05.502 "get_zone_info": false, 00:08:05.502 "zone_management": false, 00:08:05.502 "zone_append": false, 00:08:05.502 "compare": false, 00:08:05.502 "compare_and_write": false, 00:08:05.502 "abort": false, 00:08:05.502 "seek_hole": true, 00:08:05.502 "seek_data": true, 00:08:05.502 "copy": false, 00:08:05.502 "nvme_iov_md": false 00:08:05.502 }, 00:08:05.502 "driver_specific": { 00:08:05.502 "lvol": { 00:08:05.502 "lvol_store_uuid": "987a5ee2-62d6-4215-84c5-3554e19932af", 00:08:05.502 "base_bdev": "aio_bdev", 00:08:05.502 "thin_provision": false, 00:08:05.502 "num_allocated_clusters": 38, 00:08:05.502 "snapshot": false, 00:08:05.502 "clone": false, 00:08:05.502 "esnap_clone": false 00:08:05.502 } 00:08:05.502 } 00:08:05.502 } 00:08:05.502 ] 00:08:05.502 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:08:05.502 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 987a5ee2-62d6-4215-84c5-3554e19932af 00:08:05.502 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:05.763 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:05.763 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 987a5ee2-62d6-4215-84c5-3554e19932af 00:08:05.763 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:06.024 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:06.024 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ea3eb369-3c4d-4e3c-a975-3a1197742e1b 00:08:06.024 10:48:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 987a5ee2-62d6-4215-84c5-3554e19932af 00:08:06.284 10:48:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:06.284 10:48:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:06.544 00:08:06.544 real 0m15.614s 00:08:06.544 user 0m15.256s 00:08:06.544 sys 0m1.333s 00:08:06.544 10:48:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:06.544 10:48:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:06.544 ************************************ 00:08:06.544 END TEST lvs_grow_clean 00:08:06.544 ************************************ 00:08:06.544 10:48:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:06.544 10:48:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:06.544 10:48:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:06.544 10:48:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:06.544 ************************************ 00:08:06.544 START TEST lvs_grow_dirty 00:08:06.544 ************************************ 00:08:06.544 10:48:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:08:06.544 10:48:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:06.544 10:48:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:06.544 10:48:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:06.544 10:48:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:06.544 10:48:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:06.544 10:48:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:06.544 10:48:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:06.544 10:48:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:06.544 10:48:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:06.804 10:48:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:06.804 10:48:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:06.804 10:48:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=5b2406c2-4397-4076-a1b1-0d34bd9e0766 00:08:06.804 10:48:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5b2406c2-4397-4076-a1b1-0d34bd9e0766 00:08:06.804 10:48:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:07.065 10:48:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:07.065 10:48:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:07.065 10:48:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 5b2406c2-4397-4076-a1b1-0d34bd9e0766 lvol 150 00:08:07.326 10:48:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=cdb4a30f-069c-413f-9756-8905e952edd5 00:08:07.326 10:48:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:07.326 10:48:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:07.326 [2024-10-09 10:48:27.259551] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:07.326 [2024-10-09 10:48:27.259603] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:07.326 true 00:08:07.326 10:48:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5b2406c2-4397-4076-a1b1-0d34bd9e0766 00:08:07.326 10:48:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:07.605 10:48:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:07.605 10:48:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:07.865 10:48:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 cdb4a30f-069c-413f-9756-8905e952edd5 00:08:07.865 10:48:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:08.125 [2024-10-09 10:48:27.916029] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:08.125 10:48:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:08.125 10:48:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1657115 00:08:08.125 10:48:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:08.125 10:48:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:08.125 10:48:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1657115 /var/tmp/bdevperf.sock 00:08:08.125 10:48:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 1657115 ']' 00:08:08.125 10:48:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:08.125 10:48:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:08.125 10:48:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:08.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:08.125 10:48:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:08.125 10:48:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:08.386 [2024-10-09 10:48:28.149514] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 
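[editor's note] The bdevperf pass traced above follows a three-step harness pattern. A minimal manual sketch of the same run, assuming an nvmf target is already listening on 10.0.0.2:4420 exporting nqn.2016-06.io.spdk:cnode0 as in this log, and run from the spdk checkout root:
  # Start bdevperf idle (-z waits for RPC) with the same I/O shape as this run:
  # 4 KiB writes (-o 4096), randwrite workload, queue depth 128, 10 seconds.
  ./build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
  # Attach the target namespace as bdev Nvme0n1 over the bdevperf RPC socket.
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  # Kick off the configured workload; per-second result tables like those below follow.
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests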
00:08:08.386 [2024-10-09 10:48:28.149564] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1657115 ] 00:08:08.386 [2024-10-09 10:48:28.279712] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:08.386 [2024-10-09 10:48:28.327452] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.386 [2024-10-09 10:48:28.343880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:08.955 10:48:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:08.955 10:48:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:08:08.955 10:48:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:09.216 Nvme0n1 00:08:09.476 10:48:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:09.476 [ 00:08:09.476 { 00:08:09.476 "name": "Nvme0n1", 00:08:09.476 "aliases": [ 00:08:09.476 "cdb4a30f-069c-413f-9756-8905e952edd5" 00:08:09.476 ], 00:08:09.476 "product_name": "NVMe disk", 00:08:09.476 "block_size": 4096, 00:08:09.476 "num_blocks": 38912, 00:08:09.476 "uuid": "cdb4a30f-069c-413f-9756-8905e952edd5", 00:08:09.476 "numa_id": 0, 00:08:09.476 "assigned_rate_limits": { 00:08:09.476 "rw_ios_per_sec": 0, 00:08:09.476 "rw_mbytes_per_sec": 0, 00:08:09.476 "r_mbytes_per_sec": 0, 00:08:09.476 "w_mbytes_per_sec": 0 00:08:09.476 }, 00:08:09.476 "claimed": false, 00:08:09.476 "zoned": false, 00:08:09.476 "supported_io_types": { 00:08:09.476 "read": true, 00:08:09.476 "write": true, 00:08:09.476 "unmap": true, 00:08:09.476 "flush": true, 00:08:09.476 "reset": true, 00:08:09.476 "nvme_admin": true, 00:08:09.476 "nvme_io": true, 00:08:09.476 "nvme_io_md": false, 00:08:09.476 "write_zeroes": true, 00:08:09.476 "zcopy": false, 00:08:09.476 "get_zone_info": false, 00:08:09.476 "zone_management": false, 00:08:09.476 "zone_append": false, 00:08:09.476 "compare": true, 00:08:09.476 "compare_and_write": true, 00:08:09.476 "abort": true, 00:08:09.476 "seek_hole": false, 00:08:09.476 "seek_data": false, 00:08:09.476 "copy": true, 00:08:09.476 "nvme_iov_md": false 00:08:09.476 }, 00:08:09.476 "memory_domains": [ 00:08:09.476 { 00:08:09.476 "dma_device_id": "system", 00:08:09.476 "dma_device_type": 1 00:08:09.476 } 00:08:09.476 ], 00:08:09.476 "driver_specific": { 00:08:09.476 "nvme": [ 00:08:09.477 { 00:08:09.477 "trid": { 00:08:09.477 "trtype": "TCP", 00:08:09.477 "adrfam": "IPv4", 00:08:09.477 "traddr": "10.0.0.2", 00:08:09.477 "trsvcid": "4420", 00:08:09.477 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:09.477 }, 00:08:09.477 "ctrlr_data": { 00:08:09.477 "cntlid": 1, 00:08:09.477 "vendor_id": "0x8086", 00:08:09.477 "model_number": "SPDK bdev Controller", 00:08:09.477 "serial_number": "SPDK0", 00:08:09.477 "firmware_revision": "25.01", 00:08:09.477 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:09.477 "oacs": 
{ 00:08:09.477 "security": 0, 00:08:09.477 "format": 0, 00:08:09.477 "firmware": 0, 00:08:09.477 "ns_manage": 0 00:08:09.477 }, 00:08:09.477 "multi_ctrlr": true, 00:08:09.477 "ana_reporting": false 00:08:09.477 }, 00:08:09.477 "vs": { 00:08:09.477 "nvme_version": "1.3" 00:08:09.477 }, 00:08:09.477 "ns_data": { 00:08:09.477 "id": 1, 00:08:09.477 "can_share": true 00:08:09.477 } 00:08:09.477 } 00:08:09.477 ], 00:08:09.477 "mp_policy": "active_passive" 00:08:09.477 } 00:08:09.477 } 00:08:09.477 ] 00:08:09.477 10:48:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1657427 00:08:09.477 10:48:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:09.477 10:48:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:09.477 Running I/O for 10 seconds... 00:08:10.860 Latency(us) 00:08:10.860 [2024-10-09T08:48:30.862Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:10.860 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:10.860 Nvme0n1 : 1.00 17821.00 69.61 0.00 0.00 0.00 0.00 0.00 00:08:10.860 [2024-10-09T08:48:30.862Z] =================================================================================================================== 00:08:10.860 [2024-10-09T08:48:30.862Z] Total : 17821.00 69.61 0.00 0.00 0.00 0.00 0.00 00:08:10.860 00:08:11.430 10:48:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 5b2406c2-4397-4076-a1b1-0d34bd9e0766 00:08:11.691 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:11.691 Nvme0n1 : 2.00 17944.50 70.10 0.00 0.00 0.00 0.00 0.00 00:08:11.691 [2024-10-09T08:48:31.693Z] =================================================================================================================== 00:08:11.691 [2024-10-09T08:48:31.693Z] Total : 17944.50 70.10 0.00 0.00 0.00 0.00 0.00 00:08:11.691 00:08:11.691 true 00:08:11.691 10:48:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5b2406c2-4397-4076-a1b1-0d34bd9e0766 00:08:11.691 10:48:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:11.951 10:48:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:11.951 10:48:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:11.951 10:48:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1657427 00:08:12.522 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:12.522 Nvme0n1 : 3.00 17984.33 70.25 0.00 0.00 0.00 0.00 0.00 00:08:12.522 [2024-10-09T08:48:32.524Z] =================================================================================================================== 00:08:12.522 [2024-10-09T08:48:32.524Z] Total : 17984.33 70.25 0.00 0.00 0.00 0.00 0.00 00:08:12.522 00:08:13.903 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:13.903 
Nvme0n1 : 4.00 18024.50 70.41 0.00 0.00 0.00 0.00 0.00 00:08:13.903 [2024-10-09T08:48:33.905Z] =================================================================================================================== 00:08:13.903 [2024-10-09T08:48:33.905Z] Total : 18024.50 70.41 0.00 0.00 0.00 0.00 0.00 00:08:13.903 00:08:14.474 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:14.474 Nvme0n1 : 5.00 18042.80 70.48 0.00 0.00 0.00 0.00 0.00 00:08:14.474 [2024-10-09T08:48:34.476Z] =================================================================================================================== 00:08:14.474 [2024-10-09T08:48:34.476Z] Total : 18042.80 70.48 0.00 0.00 0.00 0.00 0.00 00:08:14.474 00:08:15.858 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:15.858 Nvme0n1 : 6.00 18065.83 70.57 0.00 0.00 0.00 0.00 0.00 00:08:15.858 [2024-10-09T08:48:35.860Z] =================================================================================================================== 00:08:15.858 [2024-10-09T08:48:35.860Z] Total : 18065.83 70.57 0.00 0.00 0.00 0.00 0.00 00:08:15.858 00:08:16.800 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:16.800 Nvme0n1 : 7.00 18090.57 70.67 0.00 0.00 0.00 0.00 0.00 00:08:16.800 [2024-10-09T08:48:36.802Z] =================================================================================================================== 00:08:16.800 [2024-10-09T08:48:36.802Z] Total : 18090.57 70.67 0.00 0.00 0.00 0.00 0.00 00:08:16.800 00:08:17.744 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:17.744 Nvme0n1 : 8.00 18095.25 70.68 0.00 0.00 0.00 0.00 0.00 00:08:17.744 [2024-10-09T08:48:37.746Z] =================================================================================================================== 00:08:17.744 [2024-10-09T08:48:37.746Z] Total : 18095.25 70.68 0.00 0.00 0.00 0.00 0.00 00:08:17.744 00:08:18.687 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:18.687 Nvme0n1 : 9.00 18108.11 70.73 0.00 0.00 0.00 0.00 0.00 00:08:18.687 [2024-10-09T08:48:38.689Z] =================================================================================================================== 00:08:18.687 [2024-10-09T08:48:38.689Z] Total : 18108.11 70.73 0.00 0.00 0.00 0.00 0.00 00:08:18.687 00:08:19.627 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:19.627 Nvme0n1 : 10.00 18122.40 70.79 0.00 0.00 0.00 0.00 0.00 00:08:19.627 [2024-10-09T08:48:39.629Z] =================================================================================================================== 00:08:19.627 [2024-10-09T08:48:39.629Z] Total : 18122.40 70.79 0.00 0.00 0.00 0.00 0.00 00:08:19.627 00:08:19.627 00:08:19.627 Latency(us) 00:08:19.627 [2024-10-09T08:48:39.629Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:19.627 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:19.627 Nvme0n1 : 10.01 18122.42 70.79 0.00 0.00 7059.65 2025.42 12754.67 00:08:19.627 [2024-10-09T08:48:39.629Z] =================================================================================================================== 00:08:19.627 [2024-10-09T08:48:39.629Z] Total : 18122.42 70.79 0.00 0.00 7059.65 2025.42 12754.67 00:08:19.627 { 00:08:19.627 "results": [ 00:08:19.627 { 00:08:19.627 "job": "Nvme0n1", 00:08:19.627 "core_mask": "0x2", 00:08:19.627 "workload": "randwrite", 00:08:19.627 
"status": "finished", 00:08:19.627 "queue_depth": 128, 00:08:19.627 "io_size": 4096, 00:08:19.627 "runtime": 10.007052, 00:08:19.627 "iops": 18122.420069367083, 00:08:19.627 "mibps": 70.79070339596517, 00:08:19.627 "io_failed": 0, 00:08:19.627 "io_timeout": 0, 00:08:19.627 "avg_latency_us": 7059.646695142048, 00:08:19.627 "min_latency_us": 2025.419311727364, 00:08:19.627 "max_latency_us": 12754.66755763448 00:08:19.627 } 00:08:19.627 ], 00:08:19.627 "core_count": 1 00:08:19.627 } 00:08:19.627 10:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1657115 00:08:19.627 10:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 1657115 ']' 00:08:19.627 10:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 1657115 00:08:19.627 10:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:08:19.627 10:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:19.627 10:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1657115 00:08:19.627 10:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:19.627 10:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:19.627 10:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1657115' 00:08:19.627 killing process with pid 1657115 00:08:19.627 10:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 1657115 00:08:19.627 Received shutdown signal, test time was about 10.000000 seconds 00:08:19.627 00:08:19.627 Latency(us) 00:08:19.627 [2024-10-09T08:48:39.629Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:19.627 [2024-10-09T08:48:39.629Z] =================================================================================================================== 00:08:19.627 [2024-10-09T08:48:39.629Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:19.627 10:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 1657115 00:08:19.887 10:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:19.887 10:48:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:20.147 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5b2406c2-4397-4076-a1b1-0d34bd9e0766 00:08:20.147 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:20.408 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:20.408 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:20.408 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1653194 00:08:20.408 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1653194 00:08:20.408 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1653194 Killed "${NVMF_APP[@]}" "$@" 00:08:20.408 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:20.408 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:20.408 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:08:20.408 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:20.408 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:20.408 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # nvmfpid=1659488 00:08:20.408 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # waitforlisten 1659488 00:08:20.408 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:20.408 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 1659488 ']' 00:08:20.408 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:20.408 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:20.408 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:20.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:20.408 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:20.408 10:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:20.408 [2024-10-09 10:48:40.328386] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:08:20.408 [2024-10-09 10:48:40.328444] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:20.668 [2024-10-09 10:48:40.465844] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:20.668 [2024-10-09 10:48:40.497502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.668 [2024-10-09 10:48:40.518827] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:20.668 [2024-10-09 10:48:40.518866] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
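[editor's note] As the target's app_setup_trace notices above point out, the tracepoint data (shm id 0, matching -i 0 on the nvmf_tgt command line) can be read live or offline. A short sketch, assuming the spdk_trace tool from this build tree and its -f option for parsing a copied trace file:
  # Live snapshot while nvmf_tgt is still running:
  ./build/bin/spdk_trace -s nvmf -i 0
  # Offline: copy the shm file first (as the notice suggests), then parse it.
  cp /dev/shm/nvmf_trace.0 /tmp/ && ./build/bin/spdk_trace -f /tmp/nvmf_trace.0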
00:08:20.668 [2024-10-09 10:48:40.518874] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:20.668 [2024-10-09 10:48:40.518881] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:20.668 [2024-10-09 10:48:40.518887] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:20.668 [2024-10-09 10:48:40.519591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.238 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:21.239 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:08:21.239 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:21.239 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:21.239 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:21.239 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:21.239 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:21.500 [2024-10-09 10:48:41.317287] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:21.500 [2024-10-09 10:48:41.317422] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:21.500 [2024-10-09 10:48:41.317453] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:21.500 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:21.500 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev cdb4a30f-069c-413f-9756-8905e952edd5 00:08:21.500 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=cdb4a30f-069c-413f-9756-8905e952edd5 00:08:21.500 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:21.500 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:08:21.500 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:21.500 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:21.500 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:21.760 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b cdb4a30f-069c-413f-9756-8905e952edd5 -t 2000 00:08:21.760 [ 00:08:21.760 { 00:08:21.760 "name": "cdb4a30f-069c-413f-9756-8905e952edd5", 00:08:21.760 "aliases": [ 00:08:21.760 "lvs/lvol" 00:08:21.760 ], 00:08:21.760 "product_name": 
"Logical Volume", 00:08:21.760 "block_size": 4096, 00:08:21.760 "num_blocks": 38912, 00:08:21.760 "uuid": "cdb4a30f-069c-413f-9756-8905e952edd5", 00:08:21.760 "assigned_rate_limits": { 00:08:21.760 "rw_ios_per_sec": 0, 00:08:21.760 "rw_mbytes_per_sec": 0, 00:08:21.760 "r_mbytes_per_sec": 0, 00:08:21.760 "w_mbytes_per_sec": 0 00:08:21.760 }, 00:08:21.760 "claimed": false, 00:08:21.760 "zoned": false, 00:08:21.760 "supported_io_types": { 00:08:21.760 "read": true, 00:08:21.760 "write": true, 00:08:21.760 "unmap": true, 00:08:21.760 "flush": false, 00:08:21.760 "reset": true, 00:08:21.760 "nvme_admin": false, 00:08:21.760 "nvme_io": false, 00:08:21.760 "nvme_io_md": false, 00:08:21.760 "write_zeroes": true, 00:08:21.760 "zcopy": false, 00:08:21.760 "get_zone_info": false, 00:08:21.760 "zone_management": false, 00:08:21.760 "zone_append": false, 00:08:21.760 "compare": false, 00:08:21.760 "compare_and_write": false, 00:08:21.760 "abort": false, 00:08:21.760 "seek_hole": true, 00:08:21.760 "seek_data": true, 00:08:21.760 "copy": false, 00:08:21.760 "nvme_iov_md": false 00:08:21.760 }, 00:08:21.760 "driver_specific": { 00:08:21.760 "lvol": { 00:08:21.760 "lvol_store_uuid": "5b2406c2-4397-4076-a1b1-0d34bd9e0766", 00:08:21.760 "base_bdev": "aio_bdev", 00:08:21.760 "thin_provision": false, 00:08:21.760 "num_allocated_clusters": 38, 00:08:21.760 "snapshot": false, 00:08:21.760 "clone": false, 00:08:21.760 "esnap_clone": false 00:08:21.760 } 00:08:21.760 } 00:08:21.760 } 00:08:21.760 ] 00:08:21.760 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:08:21.760 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5b2406c2-4397-4076-a1b1-0d34bd9e0766 00:08:21.760 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:22.020 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:22.020 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5b2406c2-4397-4076-a1b1-0d34bd9e0766 00:08:22.020 10:48:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:22.020 10:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:22.020 10:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:22.280 [2024-10-09 10:48:42.163499] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:22.280 10:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5b2406c2-4397-4076-a1b1-0d34bd9e0766 00:08:22.280 10:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:08:22.280 10:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5b2406c2-4397-4076-a1b1-0d34bd9e0766 00:08:22.280 10:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:22.280 10:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:22.280 10:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:22.280 10:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:22.280 10:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:22.280 10:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:22.280 10:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:22.280 10:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:22.280 10:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5b2406c2-4397-4076-a1b1-0d34bd9e0766 00:08:22.540 request: 00:08:22.540 { 00:08:22.540 "uuid": "5b2406c2-4397-4076-a1b1-0d34bd9e0766", 00:08:22.540 "method": "bdev_lvol_get_lvstores", 00:08:22.540 "req_id": 1 00:08:22.540 } 00:08:22.540 Got JSON-RPC error response 00:08:22.540 response: 00:08:22.540 { 00:08:22.540 "code": -19, 00:08:22.540 "message": "No such device" 00:08:22.540 } 00:08:22.540 10:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:08:22.540 10:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:22.540 10:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:22.540 10:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:22.540 10:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:22.540 aio_bdev 00:08:22.540 10:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev cdb4a30f-069c-413f-9756-8905e952edd5 00:08:22.540 10:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=cdb4a30f-069c-413f-9756-8905e952edd5 00:08:22.540 10:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:22.540 10:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:08:22.540 10:48:42 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:22.540 10:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:22.540 10:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:22.801 10:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b cdb4a30f-069c-413f-9756-8905e952edd5 -t 2000 00:08:23.062 [ 00:08:23.062 { 00:08:23.062 "name": "cdb4a30f-069c-413f-9756-8905e952edd5", 00:08:23.062 "aliases": [ 00:08:23.062 "lvs/lvol" 00:08:23.062 ], 00:08:23.062 "product_name": "Logical Volume", 00:08:23.062 "block_size": 4096, 00:08:23.062 "num_blocks": 38912, 00:08:23.062 "uuid": "cdb4a30f-069c-413f-9756-8905e952edd5", 00:08:23.062 "assigned_rate_limits": { 00:08:23.062 "rw_ios_per_sec": 0, 00:08:23.062 "rw_mbytes_per_sec": 0, 00:08:23.062 "r_mbytes_per_sec": 0, 00:08:23.062 "w_mbytes_per_sec": 0 00:08:23.062 }, 00:08:23.062 "claimed": false, 00:08:23.062 "zoned": false, 00:08:23.062 "supported_io_types": { 00:08:23.062 "read": true, 00:08:23.062 "write": true, 00:08:23.062 "unmap": true, 00:08:23.062 "flush": false, 00:08:23.062 "reset": true, 00:08:23.062 "nvme_admin": false, 00:08:23.062 "nvme_io": false, 00:08:23.062 "nvme_io_md": false, 00:08:23.062 "write_zeroes": true, 00:08:23.062 "zcopy": false, 00:08:23.062 "get_zone_info": false, 00:08:23.062 "zone_management": false, 00:08:23.062 "zone_append": false, 00:08:23.062 "compare": false, 00:08:23.062 "compare_and_write": false, 00:08:23.062 "abort": false, 00:08:23.062 "seek_hole": true, 00:08:23.062 "seek_data": true, 00:08:23.062 "copy": false, 00:08:23.062 "nvme_iov_md": false 00:08:23.062 }, 00:08:23.062 "driver_specific": { 00:08:23.062 "lvol": { 00:08:23.062 "lvol_store_uuid": "5b2406c2-4397-4076-a1b1-0d34bd9e0766", 00:08:23.062 "base_bdev": "aio_bdev", 00:08:23.062 "thin_provision": false, 00:08:23.062 "num_allocated_clusters": 38, 00:08:23.062 "snapshot": false, 00:08:23.062 "clone": false, 00:08:23.062 "esnap_clone": false 00:08:23.062 } 00:08:23.062 } 00:08:23.062 } 00:08:23.062 ] 00:08:23.062 10:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:08:23.062 10:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5b2406c2-4397-4076-a1b1-0d34bd9e0766 00:08:23.062 10:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:23.062 10:48:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:23.062 10:48:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5b2406c2-4397-4076-a1b1-0d34bd9e0766 00:08:23.062 10:48:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:23.323 10:48:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:23.323 10:48:43 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete cdb4a30f-069c-413f-9756-8905e952edd5 00:08:23.583 10:48:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5b2406c2-4397-4076-a1b1-0d34bd9e0766 00:08:23.583 10:48:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:23.843 10:48:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:23.843 00:08:23.843 real 0m17.326s 00:08:23.843 user 0m45.119s 00:08:23.843 sys 0m2.991s 00:08:23.843 10:48:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:23.843 10:48:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:23.843 ************************************ 00:08:23.843 END TEST lvs_grow_dirty 00:08:23.843 ************************************ 00:08:23.843 10:48:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:23.843 10:48:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:08:23.843 10:48:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:08:23.843 10:48:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:08:23.843 10:48:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:23.843 10:48:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:08:23.843 10:48:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:08:23.843 10:48:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:08:23.843 10:48:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:23.843 nvmf_trace.0 00:08:23.843 10:48:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:08:23.843 10:48:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:23.843 10:48:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@514 -- # nvmfcleanup 00:08:23.843 10:48:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:08:23.843 10:48:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:23.843 10:48:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:08:23.843 10:48:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:23.843 10:48:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:23.843 rmmod nvme_tcp 00:08:24.104 rmmod nvme_fabrics 00:08:24.104 rmmod nvme_keyring 00:08:24.104 10:48:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # 
modprobe -v -r nvme-fabrics 00:08:24.104 10:48:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:08:24.104 10:48:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:08:24.104 10:48:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@515 -- # '[' -n 1659488 ']' 00:08:24.104 10:48:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # killprocess 1659488 00:08:24.104 10:48:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 1659488 ']' 00:08:24.104 10:48:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 1659488 00:08:24.104 10:48:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:08:24.104 10:48:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:24.104 10:48:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1659488 00:08:24.104 10:48:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:24.104 10:48:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:24.104 10:48:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1659488' 00:08:24.104 killing process with pid 1659488 00:08:24.104 10:48:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 1659488 00:08:24.104 10:48:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 1659488 00:08:24.104 10:48:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:08:24.104 10:48:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:08:24.104 10:48:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:08:24.104 10:48:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:08:24.104 10:48:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-save 00:08:24.104 10:48:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:08:24.104 10:48:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-restore 00:08:24.104 10:48:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:24.104 10:48:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:24.104 10:48:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:24.104 10:48:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:24.104 10:48:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:26.651 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:26.651 00:08:26.651 real 0m44.031s 00:08:26.651 user 1m6.471s 00:08:26.651 sys 0m10.240s 00:08:26.651 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:26.651 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:26.651 ************************************ 00:08:26.651 END 
TEST nvmf_lvs_grow 00:08:26.651 ************************************ 00:08:26.651 10:48:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:26.651 10:48:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:26.651 10:48:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:26.651 10:48:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:26.651 ************************************ 00:08:26.651 START TEST nvmf_bdev_io_wait 00:08:26.651 ************************************ 00:08:26.651 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:26.651 * Looking for test storage... 00:08:26.651 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:26.651 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:26.651 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:08:26.651 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:26.651 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:26.651 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:26.651 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:26.651 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:26.651 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:08:26.651 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:08:26.651 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:08:26.651 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:08:26.651 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:08:26.651 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:08:26.651 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:08:26.651 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:26.651 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:08:26.651 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:08:26.651 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:26.651 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:26.651 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:08:26.651 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:08:26.651 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:26.651 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:08:26.651 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:08:26.651 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:08:26.651 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:08:26.651 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:26.651 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:08:26.651 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:08:26.651 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:26.651 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:26.651 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:08:26.651 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:26.651 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:26.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.651 --rc genhtml_branch_coverage=1 00:08:26.651 --rc genhtml_function_coverage=1 00:08:26.651 --rc genhtml_legend=1 00:08:26.651 --rc geninfo_all_blocks=1 00:08:26.651 --rc geninfo_unexecuted_blocks=1 00:08:26.651 00:08:26.651 ' 00:08:26.651 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:26.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.651 --rc genhtml_branch_coverage=1 00:08:26.651 --rc genhtml_function_coverage=1 00:08:26.651 --rc genhtml_legend=1 00:08:26.651 --rc geninfo_all_blocks=1 00:08:26.651 --rc geninfo_unexecuted_blocks=1 00:08:26.651 00:08:26.651 ' 00:08:26.651 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:26.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.651 --rc genhtml_branch_coverage=1 00:08:26.651 --rc genhtml_function_coverage=1 00:08:26.651 --rc genhtml_legend=1 00:08:26.651 --rc geninfo_all_blocks=1 00:08:26.651 --rc geninfo_unexecuted_blocks=1 00:08:26.651 00:08:26.651 ' 00:08:26.651 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:26.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.651 --rc genhtml_branch_coverage=1 00:08:26.651 --rc genhtml_function_coverage=1 00:08:26.651 --rc genhtml_legend=1 00:08:26.651 --rc geninfo_all_blocks=1 00:08:26.651 --rc geninfo_unexecuted_blocks=1 00:08:26.651 00:08:26.651 ' 00:08:26.651 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:26.651 10:48:46 
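The trace above steps through cmp_versions from scripts/common.sh as it decides whether the installed lcov predates 2.0 (the `lt 1.15 2` call): both version strings are split on dots and dashes into arrays, which are then compared component by component. A condensed, numeric-only reimplementation of the logic visible in the trace (the real helper handles more cases; this sketch assumes plain dotted numeric versions):

    # Return success when dotted version $1 is strictly less than $2.
    # Mirrors the cmp_versions flow from scripts/common.sh seen above.
    lt() {
        local -a ver1 ver2
        local v max
        IFS=.- read -ra ver1 <<< "$1"
        IFS=.- read -ra ver2 <<< "$2"
        max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            if (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then return 0; fi
            if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then return 1; fi
        done
        return 1  # equal, so not less-than
    }
    # Same decision the test makes: legacy lcov 1.x needs the --rc options.
    lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov 1.x: add --rc lcov_*_coverage=1"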
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:26.651 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:26.651 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:26.651 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:26.651 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:26.651 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:26.651 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:26.651 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:26.651 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:26.651 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:26.651 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:26.651 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:26.651 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:26.651 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:26.651 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:26.651 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:26.651 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:26.652 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:26.652 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:08:26.652 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:26.652 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:26.652 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:26.652 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.652 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.652 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.652 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:26.652 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.652 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:08:26.652 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:26.652 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:26.652 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:26.652 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:26.652 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:26.652 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:26.652 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:26.652 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:26.652 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:26.652 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:26.652 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:26.652 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:08:26.652 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:26.652 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:08:26.652 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:26.652 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # prepare_net_devs 00:08:26.652 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # local -g is_hw=no 00:08:26.652 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # remove_spdk_ns 00:08:26.652 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:26.652 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:26.652 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:26.652 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:08:26.652 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:08:26.652 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:08:26.652 10:48:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:34.797 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:34.797 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:08:34.797 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:34.797 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:34.797 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:34.797 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:34.797 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:34.797 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:08:34.797 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:34.797 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:08:34.797 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:08:34.797 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:08:34.797 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:08:34.797 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:08:34.797 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:08:34.797 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:34.797 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:34.797 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:34.797 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:34.797 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:34.797 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:34.797 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:34.797 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:34.797 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:34.797 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:34.797 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:34.797 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:34.797 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:34.797 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:34.797 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:34.797 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:34.797 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:34.797 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:34.797 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:34.797 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:34.797 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:34.797 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:34.797 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:34.797 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:34.797 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:34.797 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:34.797 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:34.797 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:34.797 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:34.797 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:34.797 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:34.797 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:34.798 10:48:53 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:34.798 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:34.798 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:34.798 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:34.798 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:34.798 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:34.798 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:34.798 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:34.798 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:34.798 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:34.798 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:34.798 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:34.798 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:34.798 Found net devices under 0000:31:00.0: cvl_0_0 00:08:34.798 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:34.798 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:34.798 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:34.798 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:34.798 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:34.798 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:34.798 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:34.798 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:34.798 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:34.798 Found net devices under 0000:31:00.1: cvl_0_1 00:08:34.798 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:34.798 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:08:34.798 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # is_hw=yes 00:08:34.798 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:08:34.798 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:08:34.798 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:08:34.798 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:34.798 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:34.798 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:34.798 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:34.798 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:34.798 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:34.798 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:34.798 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:34.798 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:34.798 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:34.798 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:34.798 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:34.798 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:34.798 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:34.798 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:34.798 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:34.798 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:34.798 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:34.798 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:34.798 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:34.798 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:34.798 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:34.798 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:34.798 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:34.798 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.681 ms 00:08:34.798 00:08:34.798 --- 10.0.0.2 ping statistics --- 00:08:34.798 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:34.798 rtt min/avg/max/mdev = 0.681/0.681/0.681/0.000 ms 00:08:34.798 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:34.798 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:34.798 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.250 ms 00:08:34.798 00:08:34.798 --- 10.0.0.1 ping statistics --- 00:08:34.798 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:34.798 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:08:34.798 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:34.798 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # return 0 00:08:34.798 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:08:34.798 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:34.798 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:08:34.798 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:08:34.798 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:34.798 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:08:34.798 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:08:34.798 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:34.798 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:08:34.798 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:34.798 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:34.798 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # nvmfpid=1664629 00:08:34.798 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # waitforlisten 1664629 00:08:34.798 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:34.798 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 1664629 ']' 00:08:34.798 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:34.798 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:34.798 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:34.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:34.798 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:34.798 10:48:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:34.798 [2024-10-09 10:48:53.993535] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 
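The block above is nvmftestinit building the test network: nvmf_tcp_init moves the first E810 port (cvl_0_0) into a private namespace to host the target at 10.0.0.2, leaves the second port (cvl_0_1) in the root namespace as the initiator at 10.0.0.1, opens TCP port 4420 with a comment-tagged iptables rule, and ping-checks both directions before loading nvme-tcp and launching nvmf_tgt inside the namespace. The same setup, extracted from the trace as a standalone sketch (interface names are the ones from this run; substitute your own):

    # Target NIC lives in its own namespace; initiator NIC stays in the root one.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Tag the firewall rule so teardown can strip it again by comment.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    # Verify reachability in both directions before starting the target.
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1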
00:08:34.798 [2024-10-09 10:48:53.993614] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:34.798 [2024-10-09 10:48:54.133556] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:34.798 [2024-10-09 10:48:54.164265] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:34.798 [2024-10-09 10:48:54.183312] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:34.798 [2024-10-09 10:48:54.183341] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:34.798 [2024-10-09 10:48:54.183351] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:34.798 [2024-10-09 10:48:54.183358] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:34.798 [2024-10-09 10:48:54.183363] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:34.798 [2024-10-09 10:48:54.184836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:34.798 [2024-10-09 10:48:54.184951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:34.798 [2024-10-09 10:48:54.185104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.798 [2024-10-09 10:48:54.185105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:35.059 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:35.059 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:08:35.059 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:35.060 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:35.060 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:35.060 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:35.060 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:35.060 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.060 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:35.060 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.060 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:35.060 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.060 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:35.060 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.060 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:35.060 10:48:54 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.060 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:35.060 [2024-10-09 10:48:54.903568] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:35.060 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.060 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:35.060 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.060 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:35.060 Malloc0 00:08:35.060 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.060 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:35.060 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.060 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:35.060 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.060 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:35.060 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.060 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:35.060 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.060 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:35.060 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.060 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:35.060 [2024-10-09 10:48:54.962680] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:35.060 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.060 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1664837 00:08:35.060 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:35.060 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1664840 00:08:35.060 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:35.060 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:08:35.060 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:08:35.060 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
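With the target process up, the test provisions it entirely over JSON-RPC, as traced above: deliberately tiny bdev_io pool and cache sizes (presumably to force I/O onto the wait queue, which is what this test exercises), framework init out of --wait-for-rpc mode, a TCP transport, a 64 MiB/512 B malloc bdev, and a subsystem exposing that namespace on 10.0.0.2:4420. Since rpc_cmd forwards its arguments to scripts/rpc.py, the equivalent direct invocation (default RPC socket assumed) would be roughly:

    rpc=./scripts/rpc.py
    $rpc bdev_set_options -p 5 -c 1        # shrink the bdev_io pool/cache
    $rpc framework_start_init              # leave --wait-for-rpc mode
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420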
nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:08:35.060 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:08:35.060 { 00:08:35.060 "params": { 00:08:35.060 "name": "Nvme$subsystem", 00:08:35.060 "trtype": "$TEST_TRANSPORT", 00:08:35.060 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:35.060 "adrfam": "ipv4", 00:08:35.060 "trsvcid": "$NVMF_PORT", 00:08:35.060 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:35.060 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:35.060 "hdgst": ${hdgst:-false}, 00:08:35.060 "ddgst": ${ddgst:-false} 00:08:35.060 }, 00:08:35.060 "method": "bdev_nvme_attach_controller" 00:08:35.060 } 00:08:35.060 EOF 00:08:35.060 )") 00:08:35.060 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1664843 00:08:35.060 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:35.060 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:35.060 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:08:35.060 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:08:35.060 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:08:35.060 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:35.060 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1664847 00:08:35.060 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:08:35.060 { 00:08:35.060 "params": { 00:08:35.060 "name": "Nvme$subsystem", 00:08:35.060 "trtype": "$TEST_TRANSPORT", 00:08:35.060 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:35.060 "adrfam": "ipv4", 00:08:35.060 "trsvcid": "$NVMF_PORT", 00:08:35.060 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:35.060 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:35.060 "hdgst": ${hdgst:-false}, 00:08:35.060 "ddgst": ${ddgst:-false} 00:08:35.060 }, 00:08:35.060 "method": "bdev_nvme_attach_controller" 00:08:35.060 } 00:08:35.060 EOF 00:08:35.060 )") 00:08:35.060 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:35.060 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:35.060 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:08:35.060 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:08:35.060 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:08:35.060 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:08:35.060 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:08:35.060 { 00:08:35.060 "params": { 00:08:35.060 "name": "Nvme$subsystem", 00:08:35.060 "trtype": "$TEST_TRANSPORT", 00:08:35.060 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:35.060 "adrfam": "ipv4", 
00:08:35.060 "trsvcid": "$NVMF_PORT", 00:08:35.060 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:35.060 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:35.060 "hdgst": ${hdgst:-false}, 00:08:35.060 "ddgst": ${ddgst:-false} 00:08:35.060 }, 00:08:35.060 "method": "bdev_nvme_attach_controller" 00:08:35.060 } 00:08:35.060 EOF 00:08:35.060 )") 00:08:35.060 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:35.060 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:35.060 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:08:35.060 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:08:35.060 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:08:35.060 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:08:35.060 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:08:35.060 { 00:08:35.060 "params": { 00:08:35.060 "name": "Nvme$subsystem", 00:08:35.060 "trtype": "$TEST_TRANSPORT", 00:08:35.060 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:35.060 "adrfam": "ipv4", 00:08:35.060 "trsvcid": "$NVMF_PORT", 00:08:35.060 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:35.060 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:35.060 "hdgst": ${hdgst:-false}, 00:08:35.060 "ddgst": ${ddgst:-false} 00:08:35.060 }, 00:08:35.060 "method": "bdev_nvme_attach_controller" 00:08:35.060 } 00:08:35.060 EOF 00:08:35.060 )") 00:08:35.060 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:08:35.060 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1664837 00:08:35.060 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:08:35.060 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:08:35.060 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:08:35.060 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:08:35.060 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:08:35.060 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:08:35.060 "params": { 00:08:35.060 "name": "Nvme1", 00:08:35.060 "trtype": "tcp", 00:08:35.060 "traddr": "10.0.0.2", 00:08:35.060 "adrfam": "ipv4", 00:08:35.060 "trsvcid": "4420", 00:08:35.060 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:35.060 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:35.060 "hdgst": false, 00:08:35.060 "ddgst": false 00:08:35.060 }, 00:08:35.060 "method": "bdev_nvme_attach_controller" 00:08:35.060 }' 00:08:35.060 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 
00:08:35.060 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:08:35.060 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:08:35.060 "params": { 00:08:35.061 "name": "Nvme1", 00:08:35.061 "trtype": "tcp", 00:08:35.061 "traddr": "10.0.0.2", 00:08:35.061 "adrfam": "ipv4", 00:08:35.061 "trsvcid": "4420", 00:08:35.061 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:35.061 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:35.061 "hdgst": false, 00:08:35.061 "ddgst": false 00:08:35.061 }, 00:08:35.061 "method": "bdev_nvme_attach_controller" 00:08:35.061 }' 00:08:35.061 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:08:35.061 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:08:35.061 "params": { 00:08:35.061 "name": "Nvme1", 00:08:35.061 "trtype": "tcp", 00:08:35.061 "traddr": "10.0.0.2", 00:08:35.061 "adrfam": "ipv4", 00:08:35.061 "trsvcid": "4420", 00:08:35.061 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:35.061 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:35.061 "hdgst": false, 00:08:35.061 "ddgst": false 00:08:35.061 }, 00:08:35.061 "method": "bdev_nvme_attach_controller" 00:08:35.061 }' 00:08:35.061 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:08:35.061 10:48:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:08:35.061 "params": { 00:08:35.061 "name": "Nvme1", 00:08:35.061 "trtype": "tcp", 00:08:35.061 "traddr": "10.0.0.2", 00:08:35.061 "adrfam": "ipv4", 00:08:35.061 "trsvcid": "4420", 00:08:35.061 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:35.061 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:35.061 "hdgst": false, 00:08:35.061 "ddgst": false 00:08:35.061 }, 00:08:35.061 "method": "bdev_nvme_attach_controller" 00:08:35.061 }' 00:08:35.061 [2024-10-09 10:48:55.015099] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:08:35.061 [2024-10-09 10:48:55.015151] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:35.061 [2024-10-09 10:48:55.018918] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:08:35.061 [2024-10-09 10:48:55.018962] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:35.061 [2024-10-09 10:48:55.020085] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:08:35.061 [2024-10-09 10:48:55.020132] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:35.061 [2024-10-09 10:48:55.021356] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 
00:08:35.061 [2024-10-09 10:48:55.021402] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:35.321 [2024-10-09 10:48:55.212840] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:35.321 [2024-10-09 10:48:55.263696] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.321 [2024-10-09 10:48:55.267038] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:35.321 [2024-10-09 10:48:55.274470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:35.321 [2024-10-09 10:48:55.314651] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:35.321 [2024-10-09 10:48:55.317216] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.582 [2024-10-09 10:48:55.328636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:08:35.582 [2024-10-09 10:48:55.362110] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:35.582 [2024-10-09 10:48:55.363528] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.582 [2024-10-09 10:48:55.373973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:08:35.582 [2024-10-09 10:48:55.411336] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.582 [2024-10-09 10:48:55.421749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:08:35.582 Running I/O for 1 seconds... 00:08:35.582 Running I/O for 1 seconds... 00:08:35.582 Running I/O for 1 seconds... 00:08:35.842 Running I/O for 1 seconds... 
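Four bdevperf secondaries now run concurrently against the same remote namespace, one per I/O type, each on its own core mask and shared-memory instance ID, with the script recording their PIDs for the ordered waits that follow. The launch pattern, condensed (bp is a hypothetical wrapper around the full invocation shown earlier; gen_nvmf_target_json is the helper traced above):

    # One bdevperf secondary per workload: core mask, instance id, io type.
    bp() { ./build/examples/bdevperf -m "$1" -i "$2" --json <(gen_nvmf_target_json) \
           -q 128 -o 4096 -w "$3" -t 1 -s 256; }
    bp 0x10 1 write & WRITE_PID=$!
    bp 0x20 2 read  & READ_PID=$!
    bp 0x40 3 flush & FLUSH_PID=$!
    bp 0x80 4 unmap & UNMAP_PID=$!
    wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"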
00:08:36.783 20202.00 IOPS, 78.91 MiB/s
00:08:36.783 Latency(us)
00:08:36.783 [2024-10-09T08:48:56.785Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:36.783 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:08:36.783 Nvme1n1 : 1.01 20261.27 79.15 0.00 0.00 6301.28 3092.87 15655.94
00:08:36.783 [2024-10-09T08:48:56.785Z] ===================================================================================================================
00:08:36.783 [2024-10-09T08:48:56.785Z] Total : 20261.27 79.15 0.00 0.00 6301.28 3092.87 15655.94
00:08:36.783 8611.00 IOPS, 33.64 MiB/s
00:08:36.783 Latency(us)
00:08:36.783 [2024-10-09T08:48:56.785Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:36.783 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:08:36.783 Nvme1n1 : 1.01 8625.18 33.69 0.00 0.00 14744.77 5227.77 22115.39
00:08:36.783 [2024-10-09T08:48:56.785Z] ===================================================================================================================
00:08:36.783 [2024-10-09T08:48:56.785Z] Total : 8625.18 33.69 0.00 0.00 14744.77 5227.77 22115.39
00:08:36.783 187368.00 IOPS, 731.91 MiB/s
00:08:36.783 Latency(us)
00:08:36.783 [2024-10-09T08:48:56.785Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:36.783 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:08:36.783 Nvme1n1 : 1.00 186992.48 730.44 0.00 0.00 680.90 306.21 1984.36
00:08:36.783 [2024-10-09T08:48:56.785Z] ===================================================================================================================
00:08:36.783 [2024-10-09T08:48:56.785Z] Total : 186992.48 730.44 0.00 0.00 680.90 306.21 1984.36
00:08:36.783 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1664840
00:08:36.783 8818.00 IOPS, 34.45 MiB/s
00:08:36.783 Latency(us)
00:08:36.783 [2024-10-09T08:48:56.785Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:36.783 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:08:36.783 Nvme1n1 : 1.01 8898.79 34.76 0.00 0.00 14345.65 3777.13 36348.07
00:08:36.783 [2024-10-09T08:48:56.785Z] ===================================================================================================================
00:08:36.783 [2024-10-09T08:48:56.785Z] Total : 8898.79 34.76 0.00 0.00 14345.65 3777.13 36348.07
00:08:37.044 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1664843
00:08:37.044 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1664847
00:08:37.044 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:08:37.044 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:37.044 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:08:37.044 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:37.044 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
00:08:37.044 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini
00:08:37.044 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- #
nvmfcleanup 00:08:37.044 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:08:37.044 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:37.044 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:08:37.044 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:37.044 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:37.044 rmmod nvme_tcp 00:08:37.044 rmmod nvme_fabrics 00:08:37.044 rmmod nvme_keyring 00:08:37.044 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:37.044 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:08:37.044 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:08:37.044 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@515 -- # '[' -n 1664629 ']' 00:08:37.044 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # killprocess 1664629 00:08:37.044 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 1664629 ']' 00:08:37.044 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 1664629 00:08:37.044 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:08:37.044 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:37.044 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1664629 00:08:37.044 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:37.044 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:37.044 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1664629' 00:08:37.044 killing process with pid 1664629 00:08:37.044 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 1664629 00:08:37.044 10:48:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 1664629 00:08:37.305 10:48:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:08:37.305 10:48:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:08:37.305 10:48:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:08:37.305 10:48:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:08:37.305 10:48:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-restore 00:08:37.305 10:48:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-save 00:08:37.305 10:48:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:08:37.305 10:48:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:37.305 10:48:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:37.305 10:48:57 
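nvmftestfini above is the mirror image of the setup: the host-side NVMe modules are unloaded (the rmmod lines), the target is killed by PID, the comment-tagged firewall rules are stripped, and the namespace and leftover initiator address are removed by the _remove_spdk_ns call that continues below. Reduced to a sketch (nvmfpid and the body of _remove_spdk_ns are assumptions inferred from the trace):

    modprobe -v -r nvme-tcp                               # host-side module unload
    modprobe -v -r nvme-fabrics
    kill -9 "$nvmfpid"                                    # nvmf_tgt PID captured at start (assumed variable)
    iptables-save | grep -v SPDK_NVMF | iptables-restore  # drop only the tagged rules
    ip netns delete cvl_0_0_ns_spdk                       # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1                              # clear the initiator-side address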
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:37.305 10:48:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:37.305 10:48:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:39.219 10:48:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:39.220 00:08:39.220 real 0m12.939s 00:08:39.220 user 0m18.851s 00:08:39.220 sys 0m7.011s 00:08:39.220 10:48:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:39.220 10:48:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:39.220 ************************************ 00:08:39.220 END TEST nvmf_bdev_io_wait 00:08:39.220 ************************************ 00:08:39.220 10:48:59 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:39.220 10:48:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:39.220 10:48:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:39.220 10:48:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:39.480 ************************************ 00:08:39.480 START TEST nvmf_queue_depth 00:08:39.480 ************************************ 00:08:39.480 10:48:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:39.480 * Looking for test storage... 
00:08:39.480 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:39.480 10:48:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:39.480 10:48:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:08:39.480 10:48:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:39.481 10:48:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:39.481 10:48:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:39.481 10:48:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:39.481 10:48:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:39.481 10:48:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:08:39.481 10:48:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:08:39.481 10:48:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:08:39.481 10:48:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:08:39.481 10:48:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:08:39.481 10:48:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:08:39.481 10:48:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:08:39.481 10:48:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:39.481 10:48:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:08:39.481 10:48:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:08:39.481 10:48:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:39.481 10:48:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:39.481 10:48:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:08:39.481 10:48:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:08:39.481 10:48:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:39.481 10:48:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:08:39.481 10:48:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:08:39.481 10:48:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:08:39.481 10:48:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:08:39.481 10:48:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:39.481 10:48:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:08:39.481 10:48:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:08:39.481 10:48:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:39.481 10:48:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:39.481 10:48:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:08:39.481 10:48:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:39.481 10:48:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:39.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.481 --rc genhtml_branch_coverage=1 00:08:39.481 --rc genhtml_function_coverage=1 00:08:39.481 --rc genhtml_legend=1 00:08:39.481 --rc geninfo_all_blocks=1 00:08:39.481 --rc geninfo_unexecuted_blocks=1 00:08:39.481 00:08:39.481 ' 00:08:39.481 10:48:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:39.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.481 --rc genhtml_branch_coverage=1 00:08:39.481 --rc genhtml_function_coverage=1 00:08:39.481 --rc genhtml_legend=1 00:08:39.481 --rc geninfo_all_blocks=1 00:08:39.481 --rc geninfo_unexecuted_blocks=1 00:08:39.481 00:08:39.481 ' 00:08:39.481 10:48:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:39.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.481 --rc genhtml_branch_coverage=1 00:08:39.481 --rc genhtml_function_coverage=1 00:08:39.481 --rc genhtml_legend=1 00:08:39.481 --rc geninfo_all_blocks=1 00:08:39.481 --rc geninfo_unexecuted_blocks=1 00:08:39.481 00:08:39.481 ' 00:08:39.481 10:48:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:39.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.481 --rc genhtml_branch_coverage=1 00:08:39.481 --rc genhtml_function_coverage=1 00:08:39.481 --rc genhtml_legend=1 00:08:39.481 --rc geninfo_all_blocks=1 00:08:39.481 --rc geninfo_unexecuted_blocks=1 00:08:39.481 00:08:39.481 ' 00:08:39.481 10:48:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:39.481 10:48:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:08:39.481 10:48:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:39.481 10:48:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:39.481 10:48:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:39.481 10:48:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:39.481 10:48:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:39.481 10:48:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:39.481 10:48:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:39.481 10:48:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:39.481 10:48:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:39.481 10:48:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:39.481 10:48:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:39.481 10:48:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:39.481 10:48:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:39.481 10:48:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:39.481 10:48:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:39.481 10:48:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:39.481 10:48:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:39.481 10:48:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:08:39.481 10:48:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:39.481 10:48:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:39.481 10:48:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:39.481 10:48:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.481 10:48:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.481 10:48:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.481 10:48:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:39.481 10:48:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.481 10:48:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:08:39.481 10:48:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:39.481 10:48:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:39.481 10:48:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:39.481 10:48:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:39.481 10:48:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:39.481 10:48:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:39.481 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:39.481 10:48:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:39.481 10:48:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:39.481 10:48:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:39.481 10:48:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:39.481 10:48:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:08:39.481 10:48:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:39.481 10:48:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:39.481 10:48:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:08:39.481 10:48:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:39.481 10:48:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # prepare_net_devs 00:08:39.481 10:48:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@436 -- # local -g is_hw=no 00:08:39.481 10:48:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # remove_spdk_ns 00:08:39.481 10:48:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:39.481 10:48:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:39.482 10:48:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:39.742 10:48:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:08:39.742 10:48:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:08:39.742 10:48:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:08:39.742 10:48:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:47.882 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:47.882 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:08:47.882 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:47.882 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:47.882 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:47.882 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:47.882 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:47.882 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:08:47.882 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:47.882 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:08:47.882 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:08:47.882 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:08:47.882 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:08:47.882 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:08:47.882 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:08:47.882 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:47.882 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:47.882 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:47.882 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:47.882 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:47.882 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:47.882 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:47.882 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:47.882 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:47.882 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:47.882 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:47.882 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:47.882 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:47.882 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:47.882 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:47.882 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:47.882 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:47.882 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:47.882 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:47.882 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:47.882 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:47.882 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:47.882 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:47.882 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:47.882 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:47.882 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:47.882 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:47.882 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:47.882 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:47.882 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:47.882 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:47.882 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:47.882 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:47.882 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:47.882 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:47.882 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:47.882 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:47.882 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:47.882 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:47.882 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:47.882 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:47.882 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:47.883 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:47.883 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:47.883 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:47.883 Found net devices under 0000:31:00.0: cvl_0_0 00:08:47.883 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:47.883 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:47.883 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:47.883 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:47.883 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:47.883 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:47.883 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:47.883 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:47.883 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:47.883 Found net devices under 0000:31:00.1: cvl_0_1 00:08:47.883 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:47.883 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:08:47.883 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # is_hw=yes 00:08:47.883 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:08:47.883 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:08:47.883 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:08:47.883 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:47.883 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:47.883 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:47.883 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:47.883 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:47.883 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:47.883 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:47.883 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:47.883 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:47.883 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:47.883 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:47.883 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:47.883 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:47.883 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:47.883 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:47.883 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:47.883 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:47.883 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:47.883 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:47.883 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:47.883 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:47.883 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:47.883 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:47.883 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:47.883 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.608 ms 00:08:47.883 00:08:47.883 --- 10.0.0.2 ping statistics --- 00:08:47.883 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:47.883 rtt min/avg/max/mdev = 0.608/0.608/0.608/0.000 ms 00:08:47.883 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:47.883 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:47.883 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.273 ms 00:08:47.883 00:08:47.883 --- 10.0.0.1 ping statistics --- 00:08:47.883 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:47.883 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:08:47.883 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:47.883 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # return 0 00:08:47.883 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:08:47.883 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:47.883 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:08:47.883 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:08:47.883 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:47.883 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:08:47.883 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:08:47.883 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:47.883 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:08:47.883 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:47.883 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:47.883 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # nvmfpid=1669506 00:08:47.883 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # waitforlisten 1669506 00:08:47.883 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:47.883 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 1669506 ']' 00:08:47.883 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:47.883 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:47.883 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:47.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:47.883 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:47.883 10:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:47.883 [2024-10-09 10:49:07.005831] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 
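The nvmf_tgt instance starting here runs inside the cvl_0_0_ns_spdk network namespace that nvmf_tcp_init set up just above, so target and initiator traffic crosses a real TCP path between the two e810 ports instead of loopback. Condensed from the trace (commands verbatim, condensation a sketch):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # tagged SPDK_NVMF for later cleanup

  # launch the target pinned to core 1 (-m 0x2) inside the namespace
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &

The two pings above (10.0.0.2 from the root namespace, 10.0.0.1 from inside the namespace) verify that path before the target comes up.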
00:08:47.883 [2024-10-09 10:49:07.005899] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:47.883 [2024-10-09 10:49:07.149404] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:47.883 [2024-10-09 10:49:07.198257] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.883 [2024-10-09 10:49:07.224582] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:47.883 [2024-10-09 10:49:07.224626] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:47.883 [2024-10-09 10:49:07.224634] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:47.883 [2024-10-09 10:49:07.224642] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:47.883 [2024-10-09 10:49:07.224648] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:47.883 [2024-10-09 10:49:07.225441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:47.883 10:49:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:47.883 10:49:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:08:47.883 10:49:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:47.883 10:49:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:47.883 10:49:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:47.883 10:49:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:47.883 10:49:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:47.883 10:49:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.883 10:49:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:47.883 [2024-10-09 10:49:07.873875] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:47.883 10:49:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.883 10:49:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:47.883 10:49:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.883 10:49:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:48.144 Malloc0 00:08:48.144 10:49:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.144 10:49:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:48.144 10:49:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.144 10:49:07 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:48.144 10:49:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.144 10:49:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:48.144 10:49:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.144 10:49:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:48.144 10:49:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.144 10:49:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:48.144 10:49:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.144 10:49:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:48.144 [2024-10-09 10:49:07.934986] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:48.144 10:49:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.144 10:49:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1669782 00:08:48.144 10:49:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:48.145 10:49:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:48.145 10:49:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1669782 /var/tmp/bdevperf.sock 00:08:48.145 10:49:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 1669782 ']' 00:08:48.145 10:49:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:48.145 10:49:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:48.145 10:49:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:48.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:48.145 10:49:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:48.145 10:49:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:48.145 [2024-10-09 10:49:07.995335] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:08:48.145 [2024-10-09 10:49:07.995402] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1669782 ] 00:08:48.145 [2024-10-09 10:49:08.130037] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:08:48.405 [2024-10-09 10:49:08.163952] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.405 [2024-10-09 10:49:08.187794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.978 10:49:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:48.978 10:49:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:08:48.978 10:49:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:48.978 10:49:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.978 10:49:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:49.262 NVMe0n1 00:08:49.262 10:49:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.262 10:49:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:49.262 Running I/O for 10 seconds... 00:08:51.142 8192.00 IOPS, 32.00 MiB/s [2024-10-09T08:49:12.526Z] 9731.50 IOPS, 38.01 MiB/s [2024-10-09T08:49:13.466Z] 10247.00 IOPS, 40.03 MiB/s [2024-10-09T08:49:14.407Z] 10537.75 IOPS, 41.16 MiB/s [2024-10-09T08:49:15.346Z] 10696.00 IOPS, 41.78 MiB/s [2024-10-09T08:49:16.288Z] 10855.17 IOPS, 42.40 MiB/s [2024-10-09T08:49:17.329Z] 10973.00 IOPS, 42.86 MiB/s [2024-10-09T08:49:18.427Z] 11060.25 IOPS, 43.20 MiB/s [2024-10-09T08:49:19.375Z] 11134.44 IOPS, 43.49 MiB/s [2024-10-09T08:49:19.375Z] 11162.50 IOPS, 43.60 MiB/s 00:08:59.373 Latency(us) 00:08:59.373 [2024-10-09T08:49:19.375Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:59.373 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:59.373 Verification LBA range: start 0x0 length 0x4000 00:08:59.373 NVMe0n1 : 10.07 11184.45 43.69 0.00 0.00 91229.56 24742.96 84958.13 00:08:59.373 [2024-10-09T08:49:19.375Z] =================================================================================================================== 00:08:59.373 [2024-10-09T08:49:19.375Z] Total : 11184.45 43.69 0.00 0.00 91229.56 24742.96 84958.13 00:08:59.373 { 00:08:59.373 "results": [ 00:08:59.373 { 00:08:59.373 "job": "NVMe0n1", 00:08:59.373 "core_mask": "0x1", 00:08:59.373 "workload": "verify", 00:08:59.373 "status": "finished", 00:08:59.373 "verify_range": { 00:08:59.373 "start": 0, 00:08:59.373 "length": 16384 00:08:59.373 }, 00:08:59.373 "queue_depth": 1024, 00:08:59.373 "io_size": 4096, 00:08:59.373 "runtime": 10.071932, 00:08:59.373 "iops": 11184.448028441813, 00:08:59.373 "mibps": 43.68925011110083, 00:08:59.373 "io_failed": 0, 00:08:59.373 "io_timeout": 0, 00:08:59.373 "avg_latency_us": 91229.56226887704, 00:08:59.373 "min_latency_us": 24742.96024056131, 00:08:59.373 "max_latency_us": 84958.12896759105 00:08:59.373 } 00:08:59.373 ], 00:08:59.373 "core_count": 1 00:08:59.373 } 00:08:59.373 10:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1669782 00:08:59.373 10:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 1669782 ']' 00:08:59.373 10:49:19 
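As a sanity check on the summary above, the numbers are internally consistent: 11184.45 IOPS at 4096-byte I/O is 11184.45 * 4096 / 2^20 = 43.69 MiB/s, exactly the reported throughput, and by Little's Law a sustained queue depth of 1024 implies an average latency of 1024 / 11184.45 s = 91.6 ms, within half a percent of the reported 91,229 us average (the small gap is ramp-up at the start of the run, visible in the per-second IOPS climb from 8192 to ~11162).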
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 1669782 00:08:59.373 10:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:08:59.373 10:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:59.373 10:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1669782 00:08:59.373 10:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:59.373 10:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:59.373 10:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1669782' 00:08:59.373 killing process with pid 1669782 00:08:59.373 10:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 1669782 00:08:59.373 Received shutdown signal, test time was about 10.000000 seconds 00:08:59.373 00:08:59.373 Latency(us) 00:08:59.373 [2024-10-09T08:49:19.375Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:59.373 [2024-10-09T08:49:19.375Z] =================================================================================================================== 00:08:59.373 [2024-10-09T08:49:19.375Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:59.373 10:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 1669782 00:08:59.634 10:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:59.634 10:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:08:59.634 10:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@514 -- # nvmfcleanup 00:08:59.634 10:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:08:59.634 10:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:59.634 10:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:08:59.634 10:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:59.634 10:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:59.634 rmmod nvme_tcp 00:08:59.634 rmmod nvme_fabrics 00:08:59.634 rmmod nvme_keyring 00:08:59.634 10:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:59.634 10:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:08:59.634 10:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:08:59.634 10:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@515 -- # '[' -n 1669506 ']' 00:08:59.634 10:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # killprocess 1669506 00:08:59.634 10:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 1669506 ']' 00:08:59.634 10:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 1669506 00:08:59.634 10:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:08:59.634 10:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
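The teardown traced here and below (nvmftestfini) reduces to roughly this sketch, run after bdevperf (pid 1669782) and the target (pid 1669506) are killed:

  modprobe -v -r nvme-tcp          # also drags out nvme_fabrics and nvme_keyring, per the rmmod lines
  modprobe -v -r nvme-fabrics
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the rules the test tagged
  _remove_spdk_ns                  # presumably deletes cvl_0_0_ns_spdk; its body is not shown in this trace
  ip -4 addr flush cvl_0_1

Everything except the effect of _remove_spdk_ns appears verbatim in the trace; what it deletes is an inference from the namespace name.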
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:59.634 10:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1669506 00:08:59.634 10:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:59.634 10:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:59.634 10:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1669506' 00:08:59.634 killing process with pid 1669506 00:08:59.634 10:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 1669506 00:08:59.634 10:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 1669506 00:08:59.895 10:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:08:59.895 10:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:08:59.895 10:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:08:59.895 10:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:08:59.895 10:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:08:59.895 10:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-save 00:08:59.895 10:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-restore 00:08:59.895 10:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:59.895 10:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:59.895 10:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:59.895 10:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:59.895 10:49:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:01.808 10:49:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:01.808 00:09:01.808 real 0m22.480s 00:09:01.808 user 0m25.684s 00:09:01.808 sys 0m6.938s 00:09:01.808 10:49:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:01.808 10:49:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:01.808 ************************************ 00:09:01.808 END TEST nvmf_queue_depth 00:09:01.808 ************************************ 00:09:01.808 10:49:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:01.808 10:49:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:01.808 10:49:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:01.808 10:49:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:01.808 ************************************ 00:09:01.808 START TEST nvmf_target_multipath 00:09:01.808 ************************************ 00:09:01.808 10:49:21 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:02.069 * Looking for test storage... 00:09:02.069 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:02.069 10:49:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:02.069 10:49:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:09:02.069 10:49:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:02.069 10:49:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:02.069 10:49:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:02.069 10:49:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:02.069 10:49:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:02.069 10:49:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:09:02.069 10:49:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:09:02.069 10:49:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:09:02.069 10:49:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:09:02.069 10:49:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:09:02.069 10:49:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:09:02.069 10:49:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:09:02.069 10:49:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:02.069 10:49:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:09:02.069 10:49:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:09:02.069 10:49:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:02.069 10:49:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:02.070 10:49:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:09:02.070 10:49:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:09:02.070 10:49:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:02.070 10:49:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:09:02.070 10:49:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:09:02.070 10:49:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:09:02.070 10:49:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:09:02.070 10:49:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:02.070 10:49:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:09:02.070 10:49:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:09:02.070 10:49:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:02.070 10:49:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:02.070 10:49:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:09:02.070 10:49:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:02.070 10:49:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:02.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.070 --rc genhtml_branch_coverage=1 00:09:02.070 --rc genhtml_function_coverage=1 00:09:02.070 --rc genhtml_legend=1 00:09:02.070 --rc geninfo_all_blocks=1 00:09:02.070 --rc geninfo_unexecuted_blocks=1 00:09:02.070 00:09:02.070 ' 00:09:02.070 10:49:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:02.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.070 --rc genhtml_branch_coverage=1 00:09:02.070 --rc genhtml_function_coverage=1 00:09:02.070 --rc genhtml_legend=1 00:09:02.070 --rc geninfo_all_blocks=1 00:09:02.070 --rc geninfo_unexecuted_blocks=1 00:09:02.070 00:09:02.070 ' 00:09:02.070 10:49:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:02.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.070 --rc genhtml_branch_coverage=1 00:09:02.070 --rc genhtml_function_coverage=1 00:09:02.070 --rc genhtml_legend=1 00:09:02.070 --rc geninfo_all_blocks=1 00:09:02.070 --rc geninfo_unexecuted_blocks=1 00:09:02.070 00:09:02.070 ' 00:09:02.070 10:49:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:02.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.070 --rc genhtml_branch_coverage=1 00:09:02.070 --rc genhtml_function_coverage=1 00:09:02.070 --rc genhtml_legend=1 00:09:02.070 --rc geninfo_all_blocks=1 00:09:02.070 --rc geninfo_unexecuted_blocks=1 00:09:02.070 00:09:02.070 ' 00:09:02.070 10:49:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:02.070 10:49:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:02.070 10:49:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:02.070 10:49:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:02.070 10:49:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:02.070 10:49:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:02.070 10:49:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:02.070 10:49:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:02.070 10:49:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:02.070 10:49:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:02.070 10:49:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:02.070 10:49:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:02.070 10:49:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:02.070 10:49:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:02.070 10:49:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:02.070 10:49:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:02.070 10:49:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:02.070 10:49:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:02.070 10:49:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:02.070 10:49:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:09:02.070 10:49:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:02.070 10:49:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:02.070 10:49:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:02.070 10:49:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.070 10:49:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.070 10:49:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.070 10:49:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:02.070 10:49:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.070 10:49:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:09:02.070 10:49:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:02.070 10:49:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:02.070 10:49:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:02.070 10:49:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:02.070 10:49:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:02.070 10:49:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:02.070 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:02.070 10:49:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:02.070 10:49:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:02.070 10:49:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:02.070 10:49:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:02.070 10:49:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:02.070 10:49:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:02.070 10:49:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:02.070 10:49:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:02.070 10:49:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:02.070 10:49:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:02.070 10:49:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:02.070 10:49:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:02.070 10:49:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:02.070 10:49:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:02.070 10:49:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:02.070 10:49:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:02.070 10:49:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:09:02.070 10:49:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:09:02.070 10:49:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:09:02.070 10:49:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:10.197 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:10.197 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:09:10.197 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:10.197 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:10.197 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:10.197 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:10.197 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:10.197 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:09:10.197 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:10.197 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:09:10.197 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:09:10.197 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:09:10.197 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:09:10.197 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:09:10.197 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:09:10.197 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:10.197 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:10.197 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:10.197 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:10.197 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:10.197 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:10.197 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:10.197 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:10.197 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:10.198 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:10.198 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:10.198 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:10.198 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:10.198 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:10.198 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:10.198 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:10.198 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:10.198 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:10.198 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:10.198 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:10.198 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:10.198 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:10.198 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:10.198 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:10.198 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:10.198 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:10.198 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:10.198 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:10.198 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:10.198 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:10.198 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:10.198 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:10.198 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:10.198 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:10.198 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:10.198 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:10.198 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:10.198 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:10.198 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:10.198 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:10.198 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:10.198 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:10.198 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:10.198 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:10.198 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:10.198 Found net devices under 0000:31:00.0: cvl_0_0 00:09:10.198 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:10.198 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:10.198 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:10.198 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:10.198 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:10.198 10:49:29 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:10.198 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:10.198 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:10.198 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:10.198 Found net devices under 0000:31:00.1: cvl_0_1 00:09:10.198 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:10.198 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:09:10.198 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # is_hw=yes 00:09:10.198 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:09:10.198 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:09:10.198 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:09:10.198 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:10.198 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:10.198 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:10.198 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:10.198 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:10.198 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:10.198 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:10.198 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:10.198 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:10.198 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:10.198 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:10.198 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:10.198 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:10.198 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:10.198 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:10.198 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:10.198 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:10.198 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:09:10.198 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:10.198 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:10.198 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:10.198 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:10.198 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:10.198 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:10.198 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.703 ms 00:09:10.198 00:09:10.198 --- 10.0.0.2 ping statistics --- 00:09:10.198 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:10.198 rtt min/avg/max/mdev = 0.703/0.703/0.703/0.000 ms 00:09:10.198 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:10.198 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:10.198 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.330 ms 00:09:10.198 00:09:10.198 --- 10.0.0.1 ping statistics --- 00:09:10.198 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:10.198 rtt min/avg/max/mdev = 0.330/0.330/0.330/0.000 ms 00:09:10.198 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:10.198 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # return 0 00:09:10.198 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:10.198 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:10.198 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:10.198 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:10.198 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:10.198 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:10.198 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:10.198 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:09:10.198 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:09:10.198 only one NIC for nvmf test 00:09:10.198 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:09:10.198 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:10.198 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:10.198 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:10.198 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
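Annotation: the "[: : integer expression expected" message from test/nvmf/common.sh line 33, seen earlier in this test and again in the zcopy run below, is a shell bug captured by the trace: '[' '' -eq 1 ']' compares an empty expansion with -eq, and test(1) requires integer operands. A minimal sketch of the usual guard, with an illustrative variable name (the real variable behind common.sh line 33 is not visible in this expanded trace):

    # '[' "$FLAG" -eq 1 ']' fails with "integer expression expected" when FLAG is unset or empty.
    # Defaulting the expansion keeps the test well-formed; FLAG is a placeholder name here.
    if [ "${FLAG:-0}" -eq 1 ]; then
        echo "flag enabled"
    fi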
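The nvmftestinit sequence above builds the test topology entirely with iproute2: one E810 port (cvl_0_0) is moved into a private network namespace to serve as the NVMe/TCP target at 10.0.0.2, its cabled peer (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and an iptables rule admits traffic to port 4420. Condensed from the trace, assuming the same two physically looped interfaces:

    ip netns add cvl_0_0_ns_spdk                  # private namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
    ping -c 1 10.0.0.2                            # target reachable from initiator
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1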
00:09:10.198 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:10.198 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:10.198 rmmod nvme_tcp 00:09:10.198 rmmod nvme_fabrics 00:09:10.198 rmmod nvme_keyring 00:09:10.198 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:10.198 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:10.198 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:10.198 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:09:10.198 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:10.198 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:09:10.198 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:09:10.198 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:10.199 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:09:10.199 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:10.199 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:09:10.199 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:10.199 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:10.199 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:10.199 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:10.199 10:49:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:12.108 10:49:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:12.108 10:49:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:09:12.108 10:49:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:09:12.108 10:49:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:12.108 10:49:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:12.108 10:49:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:12.108 10:49:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:12.108 10:49:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:12.108 10:49:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:12.108 10:49:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:12.108 10:49:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:12.108 10:49:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:09:12.108 10:49:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:09:12.108 10:49:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:12.108 10:49:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:09:12.108 10:49:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:09:12.108 10:49:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:12.108 10:49:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:09:12.108 10:49:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:12.108 10:49:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:09:12.108 10:49:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:12.108 10:49:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:12.108 10:49:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:12.109 10:49:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:12.109 10:49:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:12.109 10:49:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:12.109 00:09:12.109 real 0m9.981s 00:09:12.109 user 0m2.204s 00:09:12.109 sys 0m5.690s 00:09:12.109 10:49:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:12.109 10:49:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:12.109 ************************************ 00:09:12.109 END TEST nvmf_target_multipath 00:09:12.109 ************************************ 00:09:12.109 10:49:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:12.109 10:49:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:12.109 10:49:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:12.109 10:49:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:12.109 ************************************ 00:09:12.109 START TEST nvmf_zcopy 00:09:12.109 ************************************ 00:09:12.109 10:49:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:12.109 * Looking for test storage... 
00:09:12.109 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:12.109 10:49:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:12.109 10:49:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:09:12.109 10:49:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:12.109 10:49:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:12.109 10:49:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:12.109 10:49:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:12.109 10:49:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:12.109 10:49:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:09:12.109 10:49:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:09:12.109 10:49:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:09:12.109 10:49:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:09:12.109 10:49:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:09:12.109 10:49:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:09:12.109 10:49:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:09:12.109 10:49:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:12.109 10:49:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:09:12.109 10:49:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:09:12.109 10:49:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:12.109 10:49:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:12.109 10:49:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:09:12.109 10:49:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:09:12.109 10:49:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:12.109 10:49:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:09:12.109 10:49:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:09:12.109 10:49:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:09:12.109 10:49:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:09:12.109 10:49:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:12.109 10:49:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:09:12.109 10:49:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:09:12.109 10:49:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:12.109 10:49:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:12.109 10:49:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:09:12.109 10:49:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:12.109 10:49:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:12.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.109 --rc genhtml_branch_coverage=1 00:09:12.109 --rc genhtml_function_coverage=1 00:09:12.109 --rc genhtml_legend=1 00:09:12.109 --rc geninfo_all_blocks=1 00:09:12.109 --rc geninfo_unexecuted_blocks=1 00:09:12.109 00:09:12.109 ' 00:09:12.109 10:49:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:12.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.109 --rc genhtml_branch_coverage=1 00:09:12.109 --rc genhtml_function_coverage=1 00:09:12.109 --rc genhtml_legend=1 00:09:12.109 --rc geninfo_all_blocks=1 00:09:12.109 --rc geninfo_unexecuted_blocks=1 00:09:12.109 00:09:12.109 ' 00:09:12.109 10:49:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:12.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.109 --rc genhtml_branch_coverage=1 00:09:12.109 --rc genhtml_function_coverage=1 00:09:12.109 --rc genhtml_legend=1 00:09:12.109 --rc geninfo_all_blocks=1 00:09:12.109 --rc geninfo_unexecuted_blocks=1 00:09:12.109 00:09:12.109 ' 00:09:12.109 10:49:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:12.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.109 --rc genhtml_branch_coverage=1 00:09:12.109 --rc genhtml_function_coverage=1 00:09:12.109 --rc genhtml_legend=1 00:09:12.109 --rc geninfo_all_blocks=1 00:09:12.109 --rc geninfo_unexecuted_blocks=1 00:09:12.109 00:09:12.109 ' 00:09:12.109 10:49:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:12.109 10:49:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:12.109 10:49:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:09:12.109 10:49:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:12.109 10:49:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:12.109 10:49:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:12.109 10:49:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:12.109 10:49:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:12.109 10:49:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:12.109 10:49:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:12.109 10:49:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:12.109 10:49:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:12.109 10:49:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:12.109 10:49:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:12.109 10:49:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:12.109 10:49:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:12.109 10:49:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:12.109 10:49:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:12.109 10:49:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:12.109 10:49:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:09:12.109 10:49:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:12.109 10:49:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:12.109 10:49:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:12.109 10:49:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.109 10:49:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.109 10:49:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.109 10:49:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:12.109 10:49:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.109 10:49:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:09:12.109 10:49:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:12.109 10:49:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:12.109 10:49:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:12.109 10:49:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:12.109 10:49:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:12.110 10:49:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:12.110 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:12.110 10:49:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:12.110 10:49:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:12.110 10:49:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:12.110 10:49:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:12.110 10:49:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:12.110 10:49:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:09:12.110 10:49:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:12.110 10:49:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:12.110 10:49:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:12.110 10:49:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:12.110 10:49:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:12.110 10:49:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:12.110 10:49:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:09:12.110 10:49:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:09:12.110 10:49:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:09:12.110 10:49:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:20.245 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:20.245 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:09:20.245 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:20.245 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:20.245 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:20.245 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:20.245 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:20.245 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:09:20.245 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:20.245 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:09:20.245 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:09:20.245 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:09:20.245 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:09:20.245 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:09:20.245 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:09:20.245 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:20.245 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:20.245 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:20.245 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:20.245 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:20.245 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:20.245 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:20.245 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:20.245 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:20.245 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:20.245 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:20.245 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:20.245 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:20.245 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:20.245 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:20.245 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:20.245 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:20.245 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:20.245 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:20.245 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:20.245 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:20.245 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:20.245 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:20.245 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:20.245 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:20.245 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:20.245 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:20.245 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:20.245 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:20.245 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:20.245 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:20.245 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:20.245 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:20.245 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:20.245 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:20.245 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:20.245 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:20.245 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:20.245 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:20.245 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:20.245 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:20.245 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:20.245 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:20.245 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:20.245 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:20.245 Found net devices under 0000:31:00.0: cvl_0_0 00:09:20.245 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:20.245 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:20.245 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:20.245 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:20.245 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:20.245 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:20.245 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:20.245 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:20.245 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:20.245 Found net devices under 0000:31:00.1: cvl_0_1 00:09:20.245 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:20.245 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:09:20.245 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # is_hw=yes 00:09:20.245 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:09:20.245 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:09:20.245 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:09:20.245 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:20.245 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:20.245 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:20.245 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:20.245 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:20.245 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:20.245 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:20.245 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:20.245 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:09:20.245 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:20.245 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:20.245 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:20.245 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:20.245 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:20.245 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:20.245 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:20.245 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:20.245 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:20.245 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:20.245 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:20.245 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:20.245 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:20.245 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:20.245 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:20.245 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.514 ms 00:09:20.245 00:09:20.245 --- 10.0.0.2 ping statistics --- 00:09:20.245 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:20.245 rtt min/avg/max/mdev = 0.514/0.514/0.514/0.000 ms 00:09:20.245 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:20.245 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:20.245 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.233 ms 00:09:20.245 00:09:20.245 --- 10.0.0.1 ping statistics --- 00:09:20.245 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:20.245 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:09:20.245 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:20.246 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # return 0 00:09:20.246 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:20.246 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:20.246 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:20.246 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:20.246 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:20.246 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:20.246 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:20.246 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:20.246 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:20.246 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:20.246 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:20.246 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # nvmfpid=1680707 00:09:20.246 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # waitforlisten 1680707 00:09:20.246 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:20.246 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 1680707 ']' 00:09:20.246 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:20.246 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:20.246 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:20.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:20.246 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:20.246 10:49:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:20.246 [2024-10-09 10:49:39.754496] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:09:20.246 [2024-10-09 10:49:39.754550] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:20.246 [2024-10-09 10:49:39.892562] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:09:20.246 [2024-10-09 10:49:39.941610] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:20.246 [2024-10-09 10:49:39.967475] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:20.246 [2024-10-09 10:49:39.967521] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:20.246 [2024-10-09 10:49:39.967530] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:20.246 [2024-10-09 10:49:39.967537] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:20.246 [2024-10-09 10:49:39.967543] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:20.246 [2024-10-09 10:49:39.968300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:20.817 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:20.817 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:09:20.817 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:20.817 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:20.817 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:20.817 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:20.817 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:20.817 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:20.817 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.817 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:20.817 [2024-10-09 10:49:40.604149] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:20.817 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.817 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:20.817 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.817 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:20.817 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.817 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:20.817 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.817 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:20.817 [2024-10-09 10:49:40.628393] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:20.817 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.817 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:20.817 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.817 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:20.817 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.817 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:20.817 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.817 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:20.817 malloc0 00:09:20.817 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.817 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:20.817 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.817 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:20.817 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.817 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:20.817 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:20.817 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:09:20.817 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:09:20.817 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:09:20.817 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:09:20.817 { 00:09:20.817 "params": { 00:09:20.817 "name": "Nvme$subsystem", 00:09:20.817 "trtype": "$TEST_TRANSPORT", 00:09:20.817 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:20.817 "adrfam": "ipv4", 00:09:20.817 "trsvcid": "$NVMF_PORT", 00:09:20.817 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:20.817 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:20.817 "hdgst": ${hdgst:-false}, 00:09:20.817 "ddgst": ${ddgst:-false} 00:09:20.817 }, 00:09:20.817 "method": "bdev_nvme_attach_controller" 00:09:20.817 } 00:09:20.817 EOF 00:09:20.817 )") 00:09:20.817 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:09:20.817 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 
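The zcopy target bring-up above is all JSON-RPC: rpc_cmd is the harness wrapper around scripts/rpc.py (the path bound to rpc_py earlier in this log), the TCP transport is created with zero-copy enabled, and a 32 MiB malloc bdev with 4096-byte blocks becomes namespace 1 of cnode1. The same sequence as direct rpc.py calls, a sketch with flags copied from the trace:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy              # TCP transport, zero-copy on
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_malloc_create 32 4096 -b malloc0                     # 32 MiB bdev, 4 KiB blocks
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

Note the RPC socket is a Unix socket under /var/tmp, which is why the harness reaches the target without entering its network namespace.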
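bdevperf takes its device layout from a JSON config rather than connect flags, so gen_nvmf_target_json expands the heredoc above into a bdev_nvme_attach_controller entry (Nvme1, tcp, 10.0.0.2:4420, subnqn cnode1, digests off; the resolved document is printed just below) and hands it to bdevperf on an anonymous file descriptor. A sketch of the equivalent invocation, assuming test/nvmf/common.sh is sourced so the generator is in scope:

    # -t 10: run 10 s; -q 128: queue depth; -w verify: read-back verification; -o 8192: 8 KiB I/Os
    # <(...) delivers the config on /dev/fd/NN, matching the --json /dev/fd/62 seen in the trace
    ./build/examples/bdevperf --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192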
00:09:20.817 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=,
00:09:20.817 10:49:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{
00:09:20.817 "params": {
00:09:20.817 "name": "Nvme1",
00:09:20.817 "trtype": "tcp",
00:09:20.817 "traddr": "10.0.0.2",
00:09:20.817 "adrfam": "ipv4",
00:09:20.817 "trsvcid": "4420",
00:09:20.817 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:09:20.817 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:09:20.817 "hdgst": false,
00:09:20.817 "ddgst": false
00:09:20.817 },
00:09:20.817 "method": "bdev_nvme_attach_controller"
00:09:20.817 }'
00:09:20.817 [2024-10-09 10:49:40.729991] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization...
00:09:20.817 [2024-10-09 10:49:40.730053] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1680968 ]
00:09:21.077 [2024-10-09 10:49:40.864180] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation.
00:09:21.077 [2024-10-09 10:49:40.896155] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:21.077 [2024-10-09 10:49:40.919417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:21.337 Running I/O for 10 seconds...
00:09:23.227 6640.00 IOPS, 51.88 MiB/s [2024-10-09T08:49:44.168Z] 6702.50 IOPS, 52.36 MiB/s [2024-10-09T08:49:45.549Z] 6707.00 IOPS, 52.40 MiB/s [2024-10-09T08:49:46.492Z] 6753.00 IOPS, 52.76 MiB/s [2024-10-09T08:49:47.431Z] 7348.40 IOPS, 57.41 MiB/s [2024-10-09T08:49:48.370Z] 7747.17 IOPS, 60.52 MiB/s [2024-10-09T08:49:49.311Z] 8029.00 IOPS, 62.73 MiB/s [2024-10-09T08:49:50.251Z] 8242.50 IOPS, 64.39 MiB/s [2024-10-09T08:49:51.192Z] 8406.78 IOPS, 65.68 MiB/s [2024-10-09T08:49:51.192Z] 8538.60 IOPS, 66.71 MiB/s
00:09:31.190 Latency(us)
00:09:31.190 [2024-10-09T08:49:51.192Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:31.190 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:09:31.190 Verification LBA range: start 0x0 length 0x1000
00:09:31.190 Nvme1n1 : 10.01 8541.88 66.73 0.00 0.00 14932.60 2285.44 27808.46
00:09:31.190 [2024-10-09T08:49:51.192Z] ===================================================================================================================
00:09:31.190 [2024-10-09T08:49:51.192Z] Total : 8541.88 66.73 0.00 0.00 14932.60 2285.44 27808.46
00:09:31.450 10:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1682985
00:09:31.450 10:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:09:31.450 10:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:31.450 10:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:09:31.450 10:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:09:31.450 10:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # config=()
00:09:31.450 10:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config
00:09:31.450 10:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}"
00:09:31.450 10:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF
00:09:31.450 {
00:09:31.450 "params": {
00:09:31.450 "name": "Nvme$subsystem",
00:09:31.450 "trtype": "$TEST_TRANSPORT",
00:09:31.450 "traddr": "$NVMF_FIRST_TARGET_IP",
00:09:31.450 "adrfam": "ipv4",
00:09:31.450 "trsvcid": "$NVMF_PORT",
00:09:31.450 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:09:31.450 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:09:31.450 "hdgst": ${hdgst:-false},
00:09:31.450 "ddgst": ${ddgst:-false}
00:09:31.450 },
00:09:31.450 "method": "bdev_nvme_attach_controller"
00:09:31.450 }
00:09:31.450 EOF
00:09:31.450 )")
00:09:31.450 10:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # cat
00:09:31.450 [2024-10-09 10:49:51.272048] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-10-09 10:49:51.272080] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:31.450 10:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # jq .
00:09:31.450 10:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=,
00:09:31.450 10:49:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{
00:09:31.450 "params": {
00:09:31.450 "name": "Nvme1",
00:09:31.450 "trtype": "tcp",
00:09:31.450 "traddr": "10.0.0.2",
00:09:31.450 "adrfam": "ipv4",
00:09:31.450 "trsvcid": "4420",
00:09:31.450 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:09:31.450 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:09:31.450 "hdgst": false,
00:09:31.450 "ddgst": false
00:09:31.450 },
00:09:31.450 "method": "bdev_nvme_attach_controller"
00:09:31.450 }'
00:09:31.450 [2024-10-09 10:49:51.284017] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-10-09 10:49:51.284025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:31.450 [2024-10-09 10:49:51.296017] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-10-09 10:49:51.296026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:31.450 [2024-10-09 10:49:51.308019] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-10-09 10:49:51.308027] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:31.450 [2024-10-09 10:49:51.314395] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization...
00:09:31.450 [2024-10-09 10:49:51.314449] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1682985 ] 00:09:31.450 [2024-10-09 10:49:51.320023] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.450 [2024-10-09 10:49:51.320032] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.450 [2024-10-09 10:49:51.332027] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.450 [2024-10-09 10:49:51.332035] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.450 [2024-10-09 10:49:51.344031] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.450 [2024-10-09 10:49:51.344040] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.450 [2024-10-09 10:49:51.356034] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.450 [2024-10-09 10:49:51.356042] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.450 [2024-10-09 10:49:51.368037] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.450 [2024-10-09 10:49:51.368045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.450 [2024-10-09 10:49:51.380039] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.450 [2024-10-09 10:49:51.380048] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.450 [2024-10-09 10:49:51.392043] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.450 [2024-10-09 10:49:51.392052] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.450 [2024-10-09 10:49:51.404046] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.450 [2024-10-09 10:49:51.404055] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.450 [2024-10-09 10:49:51.416048] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.450 [2024-10-09 10:49:51.416061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.450 [2024-10-09 10:49:51.428051] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.450 [2024-10-09 10:49:51.428060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.450 [2024-10-09 10:49:51.440054] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.451 [2024-10-09 10:49:51.440062] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.451 [2024-10-09 10:49:51.444372] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
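From this point on the log is dominated by one repeating pair of target-side errors: spdk_nvmf_subsystem_add_ns_ext rejects NSID 1 because the namespace attached during setup still owns it, and nvmf_rpc_ns_paused reports the failed RPC (judging by that function's name, the call pauses the subsystem before attempting the add). That is the point of this phase: while the second bdevperf instance (pid 1682985, 5 seconds of randrw at queue depth 128, 50/50 mix) drives zero-copy I/O, the test keeps re-issuing the namespace-add RPC so each attempt forces a pause/resume cycle on cnode1 under load. A plausible reconstruction of that loop, not taken verbatim from zcopy.sh ($rootdir standing in for the SPDK checkout root):

# Re-issue a pause/resume-inducing RPC for as long as the background I/O runs (sketch).
"$rootdir/build/examples/bdevperf" --json <(gen_nvmf_target_json) -t 5 -q 128 -w randrw -M 50 -o 8192 &
perfpid=$!
while kill -0 "$perfpid" 2> /dev/null; do
	# NSID 1 is taken, so every call fails with the two errors repeated below;
	# the pause/resume it triggers on the subsystem is what is being exercised.
	rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
done
wait "$perfpid"

The interleaved rate samples stay self-consistent with the 8192-byte I/O size (MiB/s = IOPS * 8192 / 2^20):

awk 'BEGIN { printf "%.2f\n", 8541.88 * 8192 / 1048576 }'    # 66.73, the verify run's Total above
awk 'BEGIN { printf "%.2f\n", 18978.00 * 8192 / 1048576 }'   # 148.27, the first randrw sample further below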
00:09:31.710 [2024-10-09 10:49:51.452058] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.710 [2024-10-09 10:49:51.452067] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.710 [2024-10-09 10:49:51.464062] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.710 [2024-10-09 10:49:51.464070] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.710 [2024-10-09 10:49:51.476065] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.710 [2024-10-09 10:49:51.476073] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.710 [2024-10-09 10:49:51.477422] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.710 [2024-10-09 10:49:51.488067] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.710 [2024-10-09 10:49:51.488076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.710 [2024-10-09 10:49:51.494714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.710 [2024-10-09 10:49:51.500069] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.710 [2024-10-09 10:49:51.500078] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.710 [2024-10-09 10:49:51.512077] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.710 [2024-10-09 10:49:51.512089] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.710 [2024-10-09 10:49:51.524077] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.710 [2024-10-09 10:49:51.524090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.710 [2024-10-09 10:49:51.536076] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.710 [2024-10-09 10:49:51.536085] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.710 [2024-10-09 10:49:51.548080] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.710 [2024-10-09 10:49:51.548090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.710 [2024-10-09 10:49:51.560081] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.710 [2024-10-09 10:49:51.560089] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.710 [2024-10-09 10:49:51.572096] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.710 [2024-10-09 10:49:51.572114] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.710 [2024-10-09 10:49:51.584090] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.710 [2024-10-09 10:49:51.584100] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.710 [2024-10-09 10:49:51.596094] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.710 [2024-10-09 10:49:51.596105] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.710 [2024-10-09 10:49:51.608092] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.710 [2024-10-09 10:49:51.608100] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.710 [2024-10-09 10:49:51.620093] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.710 [2024-10-09 10:49:51.620105] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.710 [2024-10-09 10:49:51.632100] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.710 [2024-10-09 10:49:51.632110] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.710 [2024-10-09 10:49:51.644102] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.710 [2024-10-09 10:49:51.644112] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.710 [2024-10-09 10:49:51.656104] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.710 [2024-10-09 10:49:51.656112] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.710 [2024-10-09 10:49:51.668107] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.710 [2024-10-09 10:49:51.668115] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.711 [2024-10-09 10:49:51.680109] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.711 [2024-10-09 10:49:51.680117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.711 [2024-10-09 10:49:51.692116] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.711 [2024-10-09 10:49:51.692126] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.711 [2024-10-09 10:49:51.704117] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.711 [2024-10-09 10:49:51.704125] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.971 [2024-10-09 10:49:51.716118] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.971 [2024-10-09 10:49:51.716127] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.971 [2024-10-09 10:49:51.728122] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.971 [2024-10-09 10:49:51.728132] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.971 [2024-10-09 10:49:51.740126] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.971 [2024-10-09 10:49:51.740134] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.971 [2024-10-09 10:49:51.752128] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.971 [2024-10-09 10:49:51.752136] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.971 [2024-10-09 10:49:51.764130] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.971 [2024-10-09 10:49:51.764138] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.971 [2024-10-09 10:49:51.776133] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.971 [2024-10-09 10:49:51.776141] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.971 [2024-10-09 10:49:51.788147] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.971 [2024-10-09 10:49:51.788163] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.971 Running I/O for 5 seconds... 00:09:31.971 [2024-10-09 10:49:51.800139] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.971 [2024-10-09 10:49:51.800147] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.971 [2024-10-09 10:49:51.815397] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.971 [2024-10-09 10:49:51.815414] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.971 [2024-10-09 10:49:51.829024] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.971 [2024-10-09 10:49:51.829041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.971 [2024-10-09 10:49:51.842873] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.971 [2024-10-09 10:49:51.842890] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.971 [2024-10-09 10:49:51.856256] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.971 [2024-10-09 10:49:51.856277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.971 [2024-10-09 10:49:51.869110] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.971 [2024-10-09 10:49:51.869127] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.971 [2024-10-09 10:49:51.881416] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.971 [2024-10-09 10:49:51.881432] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.971 [2024-10-09 10:49:51.894767] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.971 [2024-10-09 10:49:51.894783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.971 [2024-10-09 10:49:51.907850] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.971 [2024-10-09 10:49:51.907867] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.971 [2024-10-09 10:49:51.920398] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.971 [2024-10-09 10:49:51.920414] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.971 [2024-10-09 10:49:51.933223] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.971 [2024-10-09 10:49:51.933239] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.971 [2024-10-09 10:49:51.946235] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.971 [2024-10-09 10:49:51.946251] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.971 [2024-10-09 10:49:51.959505] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.971 [2024-10-09 10:49:51.959521] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.230 [2024-10-09 10:49:51.973216] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.230 
[2024-10-09 10:49:51.973233] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.230 [2024-10-09 10:49:51.986252] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.230 [2024-10-09 10:49:51.986267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.230 [2024-10-09 10:49:51.999750] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.230 [2024-10-09 10:49:51.999766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.230 [2024-10-09 10:49:52.012567] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.230 [2024-10-09 10:49:52.012583] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.230 [2024-10-09 10:49:52.025342] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.230 [2024-10-09 10:49:52.025358] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.230 [2024-10-09 10:49:52.039016] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.230 [2024-10-09 10:49:52.039032] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.230 [2024-10-09 10:49:52.052235] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.231 [2024-10-09 10:49:52.052251] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.231 [2024-10-09 10:49:52.065240] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.231 [2024-10-09 10:49:52.065256] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.231 [2024-10-09 10:49:52.077931] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.231 [2024-10-09 10:49:52.077947] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.231 [2024-10-09 10:49:52.090413] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.231 [2024-10-09 10:49:52.090429] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.231 [2024-10-09 10:49:52.103073] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.231 [2024-10-09 10:49:52.103094] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.231 [2024-10-09 10:49:52.115977] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.231 [2024-10-09 10:49:52.115992] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.231 [2024-10-09 10:49:52.129280] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.231 [2024-10-09 10:49:52.129296] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.231 [2024-10-09 10:49:52.142521] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.231 [2024-10-09 10:49:52.142537] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.231 [2024-10-09 10:49:52.155357] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.231 [2024-10-09 10:49:52.155372] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.231 [2024-10-09 10:49:52.167767] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.231 [2024-10-09 10:49:52.167783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.231 [2024-10-09 10:49:52.180316] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.231 [2024-10-09 10:49:52.180331] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.231 [2024-10-09 10:49:52.192672] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.231 [2024-10-09 10:49:52.192687] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.231 [2024-10-09 10:49:52.205963] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.231 [2024-10-09 10:49:52.205979] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.231 [2024-10-09 10:49:52.219601] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.231 [2024-10-09 10:49:52.219616] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.231 [2024-10-09 10:49:52.232054] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.231 [2024-10-09 10:49:52.232069] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.490 [2024-10-09 10:49:52.244588] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.490 [2024-10-09 10:49:52.244604] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.490 [2024-10-09 10:49:52.257083] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.490 [2024-10-09 10:49:52.257099] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.490 [2024-10-09 10:49:52.270460] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.490 [2024-10-09 10:49:52.270479] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.490 [2024-10-09 10:49:52.282997] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.490 [2024-10-09 10:49:52.283012] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.490 [2024-10-09 10:49:52.296047] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.490 [2024-10-09 10:49:52.296062] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.490 [2024-10-09 10:49:52.309131] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.490 [2024-10-09 10:49:52.309146] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.490 [2024-10-09 10:49:52.322486] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.490 [2024-10-09 10:49:52.322501] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.491 [2024-10-09 10:49:52.334579] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.491 [2024-10-09 10:49:52.334594] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.491 [2024-10-09 10:49:52.347586] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.491 [2024-10-09 10:49:52.347602] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.491 [2024-10-09 10:49:52.360038] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.491 [2024-10-09 10:49:52.360053] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.491 [2024-10-09 10:49:52.373506] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.491 [2024-10-09 10:49:52.373521] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.491 [2024-10-09 10:49:52.386424] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.491 [2024-10-09 10:49:52.386439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.491 [2024-10-09 10:49:52.399237] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.491 [2024-10-09 10:49:52.399253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.491 [2024-10-09 10:49:52.412446] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.491 [2024-10-09 10:49:52.412461] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.491 [2024-10-09 10:49:52.425876] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.491 [2024-10-09 10:49:52.425891] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.491 [2024-10-09 10:49:52.438645] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.491 [2024-10-09 10:49:52.438660] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.491 [2024-10-09 10:49:52.451773] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.491 [2024-10-09 10:49:52.451788] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.491 [2024-10-09 10:49:52.464324] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.491 [2024-10-09 10:49:52.464339] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.491 [2024-10-09 10:49:52.477981] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.491 [2024-10-09 10:49:52.477996] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.491 [2024-10-09 10:49:52.491397] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.491 [2024-10-09 10:49:52.491412] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.750 [2024-10-09 10:49:52.504433] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.750 [2024-10-09 10:49:52.504448] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.750 [2024-10-09 10:49:52.517402] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.750 [2024-10-09 10:49:52.517418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.750 [2024-10-09 10:49:52.530813] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.750 [2024-10-09 10:49:52.530829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.750 [2024-10-09 10:49:52.543604] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.750 [2024-10-09 10:49:52.543620] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.750 [2024-10-09 10:49:52.556966] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.750 [2024-10-09 10:49:52.556982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.750 [2024-10-09 10:49:52.570204] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.750 [2024-10-09 10:49:52.570219] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.750 [2024-10-09 10:49:52.582939] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.750 [2024-10-09 10:49:52.582955] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.750 [2024-10-09 10:49:52.595744] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.750 [2024-10-09 10:49:52.595760] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.750 [2024-10-09 10:49:52.608899] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.750 [2024-10-09 10:49:52.608915] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.750 [2024-10-09 10:49:52.621586] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.750 [2024-10-09 10:49:52.621601] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.750 [2024-10-09 10:49:52.634373] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.750 [2024-10-09 10:49:52.634388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.750 [2024-10-09 10:49:52.647770] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.750 [2024-10-09 10:49:52.647785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.751 [2024-10-09 10:49:52.660445] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.751 [2024-10-09 10:49:52.660460] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.751 [2024-10-09 10:49:52.672943] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.751 [2024-10-09 10:49:52.672958] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.751 [2024-10-09 10:49:52.685632] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.751 [2024-10-09 10:49:52.685647] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.751 [2024-10-09 10:49:52.698029] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.751 [2024-10-09 10:49:52.698044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.751 [2024-10-09 10:49:52.711720] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.751 [2024-10-09 10:49:52.711735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.751 [2024-10-09 10:49:52.725142] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.751 [2024-10-09 10:49:52.725157] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.751 [2024-10-09 10:49:52.738117] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.751 [2024-10-09 10:49:52.738133] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.751 [2024-10-09 10:49:52.751738] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.751 [2024-10-09 10:49:52.751753] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.010 [2024-10-09 10:49:52.765492] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.010 [2024-10-09 10:49:52.765507] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.010 [2024-10-09 10:49:52.778615] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.010 [2024-10-09 10:49:52.778632] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.010 [2024-10-09 10:49:52.791448] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.010 [2024-10-09 10:49:52.791463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.010 18978.00 IOPS, 148.27 MiB/s [2024-10-09T08:49:53.012Z] [2024-10-09 10:49:52.803471] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.010 [2024-10-09 10:49:52.803487] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.010 [2024-10-09 10:49:52.817256] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.010 [2024-10-09 10:49:52.817271] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.010 [2024-10-09 10:49:52.830590] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.010 [2024-10-09 10:49:52.830605] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.010 [2024-10-09 10:49:52.843839] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.010 [2024-10-09 10:49:52.843855] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.010 [2024-10-09 10:49:52.856635] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.010 [2024-10-09 10:49:52.856651] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.010 [2024-10-09 10:49:52.869767] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.010 [2024-10-09 10:49:52.869782] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.010 [2024-10-09 10:49:52.883211] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.010 [2024-10-09 10:49:52.883227] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.010 [2024-10-09 10:49:52.896839] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.010 [2024-10-09 10:49:52.896855] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.010 [2024-10-09 10:49:52.910344] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.010 [2024-10-09 10:49:52.910359] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.010 [2024-10-09 
10:49:52.922863] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.010 [2024-10-09 10:49:52.922878] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.010 [2024-10-09 10:49:52.936134] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.010 [2024-10-09 10:49:52.936149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.010 [2024-10-09 10:49:52.949104] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.011 [2024-10-09 10:49:52.949120] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.011 [2024-10-09 10:49:52.961739] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.011 [2024-10-09 10:49:52.961755] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.011 [2024-10-09 10:49:52.975430] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.011 [2024-10-09 10:49:52.975445] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.011 [2024-10-09 10:49:52.988713] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.011 [2024-10-09 10:49:52.988728] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.011 [2024-10-09 10:49:53.002141] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.011 [2024-10-09 10:49:53.002157] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.270 [2024-10-09 10:49:53.015399] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.270 [2024-10-09 10:49:53.015415] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.270 [2024-10-09 10:49:53.028283] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.270 [2024-10-09 10:49:53.028298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.270 [2024-10-09 10:49:53.040883] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.270 [2024-10-09 10:49:53.040898] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.270 [2024-10-09 10:49:53.054336] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.270 [2024-10-09 10:49:53.054351] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.270 [2024-10-09 10:49:53.067245] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.270 [2024-10-09 10:49:53.067261] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.270 [2024-10-09 10:49:53.080050] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.270 [2024-10-09 10:49:53.080071] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.270 [2024-10-09 10:49:53.093237] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.270 [2024-10-09 10:49:53.093253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.270 [2024-10-09 10:49:53.106010] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.270 [2024-10-09 10:49:53.106027] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.270 [2024-10-09 10:49:53.119411] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.270 [2024-10-09 10:49:53.119427] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.270 [2024-10-09 10:49:53.132135] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.270 [2024-10-09 10:49:53.132151] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.270 [2024-10-09 10:49:53.144845] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.270 [2024-10-09 10:49:53.144860] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.270 [2024-10-09 10:49:53.157469] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.270 [2024-10-09 10:49:53.157485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.270 [2024-10-09 10:49:53.170080] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.270 [2024-10-09 10:49:53.170096] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.270 [2024-10-09 10:49:53.183168] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.270 [2024-10-09 10:49:53.183184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.270 [2024-10-09 10:49:53.195995] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.270 [2024-10-09 10:49:53.196010] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.270 [2024-10-09 10:49:53.209327] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.270 [2024-10-09 10:49:53.209342] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.270 [2024-10-09 10:49:53.222544] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.271 [2024-10-09 10:49:53.222559] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.271 [2024-10-09 10:49:53.235091] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.271 [2024-10-09 10:49:53.235107] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.271 [2024-10-09 10:49:53.248431] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.271 [2024-10-09 10:49:53.248446] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.271 [2024-10-09 10:49:53.261632] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.271 [2024-10-09 10:49:53.261648] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.531 [2024-10-09 10:49:53.274860] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.531 [2024-10-09 10:49:53.274877] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.531 [2024-10-09 10:49:53.287285] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.531 [2024-10-09 10:49:53.287302] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.531 [2024-10-09 10:49:53.299879] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.531 [2024-10-09 10:49:53.299895] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.531 [2024-10-09 10:49:53.313213] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.531 [2024-10-09 10:49:53.313229] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.531 [2024-10-09 10:49:53.325756] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.531 [2024-10-09 10:49:53.325776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.531 [2024-10-09 10:49:53.338424] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.531 [2024-10-09 10:49:53.338440] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.531 [2024-10-09 10:49:53.351384] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.531 [2024-10-09 10:49:53.351400] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.531 [2024-10-09 10:49:53.364211] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.531 [2024-10-09 10:49:53.364227] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.531 [2024-10-09 10:49:53.377526] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.531 [2024-10-09 10:49:53.377546] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.531 [2024-10-09 10:49:53.390740] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.531 [2024-10-09 10:49:53.390756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.531 [2024-10-09 10:49:53.404168] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.531 [2024-10-09 10:49:53.404185] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.531 [2024-10-09 10:49:53.417663] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.531 [2024-10-09 10:49:53.417678] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.531 [2024-10-09 10:49:53.431350] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.531 [2024-10-09 10:49:53.431365] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.531 [2024-10-09 10:49:53.444739] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.531 [2024-10-09 10:49:53.444755] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.531 [2024-10-09 10:49:53.458107] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.531 [2024-10-09 10:49:53.458123] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.531 [2024-10-09 10:49:53.470995] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.531 [2024-10-09 10:49:53.471011] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.531 [2024-10-09 10:49:53.484280] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.531 [2024-10-09 10:49:53.484296] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.531 [2024-10-09 10:49:53.497549] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.531 [2024-10-09 10:49:53.497564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.531 [2024-10-09 10:49:53.511023] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.531 [2024-10-09 10:49:53.511039] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.531 [2024-10-09 10:49:53.524335] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.531 [2024-10-09 10:49:53.524351] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.792 [2024-10-09 10:49:53.538224] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.792 [2024-10-09 10:49:53.538243] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.792 [2024-10-09 10:49:53.551728] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.792 [2024-10-09 10:49:53.551744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.792 [2024-10-09 10:49:53.565320] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.792 [2024-10-09 10:49:53.565336] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.792 [2024-10-09 10:49:53.578032] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.792 [2024-10-09 10:49:53.578052] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.792 [2024-10-09 10:49:53.591208] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.792 [2024-10-09 10:49:53.591224] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.792 [2024-10-09 10:49:53.604553] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.792 [2024-10-09 10:49:53.604569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.792 [2024-10-09 10:49:53.616900] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.792 [2024-10-09 10:49:53.616917] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.792 [2024-10-09 10:49:53.629227] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.792 [2024-10-09 10:49:53.629242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.792 [2024-10-09 10:49:53.641784] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.792 [2024-10-09 10:49:53.641800] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.792 [2024-10-09 10:49:53.654678] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.792 [2024-10-09 10:49:53.654695] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.792 [2024-10-09 10:49:53.667832] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.792 [2024-10-09 10:49:53.667848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.792 [2024-10-09 10:49:53.680870] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.792 [2024-10-09 10:49:53.680885] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.792 [2024-10-09 10:49:53.693674] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.792 [2024-10-09 10:49:53.693690] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.792 [2024-10-09 10:49:53.707182] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.792 [2024-10-09 10:49:53.707198] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.792 [2024-10-09 10:49:53.720721] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.792 [2024-10-09 10:49:53.720737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.792 [2024-10-09 10:49:53.733676] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.792 [2024-10-09 10:49:53.733692] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.792 [2024-10-09 10:49:53.746588] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.792 [2024-10-09 10:49:53.746603] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.792 [2024-10-09 10:49:53.759429] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.792 [2024-10-09 10:49:53.759445] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.792 [2024-10-09 10:49:53.773009] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.792 [2024-10-09 10:49:53.773024] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.792 [2024-10-09 10:49:53.786298] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.792 [2024-10-09 10:49:53.786314] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.053 [2024-10-09 10:49:53.799594] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.053 [2024-10-09 10:49:53.799610] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.053 19056.50 IOPS, 148.88 MiB/s [2024-10-09T08:49:54.055Z] [2024-10-09 10:49:53.813043] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.053 [2024-10-09 10:49:53.813058] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.053 [2024-10-09 10:49:53.825538] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.053 [2024-10-09 10:49:53.825554] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.053 [2024-10-09 10:49:53.838903] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.053 [2024-10-09 10:49:53.838918] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.053 [2024-10-09 10:49:53.852315] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.053 [2024-10-09 10:49:53.852331] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.053 [2024-10-09 10:49:53.865200] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:09:34.053 [2024-10-09 10:49:53.865215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:34.053 [2024-10-09 10:49:53.877987] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:34.053 [2024-10-09 10:49:53.878002] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair repeats at ~13 ms intervals ...]
00:09:34.834 19104.33 IOPS, 149.25 MiB/s [2024-10-09T08:49:54.836Z]
[... the error pair continues to repeat ...]
00:09:35.879 19115.25 IOPS, 149.34 MiB/s [2024-10-09T08:49:55.881Z]
[... the error pair continues to repeat through 10:49:56.787 ...]
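The wall of paired errors above is the zcopy test deliberately racing namespace adds against an NSID that is still attached: every nvmf_subsystem_add_ns RPC that asks for NSID 1 while it is in use is rejected by subsystem.c and reported by nvmf_rpc.c. For reference, the conflict can be reproduced by hand with SPDK's scripts/rpc.py; a minimal sketch, assuming a running nvmf target that already exposes subsystem nqn.2016-06.io.spdk:cnode1 (the malloc0 bdev name mirrors the one used later in this log):

    # First add succeeds and claims NSID 1.
    scripts/rpc.py bdev_malloc_create -b malloc0 64 512
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # A second add with the same explicit NSID is refused:
    # "Requested NSID 1 already in use" / "Unable to add namespace".
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1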
00:09:36.966 19129.00 IOPS, 149.45 MiB/s [2024-10-09T08:49:56.968Z]
00:09:36.966 [2024-10-09 10:49:56.800573] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:36.966 [2024-10-09 10:49:56.800588] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:36.966
00:09:36.966 Latency(us)
00:09:36.966 Device Information          : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:09:36.966 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:09:36.966 Nvme1n1                     :       5.01   19133.05     149.48      0.00     0.00    6683.39    2641.26   16422.32
00:09:36.966 ===================================================================================================================
00:09:36.966 Total                       :               19133.05     149.48      0.00     0.00    6683.39    2641.26   16422.32
00:09:36.966 [2024-10-09 10:49:56.809662] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:36.966 [2024-10-09 10:49:56.809677] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same error pair repeats at ~12 ms intervals through 10:49:56.905 ...]
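The MiB/s figures in the summary follow directly from the IOPS column and the 8192-byte I/O size reported on the Job line; a quick sanity check with shell arithmetic (not part of the harness):

    # 19133.05 IOPS x 8192 B per I/O / 2^20 B per MiB = 149.48 MiB/s
    awk 'BEGIN { printf "%.2f MiB/s\n", 19133.05 * 8192 / 1048576 }'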
00:09:36.967 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1682985) - No such process
00:09:36.967 10:49:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1682985
00:09:36.967 10:49:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:36.967 10:49:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:36.967 10:49:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:36.967 10:49:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:36.967 10:49:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:09:36.967 10:49:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:36.967 10:49:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:36.967 delay0
00:09:36.967 10:49:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:36.967 10:49:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:09:36.967 10:49:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:36.967 10:49:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:36.967 10:49:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:36.967 10:49:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:09:37.258 [2024-10-09 10:49:57.193628] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:09:45.395 Initializing NVMe Controllers
00:09:45.395 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:09:45.395 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:09:45.395 Initialization complete. Launching workers.
00:09:45.395 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 288, failed: 15829
00:09:45.395 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 16017, failed to submit 100
00:09:45.395 success 15890, unsuccessful 127, failed 0
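Two details in the run above are worth spelling out. First, wrapping malloc0 in a delay bdev (bdev_delay_create with 1000000 us for the -r/-t/-w/-n average and p99 read/write latencies) keeps every I/O in flight for about a second, so the abort example (-c 0x1 core mask, -t 5 seconds, -q 64 queue depth, -w randrw -M 50 for a 50/50 mix, -r for the target's TCP transport ID) always has outstanding commands to cancel. Second, the counters reconcile exactly: 288 completed plus 15829 failed (i.e. aborted) I/Os is 16117, and 15890 successful plus 127 unsuccessful aborts is the 16017 submitted, which together with the 100 that failed to submit is again 16117, one abort attempted per I/O. A one-line check:

    echo $(( 288 + 15829 )) $(( 15890 + 127 + 100 ))   # both sums print 16117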
00:09:45.395 10:50:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:09:45.395 10:50:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:09:45.395 10:50:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@514 -- # nvmfcleanup
00:09:45.395 10:50:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:09:45.395 10:50:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:09:45.395 10:50:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
00:09:45.395 10:50:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
00:09:45.395 10:50:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:09:45.395 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:09:45.395 10:50:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:09:45.395 10:50:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
00:09:45.395 10:50:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
00:09:45.395 10:50:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@515 -- # '[' -n 1680707 ']'
00:09:45.395 10:50:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # killprocess 1680707
00:09:45.395 10:50:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 1680707 ']'
00:09:45.395 10:50:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 1680707
00:09:45.395 10:50:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname
00:09:45.395 10:50:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:09:45.395 10:50:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1680707
00:09:45.395 10:50:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:09:45.395 10:50:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:09:45.395 10:50:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1680707'
00:09:45.395 killing process with pid 1680707
00:09:45.395 10:50:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 1680707
00:09:45.395 10:50:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 1680707
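The killprocess helper traced above guards the kill: it requires a non-empty pid, checks that the process is alive (kill -0) and, on Linux, that its comm name is not sudo before killing and reaping it. Condensed into one line (a sketch from the traced commands, not the verbatim function body):

    kill -0 1680707 && [ "$(ps --no-headers -o comm= 1680707)" != sudo ] && kill 1680707 && wait 1680707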
10:50:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:45.395 10:50:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-restore 00:09:45.395 10:50:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:45.395 10:50:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:45.395 10:50:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:45.395 10:50:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:45.395 10:50:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:46.777 10:50:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:46.777 00:09:46.777 real 0m34.679s 00:09:46.777 user 0m45.997s 00:09:46.777 sys 0m11.210s 00:09:46.777 10:50:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:46.777 10:50:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:46.777 ************************************ 00:09:46.777 END TEST nvmf_zcopy 00:09:46.777 ************************************ 00:09:46.777 10:50:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:46.777 10:50:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:46.777 10:50:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:46.777 10:50:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:46.777 ************************************ 00:09:46.777 START TEST nvmf_nmic 00:09:46.777 ************************************ 00:09:46.777 10:50:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:46.777 * Looking for test storage... 
00:09:46.777 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:46.777 10:50:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:46.777 10:50:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:09:46.777 10:50:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:47.037 10:50:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:47.037 10:50:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:47.037 10:50:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:47.037 10:50:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:47.037 10:50:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:09:47.037 10:50:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:09:47.037 10:50:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:09:47.037 10:50:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:09:47.037 10:50:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:09:47.037 10:50:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:09:47.037 10:50:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:09:47.037 10:50:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:47.037 10:50:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:09:47.037 10:50:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:09:47.037 10:50:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:47.037 10:50:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:47.037 10:50:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:09:47.037 10:50:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:09:47.037 10:50:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:47.037 10:50:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:09:47.037 10:50:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:09:47.037 10:50:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:09:47.037 10:50:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:09:47.037 10:50:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:47.037 10:50:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:09:47.038 10:50:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:09:47.038 10:50:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:47.038 10:50:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:47.038 10:50:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:09:47.038 10:50:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:47.038 10:50:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:47.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.038 --rc genhtml_branch_coverage=1 00:09:47.038 --rc genhtml_function_coverage=1 00:09:47.038 --rc genhtml_legend=1 00:09:47.038 --rc geninfo_all_blocks=1 00:09:47.038 --rc geninfo_unexecuted_blocks=1 00:09:47.038 00:09:47.038 ' 00:09:47.038 10:50:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:47.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.038 --rc genhtml_branch_coverage=1 00:09:47.038 --rc genhtml_function_coverage=1 00:09:47.038 --rc genhtml_legend=1 00:09:47.038 --rc geninfo_all_blocks=1 00:09:47.038 --rc geninfo_unexecuted_blocks=1 00:09:47.038 00:09:47.038 ' 00:09:47.038 10:50:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:47.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.038 --rc genhtml_branch_coverage=1 00:09:47.038 --rc genhtml_function_coverage=1 00:09:47.038 --rc genhtml_legend=1 00:09:47.038 --rc geninfo_all_blocks=1 00:09:47.038 --rc geninfo_unexecuted_blocks=1 00:09:47.038 00:09:47.038 ' 00:09:47.038 10:50:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:47.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.038 --rc genhtml_branch_coverage=1 00:09:47.038 --rc genhtml_function_coverage=1 00:09:47.038 --rc genhtml_legend=1 00:09:47.038 --rc geninfo_all_blocks=1 00:09:47.038 --rc geninfo_unexecuted_blocks=1 00:09:47.038 00:09:47.038 ' 00:09:47.038 10:50:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:47.038 10:50:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:47.038 10:50:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
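The version probe traced above comes from scripts/common.sh: lcov's version string is split on dots into arrays (read -ra ver1/ver2 with IFS=.-:) and the fields are compared left to right until one side wins, which is how `lt 1.15 2` decides the installed lcov predates 2.x before picking coverage flags. A minimal standalone sketch of the same field-wise comparison — an illustration only, not the shipped SPDK helper:

    # Sketch: succeed (return 0) when dotted version $1 sorts before $2.
    lt_version() {
        local IFS=.
        local -a a=($1) b=($2)
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # first differing field decides
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                                        # equal versions are not "less than"
    }

    lt_version 1.15 2 && echo 'lcov predates 2.x'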
00:09:47.038 10:50:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:47.038 10:50:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:47.038 10:50:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:47.038 10:50:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:47.038 10:50:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:47.038 10:50:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:47.038 10:50:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:47.038 10:50:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:47.038 10:50:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:47.038 10:50:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:47.038 10:50:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:47.038 10:50:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:47.038 10:50:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:47.038 10:50:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:47.038 10:50:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:47.038 10:50:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:47.038 10:50:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:09:47.038 10:50:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:47.038 10:50:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:47.038 10:50:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:47.038 10:50:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.038 10:50:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.038 10:50:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.038 10:50:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:47.038 10:50:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.038 10:50:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:09:47.038 10:50:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:47.038 10:50:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:47.038 10:50:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:47.038 10:50:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:47.038 10:50:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:47.038 10:50:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:47.038 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:47.038 10:50:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:47.038 10:50:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:47.038 10:50:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:47.038 10:50:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:47.038 10:50:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:47.038 10:50:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:47.038 
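Sourcing nvmf/common.sh above pins the fixture constants for the run: listener ports 4420-4422, a host NQN freshly generated with `nvme gen-hostnqn`, the matching host ID, and a 64 MiB / 512 B malloc bdev for the namespace. The host identity is kept in an array (NVME_HOST) precisely so later connect calls can splice it in verbatim. Roughly how those pieces compose into the `nvme connect` invocation that appears further down — a sketch using the traced variable names, not the test script itself:

    NVMF_PORT=4420
    NVME_HOSTNQN=$(nvme gen-hostnqn)            # nqn.2014-08.org.nvmexpress:uuid:...
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}         # bare UUID portion (derivation assumed here)
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")

    nvme connect "${NVME_HOST[@]}" -t tcp \
        -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s "$NVMF_PORT"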
10:50:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:47.038 10:50:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:47.038 10:50:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:47.038 10:50:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:47.038 10:50:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:47.038 10:50:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:47.038 10:50:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:47.038 10:50:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:47.038 10:50:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:09:47.038 10:50:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:09:47.038 10:50:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:09:47.038 10:50:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:55.175 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:55.175 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:09:55.175 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:55.175 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:55.175 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:55.175 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:55.175 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:55.175 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:09:55.175 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:55.175 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:09:55.175 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:09:55.175 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:09:55.175 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:09:55.175 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:09:55.175 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:09:55.175 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:55.175 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:55.175 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:55.175 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:55.175 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:55.175 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:55.175 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:55.175 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:55.175 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:55.175 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:55.175 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:55.175 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:55.175 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:55.175 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:55.175 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:55.175 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:55.175 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:55.175 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:55.175 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:55.175 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:55.175 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:55.175 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:55.175 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:55.175 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:55.175 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:55.175 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:55.176 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:55.176 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:55.176 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:55.176 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:55.176 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:55.176 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:55.176 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:55.176 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:55.176 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:55.176 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:55.176 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:55.176 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:55.176 10:50:14 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:55.176 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:55.176 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:55.176 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:55.176 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:55.176 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:55.176 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:55.176 Found net devices under 0000:31:00.0: cvl_0_0 00:09:55.176 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:55.176 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:55.176 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:55.176 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:55.176 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:55.176 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:55.176 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:55.176 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:55.176 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:55.176 Found net devices under 0000:31:00.1: cvl_0_1 00:09:55.176 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:55.176 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:09:55.176 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # is_hw=yes 00:09:55.176 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:09:55.176 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:09:55.176 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:09:55.176 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:55.176 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:55.176 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:55.176 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:55.176 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:55.176 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:55.176 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:55.176 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:55.176 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:55.176 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:55.176 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:55.176 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:55.176 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:55.176 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:55.176 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:55.176 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:55.176 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:55.176 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:55.176 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:55.176 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:55.176 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:55.176 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:55.176 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:55.176 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:55.176 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.620 ms 00:09:55.176 00:09:55.176 --- 10.0.0.2 ping statistics --- 00:09:55.176 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:55.176 rtt min/avg/max/mdev = 0.620/0.620/0.620/0.000 ms 00:09:55.176 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:55.176 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:55.176 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:09:55.176 00:09:55.176 --- 10.0.0.1 ping statistics --- 00:09:55.176 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:55.176 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:09:55.176 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:55.176 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # return 0 00:09:55.176 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:55.176 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:55.176 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:55.176 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:55.176 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:55.176 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:55.176 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:55.176 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:55.176 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:55.176 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:55.176 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:55.176 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # nvmfpid=1689997 00:09:55.176 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # waitforlisten 1689997 00:09:55.176 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:55.176 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 1689997 ']' 00:09:55.176 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:55.176 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:55.176 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:55.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:55.176 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:55.176 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:55.176 [2024-10-09 10:50:14.440495] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:09:55.176 [2024-10-09 10:50:14.440562] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:55.176 [2024-10-09 10:50:14.581454] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
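nvmftestinit above builds the two-port loopback topology the rest of the test rides on: physical port cvl_0_0 is moved into a private network namespace (cvl_0_0_ns_spdk) and addressed as the target at 10.0.0.2, its sibling cvl_0_1 stays in the default namespace as the initiator at 10.0.0.1, an iptables rule admits the NVMe/TCP listener port, and one ping in each direction proves reachability before nvmf_tgt is launched inside the namespace. The same sequence condensed, with device and address names exactly as traced and error handling omitted:

    ip netns add cvl_0_0_ns_spdk                        # private namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port moves into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF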
00:09:55.176 [2024-10-09 10:50:14.613776] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:55.176 [2024-10-09 10:50:14.637951] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:55.176 [2024-10-09 10:50:14.637991] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:55.176 [2024-10-09 10:50:14.637999] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:55.176 [2024-10-09 10:50:14.638006] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:55.176 [2024-10-09 10:50:14.638012] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:55.176 [2024-10-09 10:50:14.639965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:55.176 [2024-10-09 10:50:14.640084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:55.176 [2024-10-09 10:50:14.640242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:55.176 [2024-10-09 10:50:14.640243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:55.438 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:55.438 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:09:55.438 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:55.438 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:55.438 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:55.438 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:55.438 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:55.438 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.438 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:55.438 [2024-10-09 10:50:15.300570] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:55.438 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.438 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:55.438 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.438 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:55.438 Malloc0 00:09:55.438 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.438 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:55.438 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.438 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:55.438 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.438 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:55.438 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.438 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:55.438 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.438 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:55.438 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.438 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:55.438 [2024-10-09 10:50:15.367647] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:55.438 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.438 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:55.438 test case1: single bdev can't be used in multiple subsystems 00:09:55.438 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:55.438 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.438 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:55.438 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.438 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:55.438 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.438 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:55.438 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.438 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:55.438 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:55.438 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.438 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:55.438 [2024-10-09 10:50:15.403473] bdev.c:8202:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:55.438 [2024-10-09 10:50:15.403494] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:55.438 [2024-10-09 10:50:15.403503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.438 request: 00:09:55.438 { 00:09:55.438 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:55.438 "namespace": { 00:09:55.438 "bdev_name": "Malloc0", 00:09:55.438 "no_auto_visible": false 00:09:55.438 }, 00:09:55.438 "method": "nvmf_subsystem_add_ns", 00:09:55.438 "req_id": 1 00:09:55.438 } 00:09:55.438 Got JSON-RPC error response 00:09:55.438 response: 00:09:55.438 { 00:09:55.438 "code": -32602, 00:09:55.438 "message": "Invalid parameters" 00:09:55.438 } 
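test case1 above is a deliberate failure: adding Malloc0 to cnode1 claims the bdev exclusive_write, so the attempt to attach the same bdev to cnode2 is refused (error=-1 at bdev open, surfaced as JSON-RPC -32602 "Invalid parameters"), which is exactly the result the nmic test asserts. The rpc_cmd lines wrap SPDK's scripts/rpc.py against the running target; approximately the equivalent standalone sequence, with paths relative to the SPDK tree and arguments as traced:

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0   # rejected: bdev already claimed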
00:09:55.438 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:55.438 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:55.438 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:55.438 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:55.438 Adding namespace failed - expected result. 00:09:55.438 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:55.438 test case2: host connect to nvmf target in multiple paths 00:09:55.438 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:09:55.438 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.438 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:55.438 [2024-10-09 10:50:15.415605] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:09:55.438 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.438 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:57.349 10:50:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:09:58.758 10:50:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:58.758 10:50:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:09:58.758 10:50:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:58.758 10:50:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:58.758 10:50:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:10:00.670 10:50:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:00.670 10:50:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:00.670 10:50:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:00.670 10:50:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:00.670 10:50:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:00.670 10:50:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:10:00.670 10:50:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:00.670 [global] 00:10:00.670 thread=1 00:10:00.670 invalidate=1 00:10:00.670 rw=write 00:10:00.670 time_based=1 00:10:00.670 runtime=1 00:10:00.670 
ioengine=libaio 00:10:00.670 direct=1 00:10:00.670 bs=4096 00:10:00.670 iodepth=1 00:10:00.670 norandommap=0 00:10:00.670 numjobs=1 00:10:00.670 00:10:00.670 verify_dump=1 00:10:00.670 verify_backlog=512 00:10:00.670 verify_state_save=0 00:10:00.670 do_verify=1 00:10:00.670 verify=crc32c-intel 00:10:00.670 [job0] 00:10:00.670 filename=/dev/nvme0n1 00:10:00.670 Could not set queue depth (nvme0n1) 00:10:01.237 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:01.237 fio-3.35 00:10:01.237 Starting 1 thread 00:10:02.177 00:10:02.177 job0: (groupid=0, jobs=1): err= 0: pid=1691373: Wed Oct 9 10:50:22 2024 00:10:02.177 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:10:02.177 slat (nsec): min=7954, max=57142, avg=25422.85, stdev=2958.47 00:10:02.177 clat (usec): min=599, max=1262, avg=1072.56, stdev=93.46 00:10:02.177 lat (usec): min=625, max=1287, avg=1097.98, stdev=93.40 00:10:02.177 clat percentiles (usec): 00:10:02.177 | 1.00th=[ 791], 5.00th=[ 898], 10.00th=[ 955], 20.00th=[ 1012], 00:10:02.177 | 30.00th=[ 1045], 40.00th=[ 1057], 50.00th=[ 1090], 60.00th=[ 1106], 00:10:02.177 | 70.00th=[ 1123], 80.00th=[ 1139], 90.00th=[ 1172], 95.00th=[ 1205], 00:10:02.177 | 99.00th=[ 1237], 99.50th=[ 1254], 99.90th=[ 1270], 99.95th=[ 1270], 00:10:02.177 | 99.99th=[ 1270] 00:10:02.177 write: IOPS=713, BW=2853KiB/s (2922kB/s)(2856KiB/1001msec); 0 zone resets 00:10:02.177 slat (nsec): min=9487, max=65844, avg=27988.03, stdev=10275.71 00:10:02.177 clat (usec): min=123, max=876, avg=572.11, stdev=101.84 00:10:02.177 lat (usec): min=134, max=909, avg=600.10, stdev=106.28 00:10:02.177 clat percentiles (usec): 00:10:02.177 | 1.00th=[ 302], 5.00th=[ 396], 10.00th=[ 437], 20.00th=[ 490], 00:10:02.177 | 30.00th=[ 529], 40.00th=[ 562], 50.00th=[ 578], 60.00th=[ 603], 00:10:02.177 | 70.00th=[ 627], 80.00th=[ 668], 90.00th=[ 693], 95.00th=[ 725], 00:10:02.177 | 99.00th=[ 766], 99.50th=[ 775], 99.90th=[ 881], 99.95th=[ 881], 00:10:02.177 | 99.99th=[ 881] 00:10:02.177 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:10:02.177 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:02.177 lat (usec) : 250=0.24%, 500=13.46%, 750=43.72%, 1000=8.56% 00:10:02.177 lat (msec) : 2=34.01% 00:10:02.177 cpu : usr=2.20%, sys=3.00%, ctx=1226, majf=0, minf=1 00:10:02.177 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:02.177 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:02.177 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:02.177 issued rwts: total=512,714,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:02.177 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:02.177 00:10:02.177 Run status group 0 (all jobs): 00:10:02.177 READ: bw=2046KiB/s (2095kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=2048KiB (2097kB), run=1001-1001msec 00:10:02.177 WRITE: bw=2853KiB/s (2922kB/s), 2853KiB/s-2853KiB/s (2922kB/s-2922kB/s), io=2856KiB (2925kB), run=1001-1001msec 00:10:02.177 00:10:02.177 Disk stats (read/write): 00:10:02.177 nvme0n1: ios=562/553, merge=0/0, ticks=802/292, in_queue=1094, util=97.90% 00:10:02.177 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:02.437 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:02.437 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect 
SPDKISFASTANDAWESOME 00:10:02.437 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:10:02.437 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:02.437 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:02.437 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:02.437 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:02.437 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:10:02.437 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:02.437 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:02.437 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@514 -- # nvmfcleanup 00:10:02.437 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:10:02.437 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:02.437 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:10:02.437 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:02.437 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:02.437 rmmod nvme_tcp 00:10:02.437 rmmod nvme_fabrics 00:10:02.437 rmmod nvme_keyring 00:10:02.437 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:02.437 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:10:02.437 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:10:02.437 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@515 -- # '[' -n 1689997 ']' 00:10:02.437 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # killprocess 1689997 00:10:02.437 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 1689997 ']' 00:10:02.437 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 1689997 00:10:02.437 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:10:02.437 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:02.437 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1689997 00:10:02.437 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:02.437 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:02.437 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1689997' 00:10:02.437 killing process with pid 1689997 00:10:02.437 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 1689997 00:10:02.437 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 1689997 00:10:02.697 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:02.697 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p 
]] 00:10:02.697 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:10:02.697 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:10:02.697 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-save 00:10:02.697 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:10:02.697 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-restore 00:10:02.697 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:02.697 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:02.697 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:02.697 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:02.697 10:50:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:04.609 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:04.609 00:10:04.609 real 0m17.970s 00:10:04.610 user 0m48.736s 00:10:04.610 sys 0m6.490s 00:10:04.610 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:04.610 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:04.610 ************************************ 00:10:04.610 END TEST nvmf_nmic 00:10:04.610 ************************************ 00:10:04.870 10:50:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:04.870 10:50:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:04.870 10:50:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:04.870 10:50:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:04.870 ************************************ 00:10:04.870 START TEST nvmf_fio_target 00:10:04.870 ************************************ 00:10:04.870 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:04.870 * Looking for test storage... 
00:10:04.870 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:04.870 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:04.870 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:10:04.870 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:04.870 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:04.870 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:04.870 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:04.870 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:04.870 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:04.870 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:04.870 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:04.870 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:04.870 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:04.870 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:04.870 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:04.870 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:04.870 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:10:04.870 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:10:04.870 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:04.870 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:04.870 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:10:04.870 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:10:04.870 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:04.870 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:10:04.870 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:04.870 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:10:04.870 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:10:04.870 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:04.870 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:10:05.131 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:05.131 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:05.131 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:05.131 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:10:05.131 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:05.131 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:05.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.131 --rc genhtml_branch_coverage=1 00:10:05.131 --rc genhtml_function_coverage=1 00:10:05.131 --rc genhtml_legend=1 00:10:05.131 --rc geninfo_all_blocks=1 00:10:05.131 --rc geninfo_unexecuted_blocks=1 00:10:05.131 00:10:05.131 ' 00:10:05.131 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:05.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.131 --rc genhtml_branch_coverage=1 00:10:05.131 --rc genhtml_function_coverage=1 00:10:05.131 --rc genhtml_legend=1 00:10:05.131 --rc geninfo_all_blocks=1 00:10:05.131 --rc geninfo_unexecuted_blocks=1 00:10:05.131 00:10:05.131 ' 00:10:05.131 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:05.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.131 --rc genhtml_branch_coverage=1 00:10:05.131 --rc genhtml_function_coverage=1 00:10:05.131 --rc genhtml_legend=1 00:10:05.131 --rc geninfo_all_blocks=1 00:10:05.131 --rc geninfo_unexecuted_blocks=1 00:10:05.131 00:10:05.131 ' 00:10:05.131 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:05.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.131 --rc genhtml_branch_coverage=1 00:10:05.131 --rc genhtml_function_coverage=1 00:10:05.131 --rc genhtml_legend=1 00:10:05.131 --rc geninfo_all_blocks=1 00:10:05.131 --rc geninfo_unexecuted_blocks=1 00:10:05.131 00:10:05.131 ' 00:10:05.131 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:05.131 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:10:05.131 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:05.131 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:05.131 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:05.131 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:05.131 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:05.131 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:05.131 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:05.131 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:05.131 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:05.131 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:05.131 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:05.131 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:05.131 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:05.131 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:05.131 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:05.131 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:05.131 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:05.131 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:05.131 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:05.131 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:05.131 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:05.131 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.131 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.131 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.132 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:05.132 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.132 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:10:05.132 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:05.132 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:05.132 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:05.132 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:05.132 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:05.132 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:05.132 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:05.132 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:05.132 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:05.132 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:05.132 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:05.132 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:05.132 10:50:24 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:05.132 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:05.132 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:10:05.132 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:05.132 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:05.132 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:05.132 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:05.132 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:05.132 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:05.132 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:05.132 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:10:05.132 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:10:05.132 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:10:05.132 10:50:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:13.269 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:13.269 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:10:13.269 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:13.269 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:13.269 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:13.269 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:13.269 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:13.269 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:10:13.269 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:13.269 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:10:13.269 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:10:13.269 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:10:13.269 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:10:13.269 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:10:13.269 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:10:13.269 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:13.269 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:13.269 10:50:32 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:13.269 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:13.269 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:13.269 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:13.269 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:13.269 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:13.269 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:13.269 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:13.269 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:13.269 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:13.269 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:13.269 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:13.269 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:13.269 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:13.269 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:13.269 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:13.269 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:13.269 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:13.269 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:13.269 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:13.269 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:13.269 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:13.269 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:13.269 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:13.269 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:13.269 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:13.269 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:13.269 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:13.269 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:13.269 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:13.269 10:50:32 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:13.269 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:13.269 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:13.269 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:13.269 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:13.269 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:13.269 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:13.269 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:13.269 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:13.269 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:13.269 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:13.269 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:13.269 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:13.269 Found net devices under 0000:31:00.0: cvl_0_0 00:10:13.269 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:13.269 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:13.269 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:13.269 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:13.269 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:13.269 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:13.269 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:13.269 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:13.269 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:13.269 Found net devices under 0000:31:00.1: cvl_0_1 00:10:13.269 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:13.269 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:13.269 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # is_hw=yes 00:10:13.269 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:13.269 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:10:13.269 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:10:13.269 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:13.269 10:50:32 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:13.269 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:13.269 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:13.269 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:13.269 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:13.269 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:13.269 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:13.269 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:13.269 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:13.269 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:13.269 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:13.269 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:13.269 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:13.269 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:13.269 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:13.269 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:13.269 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:13.269 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:13.269 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:13.269 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:13.269 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:13.270 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:13.270 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:13.270 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.692 ms 00:10:13.270 00:10:13.270 --- 10.0.0.2 ping statistics --- 00:10:13.270 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:13.270 rtt min/avg/max/mdev = 0.692/0.692/0.692/0.000 ms 00:10:13.270 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:13.270 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:13.270 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:10:13.270 00:10:13.270 --- 10.0.0.1 ping statistics --- 00:10:13.270 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:13.270 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:10:13.270 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:13.270 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # return 0 00:10:13.270 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:13.270 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:13.270 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:10:13.270 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:10:13.270 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:13.270 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:10:13.270 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:10:13.270 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:13.270 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:10:13.270 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:13.270 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:13.270 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # nvmfpid=1696030 00:10:13.270 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # waitforlisten 1696030 00:10:13.270 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:13.270 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 1696030 ']' 00:10:13.270 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:13.270 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:13.270 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:13.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:13.270 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:13.270 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:13.270 [2024-10-09 10:50:32.606065] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 
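Everything from `gather_supported_nvmf_pci_devs` through the two pings above is nvmf/common.sh building its test network: the two ice-driven E810 ports (0000:31:00.0/.1) are mapped to their net devices by globbing sysfs, one port is moved into a private network namespace to play the target, the other stays in the root namespace as the initiator, and reachability is proven in both directions before nvmf_tgt is launched inside the namespace. A distilled, run-as-root sketch of those steps (device names and addresses are copied from the log; this is an illustration of what `nvmf_tcp_init` did, not the script itself):

    # Map a PCI address to its kernel net device via sysfs.
    ls /sys/bus/pci/devices/0000:31:00.0/net/          # -> cvl_0_0

    # Give the target-side port its own namespace, address both ends,
    # open the NVMe/TCP port, and verify reachability both ways.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

With the namespace in place, every target-side command in the rest of the log is wrapped in `ip netns exec cvl_0_0_ns_spdk`, which is why nvmf_tgt is started that way here.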
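Also worth a note: earlier in this trace, nvmf/common.sh line 33 logged "[: : integer expression expected". The xtrace shows why — the script ran `'[' '' -eq 1 ']'`, and `test`'s `-eq` demands integer operands, so an unset variable makes the comparison itself error out (harmlessly here, since the branch is simply not taken). The usual defensive idiom is to default the expansion; `SOME_FLAG` below is a stand-in for whatever variable line 33 actually tests:

    # Hypothetical guard: ${SOME_FLAG:-0} substitutes 0 when the variable
    # is unset or empty, so -eq always sees an integer.
    if [ "${SOME_FLAG:-0}" -eq 1 ]; then
        echo "flag enabled"
    fi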
00:10:13.270 [2024-10-09 10:50:32.606132] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:13.270 [2024-10-09 10:50:32.746697] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:10:13.270 [2024-10-09 10:50:32.779051] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:13.270 [2024-10-09 10:50:32.802194] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:13.270 [2024-10-09 10:50:32.802232] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:13.270 [2024-10-09 10:50:32.802241] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:13.270 [2024-10-09 10:50:32.802248] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:13.270 [2024-10-09 10:50:32.802254] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:13.270 [2024-10-09 10:50:32.804296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:13.270 [2024-10-09 10:50:32.804434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:13.270 [2024-10-09 10:50:32.804594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:13.270 [2024-10-09 10:50:32.804704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:13.531 10:50:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:13.531 10:50:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:10:13.531 10:50:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:10:13.531 10:50:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:13.531 10:50:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:13.531 10:50:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:13.531 10:50:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:13.792 [2024-10-09 10:50:33.613679] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:13.792 10:50:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:14.052 10:50:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:14.052 10:50:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:14.052 10:50:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:14.052 10:50:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:14.313 10:50:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:14.313 10:50:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:14.573 10:50:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:14.573 10:50:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:14.834 10:50:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:14.834 10:50:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:14.834 10:50:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:15.095 10:50:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:15.095 10:50:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:15.355 10:50:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:15.355 10:50:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:15.615 10:50:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:15.615 10:50:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:15.615 10:50:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:15.875 10:50:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:15.876 10:50:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:16.136 10:50:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:16.136 [2024-10-09 10:50:36.077947] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:16.136 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:16.397 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:16.657 10:50:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 
--hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:18.039 10:50:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:18.039 10:50:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:10:18.039 10:50:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:18.039 10:50:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:10:18.039 10:50:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:10:18.039 10:50:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:10:20.579 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:20.579 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:20.579 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:20.579 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:10:20.579 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:20.579 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:10:20.579 10:50:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:20.579 [global] 00:10:20.579 thread=1 00:10:20.579 invalidate=1 00:10:20.579 rw=write 00:10:20.579 time_based=1 00:10:20.579 runtime=1 00:10:20.579 ioengine=libaio 00:10:20.579 direct=1 00:10:20.579 bs=4096 00:10:20.579 iodepth=1 00:10:20.579 norandommap=0 00:10:20.579 numjobs=1 00:10:20.579 00:10:20.579 verify_dump=1 00:10:20.579 verify_backlog=512 00:10:20.579 verify_state_save=0 00:10:20.579 do_verify=1 00:10:20.579 verify=crc32c-intel 00:10:20.579 [job0] 00:10:20.579 filename=/dev/nvme0n1 00:10:20.579 [job1] 00:10:20.579 filename=/dev/nvme0n2 00:10:20.579 [job2] 00:10:20.579 filename=/dev/nvme0n3 00:10:20.579 [job3] 00:10:20.579 filename=/dev/nvme0n4 00:10:20.579 Could not set queue depth (nvme0n1) 00:10:20.579 Could not set queue depth (nvme0n2) 00:10:20.579 Could not set queue depth (nvme0n3) 00:10:20.579 Could not set queue depth (nvme0n4) 00:10:20.579 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:20.579 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:20.579 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:20.579 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:20.579 fio-3.35 00:10:20.579 Starting 4 threads 00:10:21.962 00:10:21.962 job0: (groupid=0, jobs=1): err= 0: pid=1697929: Wed Oct 9 10:50:41 2024 00:10:21.962 read: IOPS=298, BW=1192KiB/s (1221kB/s)(1196KiB/1003msec) 00:10:21.962 slat (nsec): min=7087, max=45887, avg=23745.05, stdev=7894.14 00:10:21.962 clat (usec): min=518, max=41997, avg=2547.66, stdev=8316.50 00:10:21.962 lat (usec): min=525, max=42023, avg=2571.41, 
stdev=8317.26 00:10:21.962 clat percentiles (usec): 00:10:21.962 | 1.00th=[ 562], 5.00th=[ 660], 10.00th=[ 676], 20.00th=[ 725], 00:10:21.962 | 30.00th=[ 758], 40.00th=[ 783], 50.00th=[ 799], 60.00th=[ 807], 00:10:21.962 | 70.00th=[ 824], 80.00th=[ 840], 90.00th=[ 857], 95.00th=[ 906], 00:10:21.962 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:21.962 | 99.99th=[42206] 00:10:21.962 write: IOPS=510, BW=2042KiB/s (2091kB/s)(2048KiB/1003msec); 0 zone resets 00:10:21.962 slat (nsec): min=9648, max=76263, avg=28468.63, stdev=11615.43 00:10:21.962 clat (usec): min=100, max=625, avg=417.00, stdev=86.36 00:10:21.962 lat (usec): min=111, max=659, avg=445.47, stdev=92.76 00:10:21.962 clat percentiles (usec): 00:10:21.962 | 1.00th=[ 235], 5.00th=[ 273], 10.00th=[ 293], 20.00th=[ 338], 00:10:21.962 | 30.00th=[ 367], 40.00th=[ 396], 50.00th=[ 429], 60.00th=[ 457], 00:10:21.962 | 70.00th=[ 474], 80.00th=[ 486], 90.00th=[ 519], 95.00th=[ 545], 00:10:21.962 | 99.00th=[ 594], 99.50th=[ 594], 99.90th=[ 627], 99.95th=[ 627], 00:10:21.962 | 99.99th=[ 627] 00:10:21.962 bw ( KiB/s): min= 4096, max= 4096, per=41.04%, avg=4096.00, stdev= 0.00, samples=1 00:10:21.962 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:21.962 lat (usec) : 250=1.23%, 500=52.28%, 750=18.99%, 1000=25.89% 00:10:21.962 lat (msec) : 50=1.60% 00:10:21.962 cpu : usr=0.90%, sys=2.40%, ctx=812, majf=0, minf=1 00:10:21.962 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:21.962 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:21.962 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:21.962 issued rwts: total=299,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:21.962 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:21.962 job1: (groupid=0, jobs=1): err= 0: pid=1697947: Wed Oct 9 10:50:41 2024 00:10:21.962 read: IOPS=17, BW=70.2KiB/s (71.9kB/s)(72.0KiB/1026msec) 00:10:21.962 slat (nsec): min=26557, max=27336, avg=27077.50, stdev=185.84 00:10:21.962 clat (usec): min=40828, max=42659, avg=41665.30, stdev=565.16 00:10:21.962 lat (usec): min=40855, max=42686, avg=41692.38, stdev=565.15 00:10:21.962 clat percentiles (usec): 00:10:21.962 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:10:21.962 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:10:21.962 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42730], 00:10:21.962 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:10:21.962 | 99.99th=[42730] 00:10:21.962 write: IOPS=499, BW=1996KiB/s (2044kB/s)(2048KiB/1026msec); 0 zone resets 00:10:21.962 slat (usec): min=9, max=41727, avg=167.19, stdev=2259.18 00:10:21.962 clat (usec): min=117, max=1033, avg=364.94, stdev=100.92 00:10:21.962 lat (usec): min=128, max=42128, avg=532.13, stdev=2266.67 00:10:21.962 clat percentiles (usec): 00:10:21.962 | 1.00th=[ 159], 5.00th=[ 239], 10.00th=[ 262], 20.00th=[ 285], 00:10:21.962 | 30.00th=[ 314], 40.00th=[ 334], 50.00th=[ 351], 60.00th=[ 371], 00:10:21.962 | 70.00th=[ 392], 80.00th=[ 433], 90.00th=[ 490], 95.00th=[ 553], 00:10:21.962 | 99.00th=[ 635], 99.50th=[ 873], 99.90th=[ 1037], 99.95th=[ 1037], 00:10:21.962 | 99.99th=[ 1037] 00:10:21.962 bw ( KiB/s): min= 4096, max= 4096, per=41.04%, avg=4096.00, stdev= 0.00, samples=1 00:10:21.962 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:21.962 lat (usec) : 250=6.98%, 500=80.75%, 750=8.30%, 1000=0.38% 00:10:21.962 lat 
(msec) : 2=0.19%, 50=3.40% 00:10:21.962 cpu : usr=0.59%, sys=1.37%, ctx=533, majf=0, minf=1 00:10:21.962 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:21.962 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:21.962 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:21.962 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:21.962 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:21.962 job2: (groupid=0, jobs=1): err= 0: pid=1697951: Wed Oct 9 10:50:41 2024 00:10:21.962 read: IOPS=20, BW=81.9KiB/s (83.8kB/s)(84.0KiB/1026msec) 00:10:21.962 slat (nsec): min=25971, max=26983, avg=26413.05, stdev=281.46 00:10:21.962 clat (usec): min=727, max=42064, avg=37654.09, stdev=12264.85 00:10:21.962 lat (usec): min=753, max=42091, avg=37680.51, stdev=12264.74 00:10:21.962 clat percentiles (usec): 00:10:21.962 | 1.00th=[ 725], 5.00th=[ 857], 10.00th=[40633], 20.00th=[41157], 00:10:21.962 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[41681], 00:10:21.962 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:21.962 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:21.962 | 99.99th=[42206] 00:10:21.962 write: IOPS=499, BW=1996KiB/s (2044kB/s)(2048KiB/1026msec); 0 zone resets 00:10:21.962 slat (nsec): min=9306, max=52775, avg=30945.18, stdev=8554.28 00:10:21.962 clat (usec): min=128, max=798, avg=420.49, stdev=126.06 00:10:21.962 lat (usec): min=138, max=843, avg=451.44, stdev=127.49 00:10:21.962 clat percentiles (usec): 00:10:21.962 | 1.00th=[ 161], 5.00th=[ 239], 10.00th=[ 269], 20.00th=[ 306], 00:10:21.962 | 30.00th=[ 334], 40.00th=[ 379], 50.00th=[ 420], 60.00th=[ 453], 00:10:21.962 | 70.00th=[ 482], 80.00th=[ 523], 90.00th=[ 594], 95.00th=[ 635], 00:10:21.962 | 99.00th=[ 750], 99.50th=[ 766], 99.90th=[ 799], 99.95th=[ 799], 00:10:21.962 | 99.99th=[ 799] 00:10:21.962 bw ( KiB/s): min= 4096, max= 4096, per=41.04%, avg=4096.00, stdev= 0.00, samples=1 00:10:21.962 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:21.962 lat (usec) : 250=5.82%, 500=66.42%, 750=23.26%, 1000=0.94% 00:10:21.962 lat (msec) : 50=3.56% 00:10:21.963 cpu : usr=0.98%, sys=2.05%, ctx=533, majf=0, minf=1 00:10:21.963 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:21.963 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:21.963 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:21.963 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:21.963 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:21.963 job3: (groupid=0, jobs=1): err= 0: pid=1697952: Wed Oct 9 10:50:41 2024 00:10:21.963 read: IOPS=599, BW=2398KiB/s (2455kB/s)(2400KiB/1001msec) 00:10:21.963 slat (nsec): min=6992, max=61741, avg=24964.39, stdev=7614.50 00:10:21.963 clat (usec): min=265, max=1007, avg=709.69, stdev=120.26 00:10:21.963 lat (usec): min=273, max=1034, avg=734.65, stdev=122.78 00:10:21.963 clat percentiles (usec): 00:10:21.963 | 1.00th=[ 429], 5.00th=[ 515], 10.00th=[ 553], 20.00th=[ 603], 00:10:21.963 | 30.00th=[ 652], 40.00th=[ 676], 50.00th=[ 709], 60.00th=[ 742], 00:10:21.963 | 70.00th=[ 791], 80.00th=[ 816], 90.00th=[ 857], 95.00th=[ 906], 00:10:21.963 | 99.00th=[ 963], 99.50th=[ 971], 99.90th=[ 1012], 99.95th=[ 1012], 00:10:21.963 | 99.99th=[ 1012] 00:10:21.963 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 
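fio's bandwidth figures are redundant with IOPS once the block size is fixed, which makes them a handy consistency check on these tables. Taking job0 of this run: 299 completed 4 KiB reads over 1003 ms is ~298 IOPS, and 298 IOPS x 4096 B is 1,220,608 B/s — the reported 1192 KiB/s in binary units, or ~1221 kB/s in decimal once fio rounds to the nearest unit. A quick shell check of that arithmetic:

    echo $((298 * 4096 / 1024))   # KiB/s -> 1192, matching BW=1192KiB/s
    echo $((298 * 4096))          # B/s   -> 1220608, printed by fio as ~1221kB/s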
00:10:21.963 slat (nsec): min=9767, max=68295, avg=32574.39, stdev=9833.90 00:10:21.963 clat (usec): min=117, max=890, avg=501.44, stdev=127.01 00:10:21.963 lat (usec): min=127, max=926, avg=534.01, stdev=130.76 00:10:21.963 clat percentiles (usec): 00:10:21.963 | 1.00th=[ 237], 5.00th=[ 281], 10.00th=[ 338], 20.00th=[ 396], 00:10:21.963 | 30.00th=[ 437], 40.00th=[ 478], 50.00th=[ 498], 60.00th=[ 529], 00:10:21.963 | 70.00th=[ 562], 80.00th=[ 603], 90.00th=[ 668], 95.00th=[ 725], 00:10:21.963 | 99.00th=[ 816], 99.50th=[ 840], 99.90th=[ 873], 99.95th=[ 889], 00:10:21.963 | 99.99th=[ 889] 00:10:21.963 bw ( KiB/s): min= 4096, max= 4096, per=41.04%, avg=4096.00, stdev= 0.00, samples=1 00:10:21.963 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:21.963 lat (usec) : 250=0.92%, 500=32.27%, 750=50.43%, 1000=16.32% 00:10:21.963 lat (msec) : 2=0.06% 00:10:21.963 cpu : usr=2.80%, sys=4.50%, ctx=1625, majf=0, minf=1 00:10:21.963 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:21.963 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:21.963 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:21.963 issued rwts: total=600,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:21.963 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:21.963 00:10:21.963 Run status group 0 (all jobs): 00:10:21.963 READ: bw=3657KiB/s (3745kB/s), 70.2KiB/s-2398KiB/s (71.9kB/s-2455kB/s), io=3752KiB (3842kB), run=1001-1026msec 00:10:21.963 WRITE: bw=9981KiB/s (10.2MB/s), 1996KiB/s-4092KiB/s (2044kB/s-4190kB/s), io=10.0MiB (10.5MB), run=1001-1026msec 00:10:21.963 00:10:21.963 Disk stats (read/write): 00:10:21.963 nvme0n1: ios=319/512, merge=0/0, ticks=1521/214, in_queue=1735, util=96.39% 00:10:21.963 nvme0n2: ios=63/512, merge=0/0, ticks=1024/168, in_queue=1192, util=96.53% 00:10:21.963 nvme0n3: ios=16/512, merge=0/0, ticks=582/177, in_queue=759, util=88.36% 00:10:21.963 nvme0n4: ios=569/828, merge=0/0, ticks=765/379, in_queue=1144, util=96.47% 00:10:21.963 10:50:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:21.963 [global] 00:10:21.963 thread=1 00:10:21.963 invalidate=1 00:10:21.963 rw=randwrite 00:10:21.963 time_based=1 00:10:21.963 runtime=1 00:10:21.963 ioengine=libaio 00:10:21.963 direct=1 00:10:21.963 bs=4096 00:10:21.963 iodepth=1 00:10:21.963 norandommap=0 00:10:21.963 numjobs=1 00:10:21.963 00:10:21.963 verify_dump=1 00:10:21.963 verify_backlog=512 00:10:21.963 verify_state_save=0 00:10:21.963 do_verify=1 00:10:21.963 verify=crc32c-intel 00:10:21.963 [job0] 00:10:21.963 filename=/dev/nvme0n1 00:10:21.963 [job1] 00:10:21.963 filename=/dev/nvme0n2 00:10:21.963 [job2] 00:10:21.963 filename=/dev/nvme0n3 00:10:21.963 [job3] 00:10:21.963 filename=/dev/nvme0n4 00:10:21.963 Could not set queue depth (nvme0n1) 00:10:21.963 Could not set queue depth (nvme0n2) 00:10:21.963 Could not set queue depth (nvme0n3) 00:10:21.963 Could not set queue depth (nvme0n4) 00:10:22.226 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:22.226 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:22.226 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:22.226 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:22.226 fio-3.35 00:10:22.226 Starting 4 threads 00:10:23.611 00:10:23.611 job0: (groupid=0, jobs=1): err= 0: pid=1698454: Wed Oct 9 10:50:43 2024 00:10:23.611 read: IOPS=20, BW=82.0KiB/s (84.0kB/s)(84.0KiB/1024msec) 00:10:23.611 slat (nsec): min=27197, max=27973, avg=27546.10, stdev=241.31 00:10:23.611 clat (usec): min=572, max=42005, avg=37701.49, stdev=12301.06 00:10:23.611 lat (usec): min=600, max=42032, avg=37729.04, stdev=12300.93 00:10:23.611 clat percentiles (usec): 00:10:23.611 | 1.00th=[ 570], 5.00th=[ 889], 10.00th=[40633], 20.00th=[41157], 00:10:23.611 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[42206], 00:10:23.611 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:23.611 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:23.611 | 99.99th=[42206] 00:10:23.611 write: IOPS=500, BW=2000KiB/s (2048kB/s)(2048KiB/1024msec); 0 zone resets 00:10:23.611 slat (nsec): min=9255, max=56298, avg=29161.86, stdev=11275.30 00:10:23.611 clat (usec): min=129, max=801, avg=413.29, stdev=130.15 00:10:23.611 lat (usec): min=164, max=836, avg=442.45, stdev=134.36 00:10:23.611 clat percentiles (usec): 00:10:23.611 | 1.00th=[ 212], 5.00th=[ 258], 10.00th=[ 273], 20.00th=[ 293], 00:10:23.611 | 30.00th=[ 318], 40.00th=[ 343], 50.00th=[ 383], 60.00th=[ 433], 00:10:23.611 | 70.00th=[ 482], 80.00th=[ 537], 90.00th=[ 603], 95.00th=[ 644], 00:10:23.611 | 99.00th=[ 734], 99.50th=[ 783], 99.90th=[ 799], 99.95th=[ 799], 00:10:23.611 | 99.99th=[ 799] 00:10:23.611 bw ( KiB/s): min= 4096, max= 4096, per=34.13%, avg=4096.00, stdev= 0.00, samples=1 00:10:23.611 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:23.611 lat (usec) : 250=4.32%, 500=66.79%, 750=24.20%, 1000=1.13% 00:10:23.611 lat (msec) : 50=3.56% 00:10:23.611 cpu : usr=0.88%, sys=1.96%, ctx=536, majf=0, minf=1 00:10:23.611 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:23.611 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:23.611 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:23.611 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:23.611 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:23.611 job1: (groupid=0, jobs=1): err= 0: pid=1698466: Wed Oct 9 10:50:43 2024 00:10:23.611 read: IOPS=1262, BW=5051KiB/s (5172kB/s)(5056KiB/1001msec) 00:10:23.611 slat (nsec): min=6159, max=60672, avg=25404.11, stdev=6546.87 00:10:23.611 clat (usec): min=150, max=930, avg=499.03, stdev=134.34 00:10:23.611 lat (usec): min=156, max=956, avg=524.44, stdev=135.15 00:10:23.611 clat percentiles (usec): 00:10:23.611 | 1.00th=[ 225], 5.00th=[ 297], 10.00th=[ 322], 20.00th=[ 371], 00:10:23.611 | 30.00th=[ 429], 40.00th=[ 465], 50.00th=[ 494], 60.00th=[ 545], 00:10:23.611 | 70.00th=[ 578], 80.00th=[ 611], 90.00th=[ 668], 95.00th=[ 709], 00:10:23.611 | 99.00th=[ 816], 99.50th=[ 873], 99.90th=[ 922], 99.95th=[ 930], 00:10:23.611 | 99.99th=[ 930] 00:10:23.611 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:10:23.611 slat (nsec): min=8665, max=59183, avg=19685.95, stdev=11812.26 00:10:23.611 clat (usec): min=83, max=616, avg=188.15, stdev=101.89 00:10:23.611 lat (usec): min=92, max=662, avg=207.84, stdev=110.27 00:10:23.611 clat percentiles (usec): 00:10:23.611 | 1.00th=[ 91], 5.00th=[ 96], 10.00th=[ 99], 20.00th=[ 106], 00:10:23.611 | 30.00th=[ 115], 40.00th=[ 120], 50.00th=[ 
133], 60.00th=[ 200], 00:10:23.611 | 70.00th=[ 225], 80.00th=[ 255], 90.00th=[ 330], 95.00th=[ 392], 00:10:23.611 | 99.00th=[ 553], 99.50th=[ 578], 99.90th=[ 611], 99.95th=[ 619], 00:10:23.611 | 99.99th=[ 619] 00:10:23.611 bw ( KiB/s): min= 7520, max= 7520, per=62.67%, avg=7520.00, stdev= 0.00, samples=1 00:10:23.611 iops : min= 1880, max= 1880, avg=1880.00, stdev= 0.00, samples=1 00:10:23.611 lat (usec) : 100=6.36%, 250=38.04%, 500=32.57%, 750=21.64%, 1000=1.39% 00:10:23.611 cpu : usr=4.20%, sys=8.90%, ctx=2800, majf=0, minf=2 00:10:23.611 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:23.611 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:23.611 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:23.611 issued rwts: total=1264,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:23.611 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:23.611 job2: (groupid=0, jobs=1): err= 0: pid=1698472: Wed Oct 9 10:50:43 2024 00:10:23.611 read: IOPS=16, BW=67.9KiB/s (69.6kB/s)(68.0KiB/1001msec) 00:10:23.611 slat (nsec): min=24640, max=25475, avg=24975.53, stdev=211.99 00:10:23.611 clat (usec): min=1114, max=43031, avg=39554.85, stdev=9918.83 00:10:23.611 lat (usec): min=1139, max=43056, avg=39579.83, stdev=9918.82 00:10:23.611 clat percentiles (usec): 00:10:23.611 | 1.00th=[ 1106], 5.00th=[ 1106], 10.00th=[41157], 20.00th=[41681], 00:10:23.611 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:10:23.611 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[43254], 00:10:23.611 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:10:23.611 | 99.99th=[43254] 00:10:23.611 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:10:23.611 slat (nsec): min=9534, max=51967, avg=28736.24, stdev=7808.62 00:10:23.611 clat (usec): min=208, max=965, avg=604.62, stdev=122.08 00:10:23.611 lat (usec): min=220, max=996, avg=633.36, stdev=124.48 00:10:23.611 clat percentiles (usec): 00:10:23.611 | 1.00th=[ 310], 5.00th=[ 392], 10.00th=[ 453], 20.00th=[ 498], 00:10:23.611 | 30.00th=[ 553], 40.00th=[ 578], 50.00th=[ 611], 60.00th=[ 635], 00:10:23.611 | 70.00th=[ 668], 80.00th=[ 701], 90.00th=[ 758], 95.00th=[ 807], 00:10:23.611 | 99.00th=[ 857], 99.50th=[ 922], 99.90th=[ 963], 99.95th=[ 963], 00:10:23.611 | 99.99th=[ 963] 00:10:23.611 bw ( KiB/s): min= 4096, max= 4096, per=34.13%, avg=4096.00, stdev= 0.00, samples=1 00:10:23.611 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:23.611 lat (usec) : 250=0.19%, 500=19.28%, 750=66.92%, 1000=10.40% 00:10:23.611 lat (msec) : 2=0.19%, 50=3.02% 00:10:23.611 cpu : usr=0.90%, sys=1.30%, ctx=529, majf=0, minf=1 00:10:23.611 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:23.611 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:23.611 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:23.611 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:23.611 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:23.611 job3: (groupid=0, jobs=1): err= 0: pid=1698473: Wed Oct 9 10:50:43 2024 00:10:23.611 read: IOPS=16, BW=67.7KiB/s (69.3kB/s)(68.0KiB/1005msec) 00:10:23.611 slat (nsec): min=25294, max=26006, avg=25564.29, stdev=170.28 00:10:23.611 clat (usec): min=1088, max=42960, avg=39829.29, stdev=9998.01 00:10:23.611 lat (usec): min=1113, max=42985, avg=39854.85, stdev=9998.06 00:10:23.611 clat 
percentiles (usec): 00:10:23.611 | 1.00th=[ 1090], 5.00th=[ 1090], 10.00th=[41157], 20.00th=[41681], 00:10:23.612 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:10:23.612 | 70.00th=[42730], 80.00th=[42730], 90.00th=[42730], 95.00th=[42730], 00:10:23.612 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:10:23.612 | 99.99th=[42730] 00:10:23.612 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets 00:10:23.612 slat (nsec): min=9584, max=63362, avg=31966.74, stdev=6030.64 00:10:23.612 clat (usec): min=181, max=942, avg=599.26, stdev=135.60 00:10:23.612 lat (usec): min=192, max=974, avg=631.22, stdev=136.61 00:10:23.612 clat percentiles (usec): 00:10:23.612 | 1.00th=[ 297], 5.00th=[ 371], 10.00th=[ 408], 20.00th=[ 478], 00:10:23.612 | 30.00th=[ 523], 40.00th=[ 562], 50.00th=[ 611], 60.00th=[ 652], 00:10:23.612 | 70.00th=[ 693], 80.00th=[ 717], 90.00th=[ 766], 95.00th=[ 791], 00:10:23.612 | 99.00th=[ 873], 99.50th=[ 906], 99.90th=[ 938], 99.95th=[ 938], 00:10:23.612 | 99.99th=[ 938] 00:10:23.612 bw ( KiB/s): min= 4096, max= 4096, per=34.13%, avg=4096.00, stdev= 0.00, samples=1 00:10:23.612 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:23.612 lat (usec) : 250=0.19%, 500=23.82%, 750=60.68%, 1000=12.10% 00:10:23.612 lat (msec) : 2=0.19%, 50=3.02% 00:10:23.612 cpu : usr=0.90%, sys=1.49%, ctx=529, majf=0, minf=2 00:10:23.612 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:23.612 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:23.612 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:23.612 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:23.612 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:23.612 00:10:23.612 Run status group 0 (all jobs): 00:10:23.612 READ: bw=5152KiB/s (5276kB/s), 67.7KiB/s-5051KiB/s (69.3kB/s-5172kB/s), io=5276KiB (5403kB), run=1001-1024msec 00:10:23.612 WRITE: bw=11.7MiB/s (12.3MB/s), 2000KiB/s-6138KiB/s (2048kB/s-6285kB/s), io=12.0MiB (12.6MB), run=1001-1024msec 00:10:23.612 00:10:23.612 Disk stats (read/write): 00:10:23.612 nvme0n1: ios=41/512, merge=0/0, ticks=1553/152, in_queue=1705, util=96.59% 00:10:23.612 nvme0n2: ios=1055/1282, merge=0/0, ticks=445/181, in_queue=626, util=86.73% 00:10:23.612 nvme0n3: ios=13/512, merge=0/0, ticks=505/300, in_queue=805, util=88.48% 00:10:23.612 nvme0n4: ios=13/512, merge=0/0, ticks=508/282, in_queue=790, util=89.52% 00:10:23.612 10:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:23.612 [global] 00:10:23.612 thread=1 00:10:23.612 invalidate=1 00:10:23.612 rw=write 00:10:23.612 time_based=1 00:10:23.612 runtime=1 00:10:23.612 ioengine=libaio 00:10:23.612 direct=1 00:10:23.612 bs=4096 00:10:23.612 iodepth=128 00:10:23.612 norandommap=0 00:10:23.612 numjobs=1 00:10:23.612 00:10:23.612 verify_dump=1 00:10:23.612 verify_backlog=512 00:10:23.612 verify_state_save=0 00:10:23.612 do_verify=1 00:10:23.612 verify=crc32c-intel 00:10:23.612 [job0] 00:10:23.612 filename=/dev/nvme0n1 00:10:23.612 [job1] 00:10:23.612 filename=/dev/nvme0n2 00:10:23.612 [job2] 00:10:23.612 filename=/dev/nvme0n3 00:10:23.612 [job3] 00:10:23.612 filename=/dev/nvme0n4 00:10:23.612 Could not set queue depth (nvme0n1) 00:10:23.612 Could not set queue depth (nvme0n2) 00:10:23.612 Could not set queue depth (nvme0n3) 
00:10:23.612 Could not set queue depth (nvme0n4) 00:10:23.872 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:23.872 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:23.872 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:23.872 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:23.872 fio-3.35 00:10:23.872 Starting 4 threads 00:10:25.255 00:10:25.255 job0: (groupid=0, jobs=1): err= 0: pid=1698955: Wed Oct 9 10:50:45 2024 00:10:25.255 read: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec) 00:10:25.255 slat (nsec): min=892, max=23090k, avg=107916.12, stdev=895856.56 00:10:25.255 clat (usec): min=2378, max=46669, avg=14224.03, stdev=8164.98 00:10:25.255 lat (usec): min=2380, max=46695, avg=14331.95, stdev=8247.71 00:10:25.255 clat percentiles (usec): 00:10:25.255 | 1.00th=[ 3458], 5.00th=[ 4817], 10.00th=[ 5145], 20.00th=[ 5866], 00:10:25.255 | 30.00th=[ 7242], 40.00th=[11469], 50.00th=[14877], 60.00th=[16057], 00:10:25.255 | 70.00th=[17695], 80.00th=[19006], 90.00th=[24511], 95.00th=[30802], 00:10:25.255 | 99.00th=[39584], 99.50th=[41157], 99.90th=[41681], 99.95th=[42730], 00:10:25.255 | 99.99th=[46924] 00:10:25.255 write: IOPS=4766, BW=18.6MiB/s (19.5MB/s)(18.7MiB/1003msec); 0 zone resets 00:10:25.255 slat (nsec): min=1523, max=16936k, avg=95568.13, stdev=749542.17 00:10:25.255 clat (usec): min=978, max=59798, avg=12896.15, stdev=10173.88 00:10:25.255 lat (usec): min=988, max=59825, avg=12991.72, stdev=10241.58 00:10:25.255 clat percentiles (usec): 00:10:25.255 | 1.00th=[ 1876], 5.00th=[ 3261], 10.00th=[ 4686], 20.00th=[ 5276], 00:10:25.255 | 30.00th=[ 5866], 40.00th=[ 8029], 50.00th=[12387], 60.00th=[12780], 00:10:25.255 | 70.00th=[13698], 80.00th=[16057], 90.00th=[27919], 95.00th=[31589], 00:10:25.255 | 99.00th=[54789], 99.50th=[58983], 99.90th=[59507], 99.95th=[60031], 00:10:25.255 | 99.99th=[60031] 00:10:25.255 bw ( KiB/s): min=13232, max=24000, per=24.57%, avg=18616.00, stdev=7614.13, samples=2 00:10:25.255 iops : min= 3308, max= 6000, avg=4654.00, stdev=1903.53, samples=2 00:10:25.255 lat (usec) : 1000=0.05% 00:10:25.255 lat (msec) : 2=0.79%, 4=4.38%, 10=36.53%, 20=41.79%, 50=15.51% 00:10:25.255 lat (msec) : 100=0.95% 00:10:25.255 cpu : usr=2.89%, sys=4.89%, ctx=282, majf=0, minf=2 00:10:25.255 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:10:25.255 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:25.255 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:25.255 issued rwts: total=4608,4781,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:25.255 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:25.255 job1: (groupid=0, jobs=1): err= 0: pid=1698973: Wed Oct 9 10:50:45 2024 00:10:25.255 read: IOPS=9629, BW=37.6MiB/s (39.4MB/s)(37.8MiB/1006msec) 00:10:25.255 slat (nsec): min=982, max=19335k, avg=50816.43, stdev=463142.79 00:10:25.255 clat (usec): min=1579, max=41433, avg=7039.21, stdev=4711.66 00:10:25.255 lat (usec): min=1589, max=46999, avg=7090.02, stdev=4744.45 00:10:25.255 clat percentiles (usec): 00:10:25.255 | 1.00th=[ 2835], 5.00th=[ 4359], 10.00th=[ 4621], 20.00th=[ 5145], 00:10:25.255 | 30.00th=[ 5473], 40.00th=[ 5604], 50.00th=[ 5932], 60.00th=[ 6325], 00:10:25.255 | 70.00th=[ 6718], 80.00th=[ 7308], 90.00th=[ 8848], 95.00th=[10159], 
00:10:25.255 | 99.00th=[31065], 99.50th=[31851], 99.90th=[31851], 99.95th=[35390], 00:10:25.255 | 99.99th=[41681] 00:10:25.255 write: IOPS=9669, BW=37.8MiB/s (39.6MB/s)(38.0MiB/1006msec); 0 zone resets 00:10:25.255 slat (nsec): min=1666, max=24187k, avg=46072.66, stdev=503097.68 00:10:25.255 clat (usec): min=1183, max=45778, avg=6089.85, stdev=4344.17 00:10:25.255 lat (usec): min=1351, max=45792, avg=6135.93, stdev=4395.41 00:10:25.255 clat percentiles (usec): 00:10:25.255 | 1.00th=[ 2008], 5.00th=[ 2999], 10.00th=[ 3392], 20.00th=[ 4146], 00:10:25.255 | 30.00th=[ 4948], 40.00th=[ 5276], 50.00th=[ 5473], 60.00th=[ 5604], 00:10:25.255 | 70.00th=[ 5735], 80.00th=[ 5866], 90.00th=[ 7111], 95.00th=[21365], 00:10:25.255 | 99.00th=[25297], 99.50th=[25822], 99.90th=[25822], 99.95th=[31851], 00:10:25.255 | 99.99th=[45876] 00:10:25.255 bw ( KiB/s): min=32768, max=45056, per=51.35%, avg=38912.00, stdev=8688.93, samples=2 00:10:25.255 iops : min= 8192, max=11264, avg=9728.00, stdev=2172.23, samples=2 00:10:25.255 lat (msec) : 2=0.53%, 4=9.96%, 10=84.24%, 20=0.75%, 50=4.52% 00:10:25.255 cpu : usr=5.97%, sys=9.75%, ctx=756, majf=0, minf=1 00:10:25.255 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:10:25.255 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:25.255 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:25.255 issued rwts: total=9687,9728,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:25.256 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:25.256 job2: (groupid=0, jobs=1): err= 0: pid=1699000: Wed Oct 9 10:50:45 2024 00:10:25.256 read: IOPS=2029, BW=8119KiB/s (8314kB/s)(8192KiB/1009msec) 00:10:25.256 slat (nsec): min=1000, max=17030k, avg=180845.11, stdev=1177122.66 00:10:25.256 clat (usec): min=6176, max=57335, avg=19337.71, stdev=9088.28 00:10:25.256 lat (usec): min=6185, max=57341, avg=19518.56, stdev=9188.83 00:10:25.256 clat percentiles (usec): 00:10:25.256 | 1.00th=[10552], 5.00th=[10945], 10.00th=[11338], 20.00th=[13829], 00:10:25.256 | 30.00th=[14615], 40.00th=[15008], 50.00th=[15926], 60.00th=[16909], 00:10:25.256 | 70.00th=[19006], 80.00th=[22938], 90.00th=[31327], 95.00th=[40109], 00:10:25.256 | 99.00th=[52167], 99.50th=[55837], 99.90th=[57410], 99.95th=[57410], 00:10:25.256 | 99.99th=[57410] 00:10:25.256 write: IOPS=2535, BW=9.90MiB/s (10.4MB/s)(9.99MiB/1009msec); 0 zone resets 00:10:25.256 slat (nsec): min=1589, max=19779k, avg=240031.16, stdev=1058458.13 00:10:25.256 clat (usec): min=4688, max=58812, avg=34565.16, stdev=11765.06 00:10:25.256 lat (usec): min=4695, max=58821, avg=34805.19, stdev=11839.90 00:10:25.256 clat percentiles (usec): 00:10:25.256 | 1.00th=[ 6652], 5.00th=[13042], 10.00th=[19792], 20.00th=[24773], 00:10:25.256 | 30.00th=[26608], 40.00th=[29754], 50.00th=[37487], 60.00th=[40109], 00:10:25.256 | 70.00th=[42730], 80.00th=[45351], 90.00th=[48497], 95.00th=[51119], 00:10:25.256 | 99.00th=[54789], 99.50th=[56886], 99.90th=[58983], 99.95th=[58983], 00:10:25.256 | 99.99th=[58983] 00:10:25.256 bw ( KiB/s): min= 8328, max=11120, per=12.83%, avg=9724.00, stdev=1974.24, samples=2 00:10:25.256 iops : min= 2082, max= 2780, avg=2431.00, stdev=493.56, samples=2 00:10:25.256 lat (msec) : 10=1.61%, 20=36.32%, 50=57.36%, 100=4.71% 00:10:25.256 cpu : usr=1.79%, sys=2.78%, ctx=280, majf=0, minf=1 00:10:25.256 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:10:25.256 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:25.256 
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:25.256 issued rwts: total=2048,2558,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:25.256 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:25.256 job3: (groupid=0, jobs=1): err= 0: pid=1699003: Wed Oct 9 10:50:45 2024 00:10:25.256 read: IOPS=1990, BW=7960KiB/s (8151kB/s)(7992KiB/1004msec) 00:10:25.256 slat (nsec): min=948, max=19518k, avg=265336.83, stdev=1499431.36 00:10:25.256 clat (usec): min=3111, max=59401, avg=31855.42, stdev=8652.39 00:10:25.256 lat (usec): min=11071, max=59409, avg=32120.75, stdev=8766.39 00:10:25.256 clat percentiles (usec): 00:10:25.256 | 1.00th=[15401], 5.00th=[19268], 10.00th=[22414], 20.00th=[24249], 00:10:25.256 | 30.00th=[26870], 40.00th=[29492], 50.00th=[31327], 60.00th=[32900], 00:10:25.256 | 70.00th=[35914], 80.00th=[38536], 90.00th=[43254], 95.00th=[48497], 00:10:25.256 | 99.00th=[56361], 99.50th=[56361], 99.90th=[57934], 99.95th=[59507], 00:10:25.256 | 99.99th=[59507] 00:10:25.256 write: IOPS=2039, BW=8159KiB/s (8355kB/s)(8192KiB/1004msec); 0 zone resets 00:10:25.256 slat (nsec): min=1633, max=21357k, avg=203030.13, stdev=1145760.27 00:10:25.256 clat (usec): min=1219, max=71920, avg=31118.29, stdev=16158.25 00:10:25.256 lat (usec): min=1232, max=71938, avg=31321.32, stdev=16256.64 00:10:25.256 clat percentiles (usec): 00:10:25.256 | 1.00th=[ 4178], 5.00th=[ 7373], 10.00th=[ 9241], 20.00th=[18220], 00:10:25.256 | 30.00th=[21627], 40.00th=[25035], 50.00th=[26346], 60.00th=[34866], 00:10:25.256 | 70.00th=[40633], 80.00th=[45351], 90.00th=[52167], 95.00th=[60556], 00:10:25.256 | 99.00th=[70779], 99.50th=[71828], 99.90th=[71828], 99.95th=[71828], 00:10:25.256 | 99.99th=[71828] 00:10:25.256 bw ( KiB/s): min= 5392, max=10992, per=10.81%, avg=8192.00, stdev=3959.80, samples=2 00:10:25.256 iops : min= 1348, max= 2748, avg=2048.00, stdev=989.95, samples=2 00:10:25.256 lat (msec) : 2=0.05%, 4=0.32%, 10=5.54%, 20=10.38%, 50=75.73% 00:10:25.256 lat (msec) : 100=7.98% 00:10:25.256 cpu : usr=1.40%, sys=2.29%, ctx=261, majf=0, minf=2 00:10:25.256 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:10:25.256 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:25.256 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:25.256 issued rwts: total=1998,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:25.256 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:25.256 00:10:25.256 Run status group 0 (all jobs): 00:10:25.256 READ: bw=71.0MiB/s (74.5MB/s), 7960KiB/s-37.6MiB/s (8151kB/s-39.4MB/s), io=71.6MiB (75.1MB), run=1003-1009msec 00:10:25.256 WRITE: bw=74.0MiB/s (77.6MB/s), 8159KiB/s-37.8MiB/s (8355kB/s-39.6MB/s), io=74.7MiB (78.3MB), run=1003-1009msec 00:10:25.256 00:10:25.256 Disk stats (read/write): 00:10:25.256 nvme0n1: ios=3729/4096, merge=0/0, ticks=28301/28904, in_queue=57205, util=79.16% 00:10:25.256 nvme0n2: ios=7168/7221, merge=0/0, ticks=50719/44250, in_queue=94969, util=98.24% 00:10:25.256 nvme0n3: ios=1536/2015, merge=0/0, ticks=28817/66344, in_queue=95161, util=86.88% 00:10:25.256 nvme0n4: ios=1569/1839, merge=0/0, ticks=24968/31222, in_queue=56190, util=90.18% 00:10:25.256 10:50:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:25.256 [global] 00:10:25.256 thread=1 00:10:25.256 invalidate=1 00:10:25.256 rw=randwrite 00:10:25.256 time_based=1 
00:10:25.256 runtime=1 00:10:25.256 ioengine=libaio 00:10:25.256 direct=1 00:10:25.256 bs=4096 00:10:25.256 iodepth=128 00:10:25.256 norandommap=0 00:10:25.256 numjobs=1 00:10:25.256 00:10:25.256 verify_dump=1 00:10:25.256 verify_backlog=512 00:10:25.256 verify_state_save=0 00:10:25.256 do_verify=1 00:10:25.256 verify=crc32c-intel 00:10:25.256 [job0] 00:10:25.256 filename=/dev/nvme0n1 00:10:25.256 [job1] 00:10:25.256 filename=/dev/nvme0n2 00:10:25.256 [job2] 00:10:25.256 filename=/dev/nvme0n3 00:10:25.256 [job3] 00:10:25.256 filename=/dev/nvme0n4 00:10:25.256 Could not set queue depth (nvme0n1) 00:10:25.256 Could not set queue depth (nvme0n2) 00:10:25.256 Could not set queue depth (nvme0n3) 00:10:25.256 Could not set queue depth (nvme0n4) 00:10:25.517 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:25.517 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:25.517 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:25.517 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:25.517 fio-3.35 00:10:25.517 Starting 4 threads 00:10:26.898 00:10:26.898 job0: (groupid=0, jobs=1): err= 0: pid=1699468: Wed Oct 9 10:50:46 2024 00:10:26.898 read: IOPS=2537, BW=9.91MiB/s (10.4MB/s)(10.0MiB/1009msec) 00:10:26.898 slat (nsec): min=1022, max=21720k, avg=176285.74, stdev=1339525.23 00:10:26.898 clat (usec): min=1464, max=78710, avg=22978.50, stdev=14819.53 00:10:26.898 lat (usec): min=1468, max=78711, avg=23154.79, stdev=14942.00 00:10:26.898 clat percentiles (usec): 00:10:26.898 | 1.00th=[ 4686], 5.00th=[ 6980], 10.00th=[ 7635], 20.00th=[ 9896], 00:10:26.899 | 30.00th=[14222], 40.00th=[16450], 50.00th=[17433], 60.00th=[19530], 00:10:26.899 | 70.00th=[26870], 80.00th=[35914], 90.00th=[47449], 95.00th=[53740], 00:10:26.899 | 99.00th=[67634], 99.50th=[67634], 99.90th=[70779], 99.95th=[72877], 00:10:26.899 | 99.99th=[79168] 00:10:26.899 write: IOPS=2758, BW=10.8MiB/s (11.3MB/s)(10.9MiB/1009msec); 0 zone resets 00:10:26.899 slat (nsec): min=1591, max=13945k, avg=182408.08, stdev=905135.01 00:10:26.899 clat (usec): min=1187, max=73363, avg=24876.57, stdev=16889.55 00:10:26.899 lat (usec): min=1198, max=73370, avg=25058.97, stdev=17006.20 00:10:26.899 clat percentiles (usec): 00:10:26.899 | 1.00th=[ 3064], 5.00th=[ 5080], 10.00th=[ 5407], 20.00th=[11338], 00:10:26.899 | 30.00th=[12125], 40.00th=[16450], 50.00th=[22938], 60.00th=[26870], 00:10:26.899 | 70.00th=[30540], 80.00th=[36963], 90.00th=[50070], 95.00th=[61080], 00:10:26.899 | 99.00th=[69731], 99.50th=[72877], 99.90th=[72877], 99.95th=[72877], 00:10:26.899 | 99.99th=[72877] 00:10:26.899 bw ( KiB/s): min= 8960, max=12288, per=14.71%, avg=10624.00, stdev=2353.25, samples=2 00:10:26.899 iops : min= 2240, max= 3072, avg=2656.00, stdev=588.31, samples=2 00:10:26.899 lat (msec) : 2=0.52%, 4=0.30%, 10=18.57%, 20=32.68%, 50=38.50% 00:10:26.899 lat (msec) : 100=9.43% 00:10:26.899 cpu : usr=2.18%, sys=3.08%, ctx=268, majf=0, minf=1 00:10:26.899 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:10:26.899 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.899 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:26.899 issued rwts: total=2560,2783,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:26.899 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:10:26.899 job1: (groupid=0, jobs=1): err= 0: pid=1699484: Wed Oct 9 10:50:46 2024 00:10:26.899 read: IOPS=8686, BW=33.9MiB/s (35.6MB/s)(34.0MiB/1002msec) 00:10:26.899 slat (nsec): min=931, max=15783k, avg=62712.77, stdev=473166.61 00:10:26.899 clat (usec): min=3688, max=43888, avg=7909.05, stdev=5608.89 00:10:26.899 lat (usec): min=3709, max=43912, avg=7971.77, stdev=5657.35 00:10:26.899 clat percentiles (usec): 00:10:26.899 | 1.00th=[ 4228], 5.00th=[ 5080], 10.00th=[ 5669], 20.00th=[ 5997], 00:10:26.899 | 30.00th=[ 6128], 40.00th=[ 6259], 50.00th=[ 6325], 60.00th=[ 6456], 00:10:26.899 | 70.00th=[ 6652], 80.00th=[ 7111], 90.00th=[ 8979], 95.00th=[23462], 00:10:26.899 | 99.00th=[36963], 99.50th=[36963], 99.90th=[36963], 99.95th=[36963], 00:10:26.899 | 99.99th=[43779] 00:10:26.899 write: IOPS=8952, BW=35.0MiB/s (36.7MB/s)(35.0MiB/1002msec); 0 zone resets 00:10:26.899 slat (nsec): min=1546, max=12565k, avg=45376.06, stdev=287306.61 00:10:26.899 clat (usec): min=692, max=32131, avg=6460.20, stdev=2687.27 00:10:26.899 lat (usec): min=2500, max=32153, avg=6505.58, stdev=2705.12 00:10:26.899 clat percentiles (usec): 00:10:26.899 | 1.00th=[ 3458], 5.00th=[ 4146], 10.00th=[ 4752], 20.00th=[ 5407], 00:10:26.899 | 30.00th=[ 5669], 40.00th=[ 5735], 50.00th=[ 5866], 60.00th=[ 5997], 00:10:26.899 | 70.00th=[ 6259], 80.00th=[ 6652], 90.00th=[ 8029], 95.00th=[10552], 00:10:26.899 | 99.00th=[17957], 99.50th=[25297], 99.90th=[25560], 99.95th=[25560], 00:10:26.899 | 99.99th=[32113] 00:10:26.899 bw ( KiB/s): min=28672, max=42064, per=48.98%, avg=35368.00, stdev=9469.57, samples=2 00:10:26.899 iops : min= 7168, max=10516, avg=8842.00, stdev=2367.39, samples=2 00:10:26.899 lat (usec) : 750=0.01% 00:10:26.899 lat (msec) : 4=2.31%, 10=90.61%, 20=3.99%, 50=3.08% 00:10:26.899 cpu : usr=5.19%, sys=8.89%, ctx=923, majf=0, minf=1 00:10:26.899 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:10:26.899 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.899 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:26.899 issued rwts: total=8704,8970,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:26.899 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:26.899 job2: (groupid=0, jobs=1): err= 0: pid=1699503: Wed Oct 9 10:50:46 2024 00:10:26.899 read: IOPS=3538, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1013msec) 00:10:26.899 slat (nsec): min=923, max=43761k, avg=136384.06, stdev=1127443.31 00:10:26.899 clat (usec): min=7421, max=59833, avg=16648.86, stdev=9002.49 00:10:26.899 lat (usec): min=7427, max=59839, avg=16785.24, stdev=9063.62 00:10:26.899 clat percentiles (usec): 00:10:26.899 | 1.00th=[ 8094], 5.00th=[ 9896], 10.00th=[11076], 20.00th=[11731], 00:10:26.899 | 30.00th=[12649], 40.00th=[13435], 50.00th=[14222], 60.00th=[14746], 00:10:26.899 | 70.00th=[15664], 80.00th=[19006], 90.00th=[24249], 95.00th=[35390], 00:10:26.899 | 99.00th=[54789], 99.50th=[56886], 99.90th=[60031], 99.95th=[60031], 00:10:26.899 | 99.99th=[60031] 00:10:26.899 write: IOPS=3844, BW=15.0MiB/s (15.7MB/s)(15.2MiB/1013msec); 0 zone resets 00:10:26.899 slat (nsec): min=1550, max=10325k, avg=118533.03, stdev=618890.51 00:10:26.899 clat (usec): min=717, max=49005, avg=17688.32, stdev=11178.08 00:10:26.899 lat (usec): min=725, max=49012, avg=17806.86, stdev=11253.05 00:10:26.899 clat percentiles (usec): 00:10:26.899 | 1.00th=[ 3458], 5.00th=[ 6194], 10.00th=[ 6718], 20.00th=[ 9241], 00:10:26.899 | 30.00th=[10028], 40.00th=[11469], 
50.00th=[13173], 60.00th=[16319], 00:10:26.899 | 70.00th=[20055], 80.00th=[30016], 90.00th=[34866], 95.00th=[38536], 00:10:26.899 | 99.00th=[46924], 99.50th=[47973], 99.90th=[49021], 99.95th=[49021], 00:10:26.899 | 99.99th=[49021] 00:10:26.899 bw ( KiB/s): min=11368, max=18768, per=20.87%, avg=15068.00, stdev=5232.59, samples=2 00:10:26.899 iops : min= 2842, max= 4692, avg=3767.00, stdev=1308.15, samples=2 00:10:26.899 lat (usec) : 750=0.04% 00:10:26.899 lat (msec) : 2=0.19%, 4=0.40%, 10=17.61%, 20=58.57%, 50=21.50% 00:10:26.899 lat (msec) : 100=1.68% 00:10:26.899 cpu : usr=2.57%, sys=4.05%, ctx=386, majf=0, minf=1 00:10:26.899 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:26.899 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.899 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:26.899 issued rwts: total=3584,3894,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:26.899 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:26.899 job3: (groupid=0, jobs=1): err= 0: pid=1699512: Wed Oct 9 10:50:46 2024 00:10:26.899 read: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec) 00:10:26.899 slat (usec): min=2, max=20496, avg=209.16, stdev=1251.73 00:10:26.899 clat (msec): min=4, max=102, avg=24.88, stdev=18.57 00:10:26.899 lat (msec): min=4, max=102, avg=25.09, stdev=18.70 00:10:26.899 clat percentiles (msec): 00:10:26.899 | 1.00th=[ 6], 5.00th=[ 7], 10.00th=[ 7], 20.00th=[ 8], 00:10:26.899 | 30.00th=[ 9], 40.00th=[ 14], 50.00th=[ 21], 60.00th=[ 28], 00:10:26.899 | 70.00th=[ 35], 80.00th=[ 39], 90.00th=[ 47], 95.00th=[ 64], 00:10:26.899 | 99.00th=[ 99], 99.50th=[ 99], 99.90th=[ 103], 99.95th=[ 103], 00:10:26.899 | 99.99th=[ 103] 00:10:26.899 write: IOPS=2628, BW=10.3MiB/s (10.8MB/s)(10.3MiB/1004msec); 0 zone resets 00:10:26.899 slat (nsec): min=1671, max=22643k, avg=166312.92, stdev=1231752.90 00:10:26.899 clat (usec): min=1445, max=101396, avg=23557.60, stdev=21279.95 00:10:26.899 lat (usec): min=1460, max=101403, avg=23723.91, stdev=21374.25 00:10:26.899 clat percentiles (msec): 00:10:26.899 | 1.00th=[ 4], 5.00th=[ 6], 10.00th=[ 7], 20.00th=[ 7], 00:10:26.899 | 30.00th=[ 8], 40.00th=[ 12], 50.00th=[ 18], 60.00th=[ 19], 00:10:26.899 | 70.00th=[ 28], 80.00th=[ 41], 90.00th=[ 57], 95.00th=[ 69], 00:10:26.899 | 99.00th=[ 97], 99.50th=[ 97], 99.90th=[ 102], 99.95th=[ 102], 00:10:26.899 | 99.99th=[ 102] 00:10:26.899 bw ( KiB/s): min= 6720, max=13760, per=14.18%, avg=10240.00, stdev=4978.03, samples=2 00:10:26.899 iops : min= 1680, max= 3440, avg=2560.00, stdev=1244.51, samples=2 00:10:26.899 lat (msec) : 2=0.19%, 4=1.40%, 10=33.89%, 20=20.31%, 50=33.22% 00:10:26.899 lat (msec) : 100=10.79%, 250=0.19% 00:10:26.899 cpu : usr=2.19%, sys=3.09%, ctx=198, majf=0, minf=1 00:10:26.899 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:10:26.899 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.899 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:26.899 issued rwts: total=2560,2639,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:26.899 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:26.899 00:10:26.899 Run status group 0 (all jobs): 00:10:26.899 READ: bw=67.1MiB/s (70.4MB/s), 9.91MiB/s-33.9MiB/s (10.4MB/s-35.6MB/s), io=68.0MiB (71.3MB), run=1002-1013msec 00:10:26.899 WRITE: bw=70.5MiB/s (73.9MB/s), 10.3MiB/s-35.0MiB/s (10.8MB/s-36.7MB/s), io=71.4MiB (74.9MB), run=1002-1013msec 00:10:26.899 00:10:26.899 Disk stats 
(read/write): 00:10:26.899 nvme0n1: ios=2098/2415, merge=0/0, ticks=26447/41567, in_queue=68014, util=96.09% 00:10:26.899 nvme0n2: ios=7024/7168, merge=0/0, ticks=28379/21813, in_queue=50192, util=98.67% 00:10:26.899 nvme0n3: ios=3072/3271, merge=0/0, ticks=40802/47905, in_queue=88707, util=87.96% 00:10:26.899 nvme0n4: ios=2086/2547, merge=0/0, ticks=18462/20229, in_queue=38691, util=100.00% 00:10:26.899 10:50:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:26.899 10:50:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1699570 00:10:26.899 10:50:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:26.899 10:50:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:26.899 [global] 00:10:26.899 thread=1 00:10:26.899 invalidate=1 00:10:26.899 rw=read 00:10:26.899 time_based=1 00:10:26.899 runtime=10 00:10:26.899 ioengine=libaio 00:10:26.899 direct=1 00:10:26.899 bs=4096 00:10:26.899 iodepth=1 00:10:26.899 norandommap=1 00:10:26.899 numjobs=1 00:10:26.899 00:10:26.899 [job0] 00:10:26.899 filename=/dev/nvme0n1 00:10:26.899 [job1] 00:10:26.899 filename=/dev/nvme0n2 00:10:26.899 [job2] 00:10:26.899 filename=/dev/nvme0n3 00:10:26.899 [job3] 00:10:26.899 filename=/dev/nvme0n4 00:10:26.899 Could not set queue depth (nvme0n1) 00:10:26.899 Could not set queue depth (nvme0n2) 00:10:26.899 Could not set queue depth (nvme0n3) 00:10:26.899 Could not set queue depth (nvme0n4) 00:10:27.467 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:27.467 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:27.467 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:27.467 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:27.467 fio-3.35 00:10:27.467 Starting 4 threads 00:10:30.009 10:50:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:30.009 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=3203072, buflen=4096 00:10:30.009 fio: pid=1699993, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:30.009 10:50:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:30.269 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=11190272, buflen=4096 00:10:30.269 fio: pid=1699986, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:30.269 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:30.269 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:30.529 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=290816, buflen=4096 00:10:30.529 fio: pid=1699951, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:30.529 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for 
malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:30.529 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:30.529 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=11468800, buflen=4096 00:10:30.529 fio: pid=1699966, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:30.529 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:30.529 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:30.529 00:10:30.529 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1699951: Wed Oct 9 10:50:50 2024 00:10:30.529 read: IOPS=24, BW=95.7KiB/s (98.0kB/s)(284KiB/2967msec) 00:10:30.529 slat (usec): min=24, max=5625, avg=105.83, stdev=660.13 00:10:30.529 clat (usec): min=907, max=43055, avg=41369.29, stdev=4884.08 00:10:30.529 lat (usec): min=947, max=43080, avg=41397.38, stdev=4882.69 00:10:30.529 clat percentiles (usec): 00:10:30.530 | 1.00th=[ 906], 5.00th=[41157], 10.00th=[41681], 20.00th=[41681], 00:10:30.530 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:10:30.530 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42730], 00:10:30.530 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:10:30.530 | 99.99th=[43254] 00:10:30.530 bw ( KiB/s): min= 96, max= 96, per=1.18%, avg=96.00, stdev= 0.00, samples=5 00:10:30.530 iops : min= 24, max= 24, avg=24.00, stdev= 0.00, samples=5 00:10:30.530 lat (usec) : 1000=1.39% 00:10:30.530 lat (msec) : 50=97.22% 00:10:30.530 cpu : usr=0.10%, sys=0.00%, ctx=75, majf=0, minf=1 00:10:30.530 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:30.530 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.530 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.530 issued rwts: total=72,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:30.530 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:30.530 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1699966: Wed Oct 9 10:50:50 2024 00:10:30.530 read: IOPS=893, BW=3571KiB/s (3657kB/s)(10.9MiB/3136msec) 00:10:30.530 slat (usec): min=6, max=21608, avg=49.17, stdev=616.11 00:10:30.530 clat (usec): min=198, max=42275, avg=1056.34, stdev=2320.61 00:10:30.530 lat (usec): min=225, max=56952, avg=1105.52, stdev=2492.02 00:10:30.530 clat percentiles (usec): 00:10:30.530 | 1.00th=[ 553], 5.00th=[ 652], 10.00th=[ 709], 20.00th=[ 824], 00:10:30.530 | 30.00th=[ 898], 40.00th=[ 938], 50.00th=[ 963], 60.00th=[ 988], 00:10:30.530 | 70.00th=[ 1004], 80.00th=[ 1029], 90.00th=[ 1057], 95.00th=[ 1090], 00:10:30.530 | 99.00th=[ 1156], 99.50th=[ 1188], 99.90th=[42206], 99.95th=[42206], 00:10:30.530 | 99.99th=[42206] 00:10:30.530 bw ( KiB/s): min= 1262, max= 4272, per=45.37%, avg=3695.67, stdev=1193.51, samples=6 00:10:30.530 iops : min= 315, max= 1068, avg=923.83, stdev=298.58, samples=6 00:10:30.530 lat (usec) : 250=0.04%, 500=0.54%, 750=13.92%, 1000=52.98% 00:10:30.530 lat (msec) : 2=32.17%, 50=0.32% 00:10:30.530 cpu : usr=1.88%, sys=3.29%, ctx=2806, majf=0, minf=2 00:10:30.530 IO 
depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:30.530 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.530 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.530 issued rwts: total=2801,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:30.530 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:30.530 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1699986: Wed Oct 9 10:50:50 2024 00:10:30.530 read: IOPS=984, BW=3935KiB/s (4030kB/s)(10.7MiB/2777msec) 00:10:30.530 slat (usec): min=7, max=22674, avg=39.28, stdev=517.83 00:10:30.530 clat (usec): min=404, max=42444, avg=961.23, stdev=799.03 00:10:30.530 lat (usec): min=430, max=57305, avg=1000.52, stdev=1165.64 00:10:30.530 clat percentiles (usec): 00:10:30.530 | 1.00th=[ 693], 5.00th=[ 783], 10.00th=[ 832], 20.00th=[ 881], 00:10:30.530 | 30.00th=[ 906], 40.00th=[ 938], 50.00th=[ 963], 60.00th=[ 979], 00:10:30.530 | 70.00th=[ 996], 80.00th=[ 1020], 90.00th=[ 1045], 95.00th=[ 1074], 00:10:30.530 | 99.00th=[ 1139], 99.50th=[ 1172], 99.90th=[ 1237], 99.95th=[ 1287], 00:10:30.530 | 99.99th=[42206] 00:10:30.530 bw ( KiB/s): min= 4064, max= 4128, per=50.29%, avg=4096.00, stdev=23.32, samples=5 00:10:30.530 iops : min= 1016, max= 1032, avg=1024.00, stdev= 5.83, samples=5 00:10:30.530 lat (usec) : 500=0.04%, 750=2.49%, 1000=67.95% 00:10:30.530 lat (msec) : 2=29.45%, 50=0.04% 00:10:30.530 cpu : usr=1.04%, sys=2.99%, ctx=2736, majf=0, minf=1 00:10:30.530 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:30.530 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.530 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.530 issued rwts: total=2733,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:30.530 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:30.530 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1699993: Wed Oct 9 10:50:50 2024 00:10:30.530 read: IOPS=302, BW=1209KiB/s (1238kB/s)(3128KiB/2588msec) 00:10:30.530 slat (nsec): min=7656, max=58567, avg=26021.84, stdev=2841.36 00:10:30.530 clat (usec): min=400, max=42265, avg=3245.82, stdev=9266.45 00:10:30.530 lat (usec): min=430, max=42309, avg=3271.84, stdev=9266.67 00:10:30.530 clat percentiles (usec): 00:10:30.530 | 1.00th=[ 750], 5.00th=[ 816], 10.00th=[ 873], 20.00th=[ 922], 00:10:30.530 | 30.00th=[ 947], 40.00th=[ 971], 50.00th=[ 996], 60.00th=[ 1020], 00:10:30.530 | 70.00th=[ 1057], 80.00th=[ 1090], 90.00th=[ 1139], 95.00th=[41157], 00:10:30.530 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:30.530 | 99.99th=[42206] 00:10:30.530 bw ( KiB/s): min= 96, max= 3872, per=15.32%, avg=1248.00, stdev=1646.38, samples=5 00:10:30.530 iops : min= 24, max= 968, avg=312.00, stdev=411.59, samples=5 00:10:30.530 lat (usec) : 500=0.26%, 750=0.64%, 1000=51.34% 00:10:30.530 lat (msec) : 2=42.02%, 50=5.62% 00:10:30.530 cpu : usr=0.23%, sys=1.01%, ctx=783, majf=0, minf=2 00:10:30.530 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:30.530 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.530 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.530 issued rwts: total=783,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:30.530 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:30.530 
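The err=95 (Operation not supported) status on every job above is the expected outcome of this phase rather than a failure: target/fio.sh starts a 10-second read job in the background, waits for I/O to reach the namespaces, then deletes the backing bdevs over RPC while fio is still running. A rough sketch of that hotplug sequence, reconstructed from the trace — the bdev name lists are placeholders here (the real script records them when it creates the bdevs earlier in the test), and full paths are abbreviated:

  malloc_bdevs="Malloc0 Malloc1"             # placeholder lists; populated during
  raid_malloc_bdevs="Malloc2 Malloc3"        # bdev creation in the actual script
  concat_malloc_bdevs="Malloc4 Malloc5"
  spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &  # 10 s of reads
  fio_pid=$!
  fio_status=0
  sleep 3                                    # let fio ramp up against the namespaces
  spdk/scripts/rpc.py bdev_raid_delete concat0   # hot-remove the raid bdevs first
  spdk/scripts/rpc.py bdev_raid_delete raid0
  for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs; do
      spdk/scripts/rpc.py bdev_malloc_delete "$malloc_bdev"      # then each malloc bdev
  done
  wait "$fio_pid" || fio_status=$?           # nonzero exit expected: fio aborts with err=95

The check further down the log ('[' 4 -eq 0 ']' failing, followed by "nvmf hotplug test: fio failed as expected") confirms that a clean fio exit at this point would actually have been the error.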
00:10:30.530 Run status group 0 (all jobs): 00:10:30.530 READ: bw=8144KiB/s (8340kB/s), 95.7KiB/s-3935KiB/s (98.0kB/s-4030kB/s), io=24.9MiB (26.2MB), run=2588-3136msec 00:10:30.530 00:10:30.530 Disk stats (read/write): 00:10:30.530 nvme0n1: ios=68/0, merge=0/0, ticks=2815/0, in_queue=2815, util=94.76% 00:10:30.530 nvme0n2: ios=2798/0, merge=0/0, ticks=2668/0, in_queue=2668, util=94.11% 00:10:30.530 nvme0n3: ios=2646/0, merge=0/0, ticks=2482/0, in_queue=2482, util=95.99% 00:10:30.530 nvme0n4: ios=783/0, merge=0/0, ticks=2537/0, in_queue=2537, util=96.09% 00:10:30.789 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:30.790 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:31.049 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:31.049 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:31.049 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:31.049 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:31.309 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:31.309 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:31.568 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:31.568 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 1699570 00:10:31.568 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:31.568 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:31.568 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:31.568 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:31.568 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:10:31.568 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:31.568 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:31.568 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:31.568 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:31.568 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:10:31.568 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:31.568 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio 
failed as expected' 00:10:31.568 nvmf hotplug test: fio failed as expected 00:10:31.568 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:31.828 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:31.828 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:31.828 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:31.828 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:31.828 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:31.828 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:10:31.828 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:10:31.828 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:31.828 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:10:31.828 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:31.828 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:31.828 rmmod nvme_tcp 00:10:31.828 rmmod nvme_fabrics 00:10:31.828 rmmod nvme_keyring 00:10:31.828 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:31.828 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:10:31.828 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:10:31.828 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@515 -- # '[' -n 1696030 ']' 00:10:31.828 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # killprocess 1696030 00:10:31.828 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 1696030 ']' 00:10:31.828 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 1696030 00:10:31.828 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:10:31.828 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:31.828 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1696030 00:10:32.088 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:32.088 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:32.088 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1696030' 00:10:32.088 killing process with pid 1696030 00:10:32.089 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 1696030 00:10:32.089 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 1696030 00:10:32.089 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:32.089 10:50:51 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:10:32.089 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:10:32.089 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:10:32.089 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-restore 00:10:32.089 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-save 00:10:32.089 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:10:32.089 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:32.089 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:32.089 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:32.089 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:32.089 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:34.632 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:34.632 00:10:34.632 real 0m29.375s 00:10:34.632 user 2m34.826s 00:10:34.632 sys 0m9.749s 00:10:34.632 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:34.632 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:34.632 ************************************ 00:10:34.632 END TEST nvmf_fio_target 00:10:34.632 ************************************ 00:10:34.632 10:50:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:34.632 10:50:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:34.632 10:50:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:34.632 10:50:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:34.632 ************************************ 00:10:34.632 START TEST nvmf_bdevio 00:10:34.632 ************************************ 00:10:34.632 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:34.632 * Looking for test storage... 
00:10:34.632 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:34.632 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:34.632 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:10:34.632 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:34.632 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:34.632 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:34.632 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:34.632 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:34.632 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:10:34.632 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:10:34.632 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:10:34.632 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:10:34.632 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:10:34.632 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:10:34.632 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:10:34.632 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:34.632 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:10:34.632 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:10:34.632 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:34.632 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:34.632 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:10:34.632 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:10:34.632 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:34.632 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:10:34.632 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:10:34.632 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:10:34.632 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:10:34.632 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:34.632 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:10:34.632 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:10:34.632 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:34.632 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:34.633 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:10:34.633 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:34.633 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:34.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.633 --rc genhtml_branch_coverage=1 00:10:34.633 --rc genhtml_function_coverage=1 00:10:34.633 --rc genhtml_legend=1 00:10:34.633 --rc geninfo_all_blocks=1 00:10:34.633 --rc geninfo_unexecuted_blocks=1 00:10:34.633 00:10:34.633 ' 00:10:34.633 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:34.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.633 --rc genhtml_branch_coverage=1 00:10:34.633 --rc genhtml_function_coverage=1 00:10:34.633 --rc genhtml_legend=1 00:10:34.633 --rc geninfo_all_blocks=1 00:10:34.633 --rc geninfo_unexecuted_blocks=1 00:10:34.633 00:10:34.633 ' 00:10:34.633 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:34.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.633 --rc genhtml_branch_coverage=1 00:10:34.633 --rc genhtml_function_coverage=1 00:10:34.633 --rc genhtml_legend=1 00:10:34.633 --rc geninfo_all_blocks=1 00:10:34.633 --rc geninfo_unexecuted_blocks=1 00:10:34.633 00:10:34.633 ' 00:10:34.633 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:34.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.633 --rc genhtml_branch_coverage=1 00:10:34.633 --rc genhtml_function_coverage=1 00:10:34.633 --rc genhtml_legend=1 00:10:34.633 --rc geninfo_all_blocks=1 00:10:34.633 --rc geninfo_unexecuted_blocks=1 00:10:34.633 00:10:34.633 ' 00:10:34.633 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:34.633 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:34.633 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:34.633 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:34.633 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:34.633 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:34.633 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:34.633 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:34.633 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:34.633 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:34.633 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:34.633 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:34.633 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:34.633 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:34.633 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:34.633 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:34.633 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:34.633 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:34.633 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:34.633 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:10:34.633 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:34.633 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:34.633 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:34.633 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.633 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.633 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.633 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:34.633 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.633 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:10:34.633 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:34.633 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:34.633 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:34.633 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:34.633 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:34.633 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:34.633 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:34.633 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:34.633 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:34.633 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:34.633 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:34.633 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:34.633 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:10:34.633 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:10:34.633 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:34.633 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:34.633 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:34.633 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:34.633 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:34.633 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:34.633 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:34.633 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:10:34.633 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:10:34.633 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:10:34.633 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:41.213 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:41.213 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:10:41.213 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:41.213 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:41.213 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:41.213 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:41.213 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:41.213 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:10:41.213 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:41.213 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:10:41.213 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:10:41.213 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:10:41.213 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:10:41.213 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:10:41.213 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:10:41.213 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:41.213 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:41.213 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:41.213 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:41.213 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:41.213 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:41.213 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:41.213 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:41.213 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:41.213 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:41.213 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:41.213 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:41.213 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:41.213 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:41.213 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:41.213 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:41.213 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:41.213 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:41.213 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:41.213 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:41.213 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:41.213 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:41.213 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:41.213 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:41.213 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:41.213 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:41.213 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:41.213 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:41.213 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:41.213 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:41.213 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:41.213 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:41.213 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:41.213 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:41.213 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:41.213 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:41.213 10:51:01 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:41.213 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:41.213 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:41.213 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:41.213 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:41.213 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:41.213 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:41.213 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:41.213 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:41.213 Found net devices under 0000:31:00.0: cvl_0_0 00:10:41.213 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:41.213 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:41.213 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:41.213 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:41.213 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:41.213 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:41.213 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:41.213 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:41.213 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:41.213 Found net devices under 0000:31:00.1: cvl_0_1 00:10:41.213 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:41.213 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:41.213 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # is_hw=yes 00:10:41.213 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:41.213 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:10:41.213 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:10:41.213 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:41.213 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:41.213 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:41.213 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:41.213 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:41.213 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:41.213 
10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:41.214 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:41.214 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:41.214 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:41.214 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:41.214 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:41.214 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:41.214 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:41.214 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:41.214 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:41.214 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:41.214 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:41.214 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:41.474 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:41.474 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:41.474 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:41.474 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:41.474 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:41.474 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.678 ms 00:10:41.474 00:10:41.474 --- 10.0.0.2 ping statistics --- 00:10:41.474 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:41.474 rtt min/avg/max/mdev = 0.678/0.678/0.678/0.000 ms 00:10:41.474 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:41.474 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:41.474 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.273 ms 00:10:41.474 00:10:41.474 --- 10.0.0.1 ping statistics --- 00:10:41.474 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:41.474 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:10:41.474 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:41.474 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # return 0 00:10:41.474 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:41.474 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:41.474 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:10:41.474 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:10:41.474 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:41.474 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:10:41.474 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:10:41.474 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:41.474 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:10:41.474 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:41.474 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:41.474 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # nvmfpid=1705216 00:10:41.474 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # waitforlisten 1705216 00:10:41.474 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 1705216 ']' 00:10:41.474 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:41.474 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:41.474 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:41.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:41.474 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:41.474 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:41.474 10:51:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:41.474 [2024-10-09 10:51:01.420551] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:10:41.474 [2024-10-09 10:51:01.420645] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:41.735 [2024-10-09 10:51:01.564331] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:10:41.735 [2024-10-09 10:51:01.613550] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:41.735 [2024-10-09 10:51:01.633084] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:41.735 [2024-10-09 10:51:01.633121] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:41.735 [2024-10-09 10:51:01.633129] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:41.735 [2024-10-09 10:51:01.633136] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:41.735 [2024-10-09 10:51:01.633147] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:41.735 [2024-10-09 10:51:01.634760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:41.735 [2024-10-09 10:51:01.634890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:10:41.735 [2024-10-09 10:51:01.635042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:41.735 [2024-10-09 10:51:01.635044] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:10:42.306 10:51:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:42.306 10:51:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:10:42.306 10:51:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:10:42.306 10:51:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:42.306 10:51:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:42.306 10:51:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:42.306 10:51:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:42.306 10:51:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.306 10:51:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:42.306 [2024-10-09 10:51:02.294007] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:42.306 10:51:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.306 10:51:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:42.306 10:51:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.306 10:51:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:42.567 Malloc0 00:10:42.567 10:51:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.567 10:51:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:42.567 10:51:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.568 10:51:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:42.568 10:51:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.568 
10:51:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:42.568 10:51:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.568 10:51:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:42.568 10:51:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.568 10:51:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:42.568 10:51:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.568 10:51:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:42.568 [2024-10-09 10:51:02.370761] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:42.568 10:51:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.568 10:51:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:42.568 10:51:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:42.568 10:51:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # config=() 00:10:42.568 10:51:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # local subsystem config 00:10:42.568 10:51:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:10:42.568 10:51:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:10:42.568 { 00:10:42.568 "params": { 00:10:42.568 "name": "Nvme$subsystem", 00:10:42.568 "trtype": "$TEST_TRANSPORT", 00:10:42.568 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:42.568 "adrfam": "ipv4", 00:10:42.568 "trsvcid": "$NVMF_PORT", 00:10:42.568 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:42.568 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:42.568 "hdgst": ${hdgst:-false}, 00:10:42.568 "ddgst": ${ddgst:-false} 00:10:42.568 }, 00:10:42.568 "method": "bdev_nvme_attach_controller" 00:10:42.568 } 00:10:42.568 EOF 00:10:42.568 )") 00:10:42.568 10:51:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # cat 00:10:42.568 10:51:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # jq . 00:10:42.568 10:51:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@583 -- # IFS=, 00:10:42.568 10:51:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:10:42.568 "params": { 00:10:42.568 "name": "Nvme1", 00:10:42.568 "trtype": "tcp", 00:10:42.568 "traddr": "10.0.0.2", 00:10:42.568 "adrfam": "ipv4", 00:10:42.568 "trsvcid": "4420", 00:10:42.568 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:42.568 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:42.568 "hdgst": false, 00:10:42.568 "ddgst": false 00:10:42.568 }, 00:10:42.568 "method": "bdev_nvme_attach_controller" 00:10:42.568 }' 00:10:42.568 [2024-10-09 10:51:02.425372] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 
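The /dev/fd/62 in the bdevio invocation above is the telltale of bash process substitution: gen_nvmf_target_json renders one bdev_nvme_attach_controller stanza per subsystem (heredoc, then jq to join and pretty-print), and the test hands the result to bdevio as a pseudo-file instead of writing a temp config. A minimal re-sketch of the pattern, assuming for the sketch that the bare stanza printf'd above is accepted as-is; the real helper may wrap it in an envelope not visible in this trace:

    # Emit the attach-controller stanza as the trace shows it being printf'd,
    # then pass it via process substitution; <(...) expands to a /dev/fd/NN
    # path, matching the "--json /dev/fd/62" in the invocation above.
    gen_config() {
        printf '%s\n' '{
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }'
    }
    # From the repo root, as in the trace:
    test/bdev/bdevio/bdevio --json <(gen_config)
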
00:10:42.568 [2024-10-09 10:51:02.425442] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1705327 ] 00:10:42.568 [2024-10-09 10:51:02.560205] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:10:42.829 [2024-10-09 10:51:02.594664] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:42.829 [2024-10-09 10:51:02.620852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:42.829 [2024-10-09 10:51:02.620974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:42.829 [2024-10-09 10:51:02.620977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:43.089 I/O targets: 00:10:43.089 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:43.089 00:10:43.089 00:10:43.089 CUnit - A unit testing framework for C - Version 2.1-3 00:10:43.089 http://cunit.sourceforge.net/ 00:10:43.089 00:10:43.089 00:10:43.089 Suite: bdevio tests on: Nvme1n1 00:10:43.089 Test: blockdev write read block ...passed 00:10:43.089 Test: blockdev write zeroes read block ...passed 00:10:43.089 Test: blockdev write zeroes read no split ...passed 00:10:43.089 Test: blockdev write zeroes read split ...passed 00:10:43.089 Test: blockdev write zeroes read split partial ...passed 00:10:43.089 Test: blockdev reset ...[2024-10-09 10:51:03.078700] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:10:43.089 [2024-10-09 10:51:03.078770] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1186590 (9): Bad file descriptor 00:10:43.349 [2024-10-09 10:51:03.187462] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:43.349 passed 00:10:43.349 Test: blockdev write read 8 blocks ...passed 00:10:43.349 Test: blockdev write read size > 128k ...passed 00:10:43.349 Test: blockdev write read invalid size ...passed 00:10:43.349 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:43.349 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:43.349 Test: blockdev write read max offset ...passed 00:10:43.349 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:43.349 Test: blockdev writev readv 8 blocks ...passed 00:10:43.349 Test: blockdev writev readv 30 x 1block ...passed 00:10:43.614 Test: blockdev writev readv block ...passed 00:10:43.614 Test: blockdev writev readv size > 128k ...passed 00:10:43.614 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:43.614 Test: blockdev comparev and writev ...[2024-10-09 10:51:03.371065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:43.614 [2024-10-09 10:51:03.371090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:43.614 [2024-10-09 10:51:03.371102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:43.614 [2024-10-09 10:51:03.371111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:43.614 [2024-10-09 10:51:03.371618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:43.614 [2024-10-09 10:51:03.371626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:43.614 [2024-10-09 10:51:03.371636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:43.614 [2024-10-09 10:51:03.371641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:43.614 [2024-10-09 10:51:03.372135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:43.614 [2024-10-09 10:51:03.372142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:43.614 [2024-10-09 10:51:03.372152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:43.614 [2024-10-09 10:51:03.372157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:43.614 [2024-10-09 10:51:03.372625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:43.614 [2024-10-09 10:51:03.372633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:43.614 [2024-10-09 10:51:03.372642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:43.614 [2024-10-09 10:51:03.372647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:43.614 passed 00:10:43.614 Test: blockdev nvme passthru rw ...passed 00:10:43.614 Test: blockdev nvme passthru vendor specific ...[2024-10-09 10:51:03.457380] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:43.614 [2024-10-09 10:51:03.457390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:43.614 [2024-10-09 10:51:03.457723] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:43.614 [2024-10-09 10:51:03.457730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:43.614 [2024-10-09 10:51:03.458090] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:43.614 [2024-10-09 10:51:03.458097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:43.614 [2024-10-09 10:51:03.458431] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:43.614 [2024-10-09 10:51:03.458438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:43.614 passed 00:10:43.614 Test: blockdev nvme admin passthru ...passed 00:10:43.614 Test: blockdev copy ...passed 00:10:43.614 00:10:43.614 Run Summary: Type Total Ran Passed Failed Inactive 00:10:43.614 suites 1 1 n/a 0 0 00:10:43.614 tests 23 23 23 0 0 00:10:43.614 asserts 152 152 152 0 n/a 00:10:43.614 00:10:43.614 Elapsed time = 1.189 seconds 00:10:43.614 10:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:43.614 10:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.614 10:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:43.614 10:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.615 10:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:43.615 10:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:43.615 10:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@514 -- # nvmfcleanup 00:10:43.615 10:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:10:43.875 10:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:43.875 10:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:10:43.875 10:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:43.875 10:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:43.875 rmmod nvme_tcp 00:10:43.875 rmmod nvme_fabrics 00:10:43.875 rmmod nvme_keyring 00:10:43.875 10:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:43.875 10:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:10:43.875 10:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
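The run summary above says all 23 bdevio tests (152 asserts) passed in about 1.2 seconds, after which nvmftestfini unwinds the earlier setup: subsystem deletion over RPC, a sync, kernel module unload (the rmmod lines), and, in the lines that follow, process kill, iptables cleanup and address flush. A condensed sketch of that order, assuming scripts/rpc.py against the default /var/tmp/spdk.sock socket and using the pid, rule tag and interface names from this run; the netns deletion at the end is an assumption about what _remove_spdk_ns does, since its body is xtrace-hidden here:

    # Teardown, mirroring setup in reverse (names taken from this run's log).
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    sync
    modprobe -v -r nvme-tcp            # drags out nvme_fabrics/nvme_keyring too
    kill 1705216                       # killprocess checks kill -0 first, waits after
    # iptr keeps every rule except the ones tagged with the SPDK_NVMF comment:
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip -4 addr flush cvl_0_1
    ip netns delete cvl_0_0_ns_spdk    # assumed step inside _remove_spdk_ns
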
00:10:43.875 10:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@515 -- # '[' -n 1705216 ']' 00:10:43.875 10:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # killprocess 1705216 00:10:43.875 10:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 1705216 ']' 00:10:43.875 10:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 1705216 00:10:43.875 10:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:10:43.875 10:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:43.875 10:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1705216 00:10:43.875 10:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:10:43.875 10:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:10:43.875 10:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1705216' 00:10:43.875 killing process with pid 1705216 00:10:43.875 10:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 1705216 00:10:43.875 10:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 1705216 00:10:44.135 10:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:44.135 10:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:10:44.135 10:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:10:44.135 10:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:10:44.135 10:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-save 00:10:44.135 10:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:10:44.135 10:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-restore 00:10:44.135 10:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:44.135 10:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:44.135 10:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:44.135 10:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:44.135 10:51:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:46.045 10:51:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:46.045 00:10:46.045 real 0m11.839s 00:10:46.045 user 0m13.303s 00:10:46.045 sys 0m5.896s 00:10:46.045 10:51:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:46.045 10:51:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:46.045 ************************************ 00:10:46.045 END TEST nvmf_bdevio 00:10:46.045 ************************************ 00:10:46.045 10:51:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:46.045 00:10:46.045 real 5m4.038s 00:10:46.045 user 11m48.401s 00:10:46.045 sys 1m49.415s 
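That closes nvmf_bdevio at 11.8 s of wall time and, with it, the whole nvmf_target_core suite at 5m04s; the next lines enter nvmf_target_extra through the same run_test wrapper. run_test is what produces the START/END banners and the time(1)-style real/user/sys triplets throughout this log: it prints a banner, runs the suite under time, and prints the closing banner. A rough approximation in plain bash (the in-tree helper also validates its arguments, which is what the '[' 3 -le 1 ']' check below is, and juggles xtrace state):

    # run_test <name> <script> [args...]: banner, timed run, banner.
    # (The asterisk rows around each banner are elided for brevity.)
    run_test() {
        local name=$1; shift
        (($# >= 1)) || return 1        # mirrors the '[' 3 -le 1 ']' guard
        echo "START TEST $name"
        time "$@"                      # emits the real/user/sys lines
        echo "END TEST $name"
    }

    run_test nvmf_target_extra test/nvmf/nvmf_target_extra.sh --transport=tcp
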
00:10:46.045 10:51:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:46.046 10:51:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:46.046 ************************************ 00:10:46.046 END TEST nvmf_target_core 00:10:46.046 ************************************ 00:10:46.344 10:51:06 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:46.344 10:51:06 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:46.344 10:51:06 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:46.344 10:51:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:46.344 ************************************ 00:10:46.344 START TEST nvmf_target_extra 00:10:46.344 ************************************ 00:10:46.344 10:51:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:46.344 * Looking for test storage... 00:10:46.344 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:46.344 10:51:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:46.344 10:51:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lcov --version 00:10:46.344 10:51:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:46.344 10:51:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:46.344 10:51:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:46.344 10:51:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:46.344 10:51:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:46.344 10:51:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:46.344 10:51:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:46.344 10:51:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:46.344 10:51:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:46.344 10:51:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:46.344 10:51:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:46.344 10:51:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:46.344 10:51:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:46.344 10:51:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:10:46.344 10:51:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:46.344 10:51:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:46.344 10:51:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:46.344 10:51:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:46.344 10:51:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:46.344 10:51:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:46.344 10:51:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:46.344 10:51:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:46.344 10:51:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:46.344 10:51:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:46.344 10:51:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:46.344 10:51:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:46.344 10:51:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:46.344 10:51:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:46.344 10:51:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:46.344 10:51:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:46.344 10:51:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:46.344 10:51:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:46.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.344 --rc genhtml_branch_coverage=1 00:10:46.344 --rc genhtml_function_coverage=1 00:10:46.344 --rc genhtml_legend=1 00:10:46.344 --rc geninfo_all_blocks=1 00:10:46.344 --rc geninfo_unexecuted_blocks=1 00:10:46.344 00:10:46.344 ' 00:10:46.344 10:51:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:46.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.344 --rc genhtml_branch_coverage=1 00:10:46.344 --rc genhtml_function_coverage=1 00:10:46.344 --rc genhtml_legend=1 00:10:46.344 --rc geninfo_all_blocks=1 00:10:46.344 --rc geninfo_unexecuted_blocks=1 00:10:46.344 00:10:46.344 ' 00:10:46.344 10:51:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:46.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.344 --rc genhtml_branch_coverage=1 00:10:46.344 --rc genhtml_function_coverage=1 00:10:46.344 --rc genhtml_legend=1 00:10:46.344 --rc geninfo_all_blocks=1 00:10:46.344 --rc geninfo_unexecuted_blocks=1 00:10:46.344 00:10:46.344 ' 00:10:46.344 10:51:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:46.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.344 --rc genhtml_branch_coverage=1 00:10:46.344 --rc genhtml_function_coverage=1 00:10:46.344 --rc genhtml_legend=1 00:10:46.344 --rc geninfo_all_blocks=1 00:10:46.344 --rc geninfo_unexecuted_blocks=1 00:10:46.344 00:10:46.344 ' 00:10:46.344 10:51:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:46.345 10:51:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:46.345 10:51:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:46.345 10:51:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:46.345 10:51:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
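The lcov probe above runs through lt 1.15 2: cmp_versions splits both version strings on ".", "-" and ":" into arrays, iterates up to the longer length, and compares the components numerically, so 1.15 sorts before 2 even though "1.15" is greater than "2" as a string. A condensed sketch of just the less-than path (the in-tree version also handles the other operators and normalizes non-numeric components via its decimal helper):

    # lt A B: exit 0 iff version A is strictly older than version B.
    lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1    # equal is not less-than
    }

    lt 1.15 2 && echo "1.15 < 2"    # prints: 1.15 < 2
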
00:10:46.345 10:51:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:46.345 10:51:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:46.345 10:51:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:46.345 10:51:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:46.345 10:51:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:46.345 10:51:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:46.345 10:51:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:46.345 10:51:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:46.345 10:51:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:46.345 10:51:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:46.345 10:51:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:46.345 10:51:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:46.345 10:51:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:46.345 10:51:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:46.345 10:51:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:46.345 10:51:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:46.345 10:51:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:46.345 10:51:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:46.345 10:51:06 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.345 10:51:06 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.345 10:51:06 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.345 10:51:06 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:46.345 10:51:06 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.345 10:51:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:10:46.345 10:51:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:46.345 10:51:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:46.345 10:51:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:46.345 10:51:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:46.345 10:51:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:46.345 10:51:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:46.345 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:46.345 10:51:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:46.345 10:51:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:46.345 10:51:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:46.345 10:51:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:46.345 10:51:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:46.345 10:51:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:10:46.345 10:51:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:46.345 10:51:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:46.345 10:51:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:46.345 10:51:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:46.641 ************************************ 00:10:46.641 START TEST nvmf_example 00:10:46.641 ************************************ 00:10:46.641 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:46.641 * Looking for test storage... 
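One wart worth flagging: the "[: : integer expression expected" message (nvmf/common.sh line 33, repeated below when nvmf_example.sh sources the same file) comes from '[' '' -eq 1 ']': an unset variable reaches a numeric test as the empty string, [ cannot parse it as an integer, and execution falls through to the -n '' branch on the next line. Harmless here, but the standard guard is to default the operand before the comparison; a minimal illustration, with flag standing in for whatever variable common.sh actually tests at line 33 (its name is not visible in this trace):

    flag=""                   # unset/empty, as in the trace
    [ "$flag" -eq 1 ]         # stderr: [: : integer expression expected
    [ "${flag:-0}" -eq 1 ]    # guarded: empty defaults to 0, no error, test false
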
00:10:46.641 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:46.641 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:46.641 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lcov --version 00:10:46.641 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:46.641 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:46.641 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:46.641 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:46.641 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:46.641 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:10:46.641 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:10:46.641 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:10:46.641 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:10:46.641 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:10:46.641 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:10:46.641 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:10:46.641 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:46.641 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:10:46.641 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:10:46.641 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:46.642 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:46.642 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:10:46.642 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:10:46.642 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:46.642 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:10:46.642 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:10:46.642 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:10:46.642 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:10:46.642 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:46.642 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:10:46.642 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:10:46.642 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:46.642 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:46.642 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:10:46.642 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:46.642 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:46.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.642 --rc genhtml_branch_coverage=1 00:10:46.642 --rc genhtml_function_coverage=1 00:10:46.642 --rc genhtml_legend=1 00:10:46.642 --rc geninfo_all_blocks=1 00:10:46.642 --rc geninfo_unexecuted_blocks=1 00:10:46.642 00:10:46.642 ' 00:10:46.642 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:46.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.642 --rc genhtml_branch_coverage=1 00:10:46.642 --rc genhtml_function_coverage=1 00:10:46.642 --rc genhtml_legend=1 00:10:46.642 --rc geninfo_all_blocks=1 00:10:46.642 --rc geninfo_unexecuted_blocks=1 00:10:46.642 00:10:46.642 ' 00:10:46.642 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:46.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.642 --rc genhtml_branch_coverage=1 00:10:46.642 --rc genhtml_function_coverage=1 00:10:46.642 --rc genhtml_legend=1 00:10:46.642 --rc geninfo_all_blocks=1 00:10:46.642 --rc geninfo_unexecuted_blocks=1 00:10:46.642 00:10:46.642 ' 00:10:46.642 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:46.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.642 --rc genhtml_branch_coverage=1 00:10:46.642 --rc genhtml_function_coverage=1 00:10:46.642 --rc genhtml_legend=1 00:10:46.642 --rc geninfo_all_blocks=1 00:10:46.642 --rc geninfo_unexecuted_blocks=1 00:10:46.642 00:10:46.642 ' 00:10:46.642 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:46.642 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:10:46.642 10:51:06 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:46.642 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:46.642 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:46.642 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:46.642 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:46.642 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:46.642 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:46.642 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:46.642 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:46.642 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:46.642 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:46.642 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:46.642 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:46.642 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:46.642 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:46.642 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:46.642 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:46.642 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:10:46.642 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:46.642 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:46.642 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:46.642 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.642 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.642 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.642 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:46.642 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.642 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:10:46.642 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:46.642 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:46.642 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:46.642 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:46.642 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:46.642 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:46.642 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:46.642 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:46.642 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:46.642 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:46.642 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:46.642 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:46.642 10:51:06 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:46.642 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:46.642 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:46.642 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:46.642 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:46.642 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:46.642 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:46.642 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:46.642 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:46.642 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:10:46.642 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:46.642 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:46.642 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:46.642 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:46.642 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:46.642 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:46.642 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:46.933 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:10:46.933 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:10:46.933 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:10:46.933 10:51:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:55.069 10:51:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:55.069 10:51:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:10:55.069 10:51:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:55.069 10:51:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:55.069 10:51:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:55.069 10:51:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:55.069 10:51:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:55.069 10:51:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:10:55.069 10:51:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:55.069 10:51:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:10:55.069 10:51:13 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:10:55.069 10:51:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:10:55.069 10:51:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:10:55.069 10:51:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:10:55.069 10:51:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:10:55.069 10:51:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:55.069 10:51:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:55.069 10:51:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:55.069 10:51:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:55.069 10:51:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:55.069 10:51:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:55.069 10:51:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:55.069 10:51:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:55.069 10:51:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:55.069 10:51:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:55.069 10:51:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:55.069 10:51:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:55.069 10:51:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:55.069 10:51:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:55.069 10:51:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:55.069 10:51:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:55.069 10:51:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:55.069 10:51:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:55.069 10:51:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:55.069 10:51:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:55.069 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:55.069 10:51:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:55.069 10:51:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:55.069 10:51:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:55.069 10:51:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:55.069 10:51:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:55.069 10:51:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:55.069 10:51:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:55.069 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:55.069 10:51:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:55.069 10:51:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:55.069 10:51:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:55.069 10:51:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:55.069 10:51:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:55.069 10:51:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:55.069 10:51:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:55.069 10:51:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:55.069 10:51:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:55.069 10:51:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:55.069 10:51:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:55.069 10:51:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:55.069 10:51:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:55.069 10:51:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:55.069 10:51:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:55.069 10:51:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:55.069 Found net devices under 0000:31:00.0: cvl_0_0 00:10:55.069 10:51:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:55.069 10:51:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:55.069 10:51:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:55.069 10:51:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:55.069 10:51:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:55.069 10:51:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:55.069 10:51:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:55.069 10:51:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:55.069 10:51:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:55.069 Found net devices under 0000:31:00.1: cvl_0_1 00:10:55.069 10:51:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:55.069 10:51:13 
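[editor's note] The device scan traced above walks a "vendor:device" cache, matches the two Intel E810 ports (0x8086:0x159b, ice driver), and resolves each PCI function to its net device under /sys. A condensed sketch of the same lookup; the pci_bus_cache keying follows the trace, and the literal contents shown here are illustrative:

    intel=0x8086
    # Assumed cache layout, matching the trace: "vendor:device" -> PCI addresses.
    declare -A pci_bus_cache=( ["$intel:0x159b"]="0000:31:00.0 0000:31:00.1" )
    e810=(${pci_bus_cache["$intel:0x159b"]})       # unquoted on purpose: one element per address
    for pci in "${e810[@]}"; do
        echo "Found $pci ($intel - 0x159b)"
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        echo "Found net devices under $pci: ${pci_net_devs[@]##*/}"
    done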
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:55.069 10:51:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # is_hw=yes 00:10:55.069 10:51:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:55.069 10:51:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:10:55.069 10:51:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:10:55.069 10:51:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:55.069 10:51:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:55.069 10:51:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:55.069 10:51:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:55.069 10:51:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:55.069 10:51:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:55.069 10:51:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:55.069 10:51:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:55.069 10:51:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:55.069 10:51:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:55.069 10:51:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:55.069 10:51:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:55.069 10:51:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:55.069 10:51:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:55.069 10:51:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:55.069 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:55.069 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:55.069 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:55.069 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:55.069 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:55.069 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:55.069 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:55.069 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:55.069 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:55.069 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.580 ms 00:10:55.069 00:10:55.069 --- 10.0.0.2 ping statistics --- 00:10:55.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:55.069 rtt min/avg/max/mdev = 0.580/0.580/0.580/0.000 ms 00:10:55.069 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:55.069 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:55.069 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.299 ms 00:10:55.069 00:10:55.069 --- 10.0.0.1 ping statistics --- 00:10:55.070 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:55.070 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:10:55.070 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:55.070 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # return 0 00:10:55.070 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:55.070 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:55.070 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:10:55.070 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:10:55.070 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:55.070 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:10:55.070 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:10:55.070 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:10:55.070 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:10:55.070 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:55.070 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:55.070 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:10:55.070 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:10:55.070 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1710546 00:10:55.070 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:55.070 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:10:55.070 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1710546 00:10:55.070 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 1710546 ']' 00:10:55.070 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:55.070 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:55.070 10:51:14 
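[editor's note] The nvmf_tcp_init block above, condensed: the first E810 port becomes the target NIC and moves into a private network namespace, the second stays in the root namespace as the initiator, an iptables rule opens the NVMe/TCP port, and a ping in each direction proves the 10.0.0.0/24 link before any NVMe traffic flows. Same interface and namespace names as the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target NIC into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator
    modprobe nvme-tcp                                  # kernel initiator, used by later tests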
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:55.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:55.070 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:55.070 10:51:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:55.331 10:51:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:55.331 10:51:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0 00:10:55.331 10:51:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:10:55.331 10:51:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:55.331 10:51:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:55.331 10:51:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:55.331 10:51:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.331 10:51:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:55.331 10:51:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.331 10:51:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:10:55.331 10:51:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.331 10:51:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:55.331 10:51:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.331 10:51:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:10:55.331 10:51:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:55.331 10:51:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.331 10:51:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:55.331 10:51:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.331 10:51:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:10:55.331 10:51:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:55.331 10:51:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.331 10:51:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:55.331 10:51:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.331 10:51:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:55.331 10:51:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:55.331 10:51:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:55.331 10:51:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.331 10:51:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:10:55.331 10:51:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:07.556 Initializing NVMe Controllers 00:11:07.556 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:07.556 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:07.556 Initialization complete. Launching workers. 00:11:07.556 ======================================================== 00:11:07.556 Latency(us) 00:11:07.556 Device Information : IOPS MiB/s Average min max 00:11:07.556 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18300.28 71.49 3496.64 590.81 15340.32 00:11:07.556 ======================================================== 00:11:07.556 Total : 18300.28 71.49 3496.64 590.81 15340.32 00:11:07.556 00:11:07.556 10:51:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:11:07.556 10:51:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:11:07.556 10:51:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:07.556 10:51:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:11:07.556 10:51:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:07.556 10:51:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:11:07.556 10:51:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:07.556 10:51:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:07.556 rmmod nvme_tcp 00:11:07.556 rmmod nvme_fabrics 00:11:07.556 rmmod nvme_keyring 00:11:07.556 10:51:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:07.556 10:51:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:11:07.556 10:51:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:11:07.556 10:51:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@515 -- # '[' -n 1710546 ']' 00:11:07.556 10:51:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # killprocess 1710546 00:11:07.556 10:51:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 1710546 ']' 00:11:07.556 10:51:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 1710546 00:11:07.556 10:51:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname 00:11:07.556 10:51:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:07.556 10:51:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1710546 00:11:07.556 10:51:25 
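[editor's note] Taken together, the rpc_cmd calls and the perf run above assemble and exercise a complete NVMe-oF target: one TCP transport, one 64 MiB malloc bdev, one subsystem with that bdev as namespace 1, and a listener on 10.0.0.2:4420. rpc_cmd is autotest plumbing around SPDK's JSON-RPC client, so the same sequence can be reproduced by hand against a running target; this sketch assumes the usual scripts/rpc.py entry point, the default /var/tmp/spdk.sock socket, and a relative build path, with all flags copied from the trace:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512           # 64 MiB bdev, 512 B blocks -> Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Drive it exactly as the test does: queue depth 64, 4 KiB random I/O, 30% reads, 10 s.
    build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The result line above is internally consistent: 18300.28 IOPS at 4096 B per I/O is 18300.28 x 4096 / 2^20 = 71.49 MiB/s, matching the MiB/s column.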
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # process_name=nvmf 00:11:07.556 10:51:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']' 00:11:07.556 10:51:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1710546' 00:11:07.556 killing process with pid 1710546 00:11:07.556 10:51:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 1710546 00:11:07.556 10:51:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 1710546 00:11:07.556 nvmf threads initialize successfully 00:11:07.556 bdev subsystem init successfully 00:11:07.556 created a nvmf target service 00:11:07.556 create targets's poll groups done 00:11:07.556 all subsystems of target started 00:11:07.556 nvmf target is running 00:11:07.556 all subsystems of target stopped 00:11:07.556 destroy targets's poll groups done 00:11:07.556 destroyed the nvmf target service 00:11:07.556 bdev subsystem finish successfully 00:11:07.556 nvmf threads destroy successfully 00:11:07.556 10:51:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:07.556 10:51:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:11:07.556 10:51:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:11:07.556 10:51:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:11:07.556 10:51:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:11:07.556 10:51:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # iptables-save 00:11:07.556 10:51:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # iptables-restore 00:11:07.556 10:51:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:07.556 10:51:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:07.556 10:51:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:07.556 10:51:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:07.556 10:51:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:07.817 10:51:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:08.077 10:51:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:11:08.077 10:51:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:08.077 10:51:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:08.077 00:11:08.077 real 0m21.484s 00:11:08.077 user 0m46.712s 00:11:08.077 sys 0m6.877s 00:11:08.077 10:51:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:08.078 10:51:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:08.078 ************************************ 00:11:08.078 END TEST nvmf_example 00:11:08.078 ************************************ 00:11:08.078 10:51:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:08.078 10:51:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:08.078 10:51:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:08.078 10:51:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:08.078 ************************************ 00:11:08.078 START TEST nvmf_filesystem 00:11:08.078 ************************************ 00:11:08.078 10:51:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:08.078 * Looking for test storage... 00:11:08.078 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:08.078 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:08.078 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:11:08.078 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:08.345 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:08.345 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:08.345 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:08.345 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:08.345 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:08.345 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:08.345 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:08.345 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:08.345 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:08.345 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:08.345 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:08.345 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:08.346 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:08.346 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:08.346 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:08.346 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:08.346 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:08.346 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:08.346 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:08.346 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:08.346 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:08.346 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:08.346 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:08.346 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:08.346 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:08.346 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:08.346 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:08.346 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:08.346 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:08.346 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:08.346 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:08.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.346 --rc genhtml_branch_coverage=1 00:11:08.346 --rc genhtml_function_coverage=1 00:11:08.346 --rc genhtml_legend=1 00:11:08.346 --rc geninfo_all_blocks=1 00:11:08.346 --rc geninfo_unexecuted_blocks=1 00:11:08.346 00:11:08.346 ' 00:11:08.346 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:08.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.346 --rc genhtml_branch_coverage=1 00:11:08.346 --rc genhtml_function_coverage=1 00:11:08.346 --rc genhtml_legend=1 00:11:08.346 --rc geninfo_all_blocks=1 00:11:08.346 --rc geninfo_unexecuted_blocks=1 00:11:08.346 00:11:08.346 ' 00:11:08.346 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:08.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.346 --rc genhtml_branch_coverage=1 00:11:08.346 --rc genhtml_function_coverage=1 00:11:08.346 --rc genhtml_legend=1 00:11:08.346 --rc geninfo_all_blocks=1 00:11:08.346 --rc geninfo_unexecuted_blocks=1 00:11:08.346 00:11:08.346 ' 00:11:08.346 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:08.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.346 --rc genhtml_branch_coverage=1 00:11:08.346 --rc genhtml_function_coverage=1 00:11:08.346 --rc genhtml_legend=1 00:11:08.346 --rc geninfo_all_blocks=1 00:11:08.346 --rc geninfo_unexecuted_blocks=1 00:11:08.346 00:11:08.346 ' 00:11:08.346 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:11:08.346 10:51:28 
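[editor's note] The lt/cmp_versions probing above decides whether the installed lcov predates 2.0 (it is 1.15 here) and selects the LCOV_OPTS branch/function-coverage flags accordingly. The comparison is a field-by-field numeric walk over components split on ".-:". A self-contained sketch of the same idea; ver_lt is a hypothetical stand-in for scripts/common.sh's cmp_versions:

    ver_lt() {
        # Return 0 (true) when version $1 is strictly older than $2.
        local IFS=.-:
        local -a v1=($1) v2=($2)
        local i a b
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            a=${v1[i]:-0} b=${v2[i]:-0}
            ((a < b)) && return 0
            ((a > b)) && return 1
        done
        return 1    # equal is not less-than
    }
    ver_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "pre-2.0 lcov"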
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:08.346 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:08.346 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:08.346 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:08.346 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:08.346 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:11:08.346 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:11:08.346 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:11:08.346 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:08.346 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:11:08.346 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:08.346 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:08.346 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:08.346 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:08.346 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:08.346 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:08.346 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:08.346 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:08.346 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:08.346 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:08.346 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:08.346 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:08.346 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:08.346 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:08.346 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:11:08.346 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:08.346 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:08.346 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:11:08.346 10:51:28 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:11:08.346 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:11:08.346 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:08.346 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:11:08.346 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:11:08.346 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_AIO_FSDEV=y 00:11:08.346 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:08.346 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:08.346 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_UBLK=y 00:11:08.346 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_ISAL_CRYPTO=y 00:11:08.346 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OPENSSL_PATH= 00:11:08.346 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OCF=n 00:11:08.346 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_FUSE=n 00:11:08.346 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_VTUNE_DIR= 00:11:08.346 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER_LIB= 00:11:08.346 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER=n 00:11:08.346 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FSDEV=y 00:11:08.346 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:11:08.346 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_CRYPTO=n 00:11:08.346 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_PGO_USE=n 00:11:08.346 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_VHOST=y 00:11:08.346 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS=n 00:11:08.346 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:08.346 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DAOS_DIR= 00:11:08.346 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_UNIT_TESTS=n 00:11:08.346 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:08.346 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_VIRTIO=y 00:11:08.346 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_DPDK_UADK=n 00:11:08.346 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 00:11:08.346 10:51:28 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_RDMA=y 00:11:08.346 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:11:08.346 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_LZ4=n 00:11:08.346 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:08.346 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_URING_PATH= 00:11:08.346 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_XNVME=n 00:11:08.346 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_VFIO_USER=y 00:11:08.346 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_ARCH=native 00:11:08.346 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_HAVE_EVP_MAC=y 00:11:08.346 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_URING_ZNS=n 00:11:08.346 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_WERROR=y 00:11:08.346 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_HAVE_LIBBSD=n 00:11:08.346 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_UBSAN=y 00:11:08.346 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:11:08.346 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_IPSEC_MB_DIR= 00:11:08.347 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_GOLANG=n 00:11:08.347 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_ISAL=y 00:11:08.347 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_IDXD_KERNEL=y 00:11:08.347 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:08.347 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_RDMA_PROV=verbs 00:11:08.347 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_APPS=y 00:11:08.347 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_SHARED=y 00:11:08.347 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_HAVE_KEYUTILS=y 00:11:08.347 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_FC_PATH= 00:11:08.347 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:08.347 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_FC=n 00:11:08.347 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_AVAHI=n 00:11:08.347 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_FIO_PLUGIN=y 00:11:08.347 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_RAID5F=n 00:11:08.347 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@79 -- # CONFIG_EXAMPLES=y 00:11:08.347 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_TESTS=y 00:11:08.347 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_CRYPTO_MLX5=n 00:11:08.347 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_MAX_LCORES=128 00:11:08.347 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_IPSEC_MB=n 00:11:08.347 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_PGO_DIR= 00:11:08.347 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_DEBUG=y 00:11:08.347 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:08.347 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_CROSS_PREFIX= 00:11:08.347 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_COPY_FILE_RANGE=y 00:11:08.347 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_URING=n 00:11:08.347 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:08.347 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:08.347 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:08.347 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:08.347 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:08.347 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:08.347 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:11:08.347 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:08.347 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:08.347 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:08.347 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:08.347 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:08.347 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:08.347 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:08.347 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:11:08.347 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:08.347 #define SPDK_CONFIG_H 00:11:08.347 #define SPDK_CONFIG_AIO_FSDEV 1 00:11:08.347 #define SPDK_CONFIG_APPS 1 00:11:08.347 #define SPDK_CONFIG_ARCH native 00:11:08.347 #undef SPDK_CONFIG_ASAN 00:11:08.347 #undef SPDK_CONFIG_AVAHI 00:11:08.347 #undef SPDK_CONFIG_CET 00:11:08.347 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:11:08.347 #define SPDK_CONFIG_COVERAGE 1 00:11:08.347 #define SPDK_CONFIG_CROSS_PREFIX 00:11:08.347 #undef SPDK_CONFIG_CRYPTO 00:11:08.347 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:08.347 #undef SPDK_CONFIG_CUSTOMOCF 00:11:08.347 #undef SPDK_CONFIG_DAOS 00:11:08.347 #define SPDK_CONFIG_DAOS_DIR 00:11:08.347 #define SPDK_CONFIG_DEBUG 1 00:11:08.347 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:08.347 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:11:08.347 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:08.347 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:08.347 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:08.347 #undef SPDK_CONFIG_DPDK_UADK 00:11:08.347 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:08.347 #define SPDK_CONFIG_EXAMPLES 1 00:11:08.347 #undef SPDK_CONFIG_FC 00:11:08.347 #define SPDK_CONFIG_FC_PATH 00:11:08.347 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:08.347 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:08.347 #define SPDK_CONFIG_FSDEV 1 00:11:08.347 #undef SPDK_CONFIG_FUSE 00:11:08.347 #undef SPDK_CONFIG_FUZZER 00:11:08.347 #define SPDK_CONFIG_FUZZER_LIB 00:11:08.347 #undef SPDK_CONFIG_GOLANG 00:11:08.347 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:08.347 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:08.347 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:08.347 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:08.347 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:08.347 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:08.347 #undef SPDK_CONFIG_HAVE_LZ4 00:11:08.347 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:11:08.347 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:11:08.347 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:08.347 #define SPDK_CONFIG_IDXD 1 00:11:08.347 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:08.347 #undef SPDK_CONFIG_IPSEC_MB 00:11:08.347 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:08.347 #define SPDK_CONFIG_ISAL 1 00:11:08.347 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:08.347 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:08.347 #define SPDK_CONFIG_LIBDIR 00:11:08.347 #undef SPDK_CONFIG_LTO 00:11:08.347 #define SPDK_CONFIG_MAX_LCORES 128 00:11:08.347 #define SPDK_CONFIG_NVME_CUSE 1 00:11:08.347 #undef SPDK_CONFIG_OCF 00:11:08.347 #define SPDK_CONFIG_OCF_PATH 00:11:08.347 #define SPDK_CONFIG_OPENSSL_PATH 00:11:08.347 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:08.347 #define SPDK_CONFIG_PGO_DIR 00:11:08.347 #undef SPDK_CONFIG_PGO_USE 00:11:08.347 #define SPDK_CONFIG_PREFIX /usr/local 00:11:08.347 #undef SPDK_CONFIG_RAID5F 00:11:08.347 #undef SPDK_CONFIG_RBD 00:11:08.347 #define SPDK_CONFIG_RDMA 1 00:11:08.347 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:08.347 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:08.347 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:08.347 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:08.347 #define SPDK_CONFIG_SHARED 1 00:11:08.347 #undef SPDK_CONFIG_SMA 00:11:08.347 
#define SPDK_CONFIG_TESTS 1 00:11:08.347 #undef SPDK_CONFIG_TSAN 00:11:08.347 #define SPDK_CONFIG_UBLK 1 00:11:08.347 #define SPDK_CONFIG_UBSAN 1 00:11:08.347 #undef SPDK_CONFIG_UNIT_TESTS 00:11:08.347 #undef SPDK_CONFIG_URING 00:11:08.347 #define SPDK_CONFIG_URING_PATH 00:11:08.347 #undef SPDK_CONFIG_URING_ZNS 00:11:08.347 #undef SPDK_CONFIG_USDT 00:11:08.347 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:08.347 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:08.347 #define SPDK_CONFIG_VFIO_USER 1 00:11:08.347 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:08.347 #define SPDK_CONFIG_VHOST 1 00:11:08.347 #define SPDK_CONFIG_VIRTIO 1 00:11:08.347 #undef SPDK_CONFIG_VTUNE 00:11:08.347 #define SPDK_CONFIG_VTUNE_DIR 00:11:08.347 #define SPDK_CONFIG_WERROR 1 00:11:08.347 #define SPDK_CONFIG_WPDK_DIR 00:11:08.347 #undef SPDK_CONFIG_XNVME 00:11:08.347 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:08.347 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:08.347 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:08.347 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:08.347 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:08.347 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:08.347 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:08.347 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.347 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.347 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.347 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:08.348 10:51:28 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
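[editor's note] The pm/common section just above selects which resource monitors this run will launch: the CPU-load and vmstat collectors always run, while the temperature and BMC collectors (marked sudo-requiring in MONITOR_RESOURCES_SUDO) are added only on bare-metal Linux. A boiled-down sketch of that selection; the /.dockerenv test is the container check from the trace, and the trace's DMI != QEMU check is elided here:

    MONITOR_RESOURCES=(collect-cpu-load collect-vmstat)
    if [[ $(uname -s) == Linux && ! -e /.dockerenv ]]; then
        # Bare metal: hardware sensors are reachable, add the sudo-backed monitors.
        MONITOR_RESOURCES+=(collect-cpu-temp collect-bmc-pm)
    fi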
00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:08.348 10:51:28 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 
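[editor's note] The long run of ': 0' / 'export SPDK_TEST_*' records above (it continues through SPDK_TEST_SETUP below) is consistent with bash's default-then-export idiom: the no-op ':' evaluates a ${VAR:=default} expansion, so flags the Jenkins job already set (RUN_NIGHTLY=1, SPDK_TEST_NVMF=1, SPDK_TEST_NVMF_TRANSPORT=tcp, SPDK_TEST_NVMF_NICS=e810, ...) keep their values while everything else defaults to 0. A hedged one-flag reconstruction:

# Keep a caller-supplied value, else default to 0, then export so every child
# test script sees the flag. SPDK_TEST_VMD is just the example flag here.
: "${SPDK_TEST_VMD:=0}"
export SPDK_TEST_VMD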
00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : main 00:11:08.348 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:11:08.349 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:11:08.349 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:11:08.349 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:08.349 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:08.349 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:08.349 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:08.349 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:08.349 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:08.349 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:08.349 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:08.349 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:08.349 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:08.349 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:11:08.349 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:08.349 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:08.349 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:08.349 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:08.349 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:08.349 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:08.349 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:08.349 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:08.349 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:08.349 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:08.349 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:08.349 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:08.349 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:08.349 10:51:28 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:11:08.349 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:08.349 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:08.349 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:08.349 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:11:08.349 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:08.349 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:11:08.349 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:11:08.349 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:08.349 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:08.349 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:08.349 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:08.349 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:08.349 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:08.349 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:08.349 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:08.349 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:08.349 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:08.349 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:08.349 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:08.349 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:08.349 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # PYTHONDONTWRITEBYTECODE=1 00:11:08.349 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:08.349 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:08.349 10:51:28 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:08.349 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:08.349 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:08.349 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:11:08.349 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # cat 00:11:08.349 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:11:08.349 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:08.349 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:08.349 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:08.349 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:08.349 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:11:08.349 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:11:08.349 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:08.349 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:08.349 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:08.349 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:08.349 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:08.349 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:08.349 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:08.349 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:08.349 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:08.349 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # 
AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:08.349 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:08.349 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:08.349 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:11:08.349 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:11:08.349 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV= 00:11:08.350 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:11:08.350 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:11:08.350 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:08.350 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:11:08.350 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # lcov_opt= 00:11:08.350 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:11:08.350 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # export valgrind= 00:11:08.350 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # valgrind= 00:11:08.350 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # uname -s 00:11:08.350 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:11:08.350 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:11:08.350 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:11:08.350 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:11:08.350 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # MAKE=make 00:11:08.350 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@288 -- # MAKEFLAGS=-j144 00:11:08.350 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:11:08.350 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:11:08.350 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:11:08.350 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@307 -- # TEST_MODE= 00:11:08.350 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # for i in "$@" 00:11:08.350 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # case "$i" in 00:11:08.350 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@314 -- # TEST_TRANSPORT=tcp 00:11:08.350 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # [[ -z 1713342 ]] 00:11:08.350 10:51:28 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # kill -0 1713342 00:11:08.350 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:11:08.350 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:11:08.350 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:11:08.350 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@342 -- # local mount target_dir 00:11:08.350 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local -A mounts fss sizes avails uses 00:11:08.350 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:11:08.350 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:11:08.350 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:11:08.350 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.eBOzD6 00:11:08.350 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:08.350 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:11:08.350 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:11:08.350 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.eBOzD6/tests/target /tmp/spdk.eBOzD6 00:11:08.350 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:11:08.350 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:08.350 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # df -T 00:11:08.350 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:11:08.350 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:11:08.350 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:11:08.350 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:11:08.350 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=67108864 00:11:08.350 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:11:08.350 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:08.350 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:11:08.350 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:11:08.350 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- common/autotest_common.sh@373 -- # avails["$mount"]=156295168 00:11:08.350 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:11:08.350 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=5128134656 00:11:08.350 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:08.350 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:11:08.350 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:11:08.350 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=121897271296 00:11:08.350 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=129356525568 00:11:08.350 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=7459254272 00:11:08.350 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:08.350 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:11:08.350 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:08.350 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=64668229632 00:11:08.350 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=64678260736 00:11:08.350 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=10031104 00:11:08.350 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:08.350 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:11:08.350 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:08.350 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=25847889920 00:11:08.350 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=25871306752 00:11:08.350 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=23416832 00:11:08.350 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:08.350 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=efivarfs 00:11:08.350 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=efivarfs 00:11:08.350 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=175104 00:11:08.350 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=507904 00:11:08.350 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=328704 00:11:08.350 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:08.350 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:11:08.350 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:08.350 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=64677912576 00:11:08.350 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=64678264832 00:11:08.350 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=352256 00:11:08.350 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:08.350 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:11:08.350 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:08.350 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=12935639040 00:11:08.350 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=12935651328 00:11:08.350 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:11:08.350 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:08.350 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:11:08.350 * Looking for test storage... 
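[editor's note] A hedged, self-contained sketch of the candidate walk that set_test_storage performs next: try the test directory, fall back to the mktemp location, and accept the first mount with at least requested_size bytes free (2 GiB plus the 64 MiB of slack visible above). Paths and the df invocation are illustrative, not the script's exact code:

requested_size=$((2147483648 + 64 * 1024 * 1024))   # 2214592512, as in the trace
for target_dir in "$PWD/test/nvmf/target" "$(mktemp -dt spdk.XXXXXX)"; do
    # df --output=avail reports free 1K blocks for the filesystem holding target_dir
    avail_kb=$(df --output=avail "$target_dir" 2>/dev/null | tail -n1)
    if [[ $avail_kb =~ ^[0-9]+$ ]] && (( avail_kb * 1024 >= requested_size )); then
        printf '* Found test storage at %s\n' "$target_dir"
        break
    fi
done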
00:11:08.351 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # local target_space new_size 00:11:08.351 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:11:08.351 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:08.351 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:08.351 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # mount=/ 00:11:08.351 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # target_space=121897271296 00:11:08.351 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:11:08.351 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:11:08.351 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:11:08.351 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:11:08.351 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:11:08.351 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@392 -- # new_size=9673846784 00:11:08.351 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:08.351 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:08.351 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:08.351 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:08.351 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:08.351 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # return 0 00:11:08.351 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:11:08.351 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:11:08.351 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:08.351 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:08.351 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:11:08.351 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:11:08.351 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:08.351 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:08.351 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:08.351 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:08.351 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:08.351 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:11:08.351 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:08.351 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:08.351 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:08.351 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:11:08.351 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:08.612 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:08.612 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:08.612 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:08.612 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:08.612 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:08.612 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:08.612 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:08.612 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:08.612 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:08.612 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:08.612 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:08.612 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:08.612 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:08.612 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:08.612 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:08.612 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:08.612 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:08.612 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:08.612 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:08.612 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:08.612 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:08.612 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:08.612 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:08.612 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:08.612 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:08.612 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:08.612 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:08.613 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:08.613 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:08.613 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:08.613 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:08.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.613 --rc genhtml_branch_coverage=1 00:11:08.613 --rc genhtml_function_coverage=1 00:11:08.613 --rc genhtml_legend=1 00:11:08.613 --rc geninfo_all_blocks=1 00:11:08.613 --rc geninfo_unexecuted_blocks=1 00:11:08.613 00:11:08.613 ' 00:11:08.613 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:08.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.613 --rc genhtml_branch_coverage=1 00:11:08.613 --rc genhtml_function_coverage=1 00:11:08.613 --rc genhtml_legend=1 00:11:08.613 --rc geninfo_all_blocks=1 00:11:08.613 --rc geninfo_unexecuted_blocks=1 00:11:08.613 00:11:08.613 ' 00:11:08.613 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:08.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.613 --rc genhtml_branch_coverage=1 00:11:08.613 --rc genhtml_function_coverage=1 00:11:08.613 --rc genhtml_legend=1 00:11:08.613 --rc geninfo_all_blocks=1 00:11:08.613 --rc geninfo_unexecuted_blocks=1 00:11:08.613 00:11:08.613 ' 00:11:08.613 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:08.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.613 --rc genhtml_branch_coverage=1 00:11:08.613 --rc genhtml_function_coverage=1 00:11:08.613 --rc genhtml_legend=1 00:11:08.613 --rc geninfo_all_blocks=1 00:11:08.613 --rc geninfo_unexecuted_blocks=1 00:11:08.613 00:11:08.613 ' 00:11:08.613 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:08.613 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@7 -- # uname -s 00:11:08.613 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:08.613 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:08.613 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:08.613 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:08.613 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:08.613 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:08.613 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:08.613 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:08.613 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:08.613 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:08.613 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:08.613 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:08.613 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:08.613 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:08.613 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:08.613 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:08.613 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:08.613 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:08.613 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:08.613 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:08.613 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:08.613 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.613 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.613 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.613 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:08.613 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.613 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:11:08.613 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:08.613 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:08.613 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:08.613 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:08.613 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:08.613 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:08.613 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:08.613 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:08.613 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:08.613 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:08.613 10:51:28 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:11:08.613 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:08.613 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:08.613 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:11:08.613 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:08.613 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:08.613 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:08.613 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:08.613 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:08.613 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:08.613 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:08.613 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:08.613 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:08.613 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:11:08.613 10:51:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:16.758 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:16.758 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:11:16.758 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:16.758 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:16.758 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:16.758 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:16.758 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:16.758 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:11:16.758 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:16.758 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:11:16.758 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:11:16.758 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:11:16.758 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:11:16.758 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:11:16.758 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:11:16.758 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:16.758 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:16.758 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:16.758 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:16.758 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:16.758 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:16.758 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:16.758 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:16.759 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:16.759 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:16.759 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:16.759 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:16.759 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:16.759 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:16.759 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:16.759 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:16.759 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:16.759 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:16.759 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:16.759 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:16.759 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:16.759 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:16.759 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:16.759 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:16.759 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:16.759 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:16.759 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:16.759 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:16.759 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:16.759 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:16.759 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:16.759 10:51:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:16.759 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:16.759 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:16.759 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:16.759 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:16.759 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:16.759 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:16.759 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:16.759 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:16.759 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:16.759 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:16.759 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:16.759 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:16.759 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:16.759 Found net devices under 0000:31:00.0: cvl_0_0 00:11:16.759 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:16.759 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:16.759 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:16.759 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:16.759 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:16.759 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:16.759 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:16.759 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:16.759 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:16.759 Found net devices under 0000:31:00.1: cvl_0_1 00:11:16.759 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:16.759 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:16.759 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # is_hw=yes 00:11:16.759 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:16.759 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:11:16.759 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:11:16.759 10:51:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:16.759 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:16.759 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:16.759 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:16.759 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:16.759 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:16.759 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:16.759 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:16.759 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:16.759 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:16.759 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:16.759 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:16.759 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:16.759 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:16.759 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:16.759 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:16.759 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:16.759 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:16.759 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:16.759 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:16.759 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:16.759 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:16.759 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:16.759 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:16.759 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.665 ms 00:11:16.759 00:11:16.759 --- 10.0.0.2 ping statistics --- 00:11:16.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:16.759 rtt min/avg/max/mdev = 0.665/0.665/0.665/0.000 ms 00:11:16.759 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:16.759 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:16.759 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.249 ms 00:11:16.759 00:11:16.759 --- 10.0.0.1 ping statistics --- 00:11:16.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:16.759 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:11:16.759 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:16.759 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # return 0 00:11:16.759 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:16.759 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:16.759 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:11:16.759 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:11:16.759 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:16.759 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:11:16.759 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:11:16.759 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:16.759 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:16.759 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:16.759 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:16.759 ************************************ 00:11:16.759 START TEST nvmf_filesystem_no_in_capsule 00:11:16.759 ************************************ 00:11:16.759 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0 00:11:16.759 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:16.759 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:16.759 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:16.759 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:16.759 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:16.759 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:16.759 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # nvmfpid=1717355 00:11:16.759 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # waitforlisten 1717355 00:11:16.759 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 1717355 ']' 00:11:16.759 
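The ping exchanges above close out nvmf_tcp_init: the NIC's two ports (cvl_0_0 and cvl_0_1) are split between network namespaces so that traffic between them actually crosses the link, each side gets an address on 10.0.0.0/24, port 4420 is opened, and reachability is proven in both directions before any NVMe/TCP traffic is attempted. A minimal sketch of the same rig, where tgt0, ini0 and spdk_tgt_ns are hypothetical stand-ins for cvl_0_0, cvl_0_1 and cvl_0_0_ns_spdk:

  ip netns add spdk_tgt_ns                          # private namespace for the target side
  ip link set tgt0 netns spdk_tgt_ns                # target port leaves the root namespace
  ip addr add 10.0.0.1/24 dev ini0                  # initiator stays in the root namespace
  ip netns exec spdk_tgt_ns ip addr add 10.0.0.2/24 dev tgt0
  ip link set ini0 up
  ip netns exec spdk_tgt_ns ip link set tgt0 up
  ip netns exec spdk_tgt_ns ip link set lo up
  iptables -I INPUT 1 -i ini0 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
  ping -c 1 10.0.0.2                                # root namespace -> target namespace
  ip netns exec spdk_tgt_ns ping -c 1 10.0.0.1      # and the reverse path

Both pings must succeed before the harness proceeds; sub-millisecond RTTs like the 0.665 ms and 0.249 ms above are what a directly linked pair should show.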
10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:16.759 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:16.760 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:16.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:16.760 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:16.760 10:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:16.760 [2024-10-09 10:51:35.984666] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:11:16.760 [2024-10-09 10:51:35.984723] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:16.760 [2024-10-09 10:51:36.123409] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:11:16.760 [2024-10-09 10:51:36.154398] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:16.760 [2024-10-09 10:51:36.172394] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:16.760 [2024-10-09 10:51:36.172425] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:16.760 [2024-10-09 10:51:36.172436] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:16.760 [2024-10-09 10:51:36.172443] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:16.760 [2024-10-09 10:51:36.172449] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
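nvmfappstart launches the target binary inside that namespace and then blocks in waitforlisten until the RPC socket answers. A sketch of that wait, not the harness's exact loop, assuming an SPDK checkout and the stock socket path; spdk_get_version serves here only as a cheap liveness probe:

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  for _ in $(seq 1 100); do
      ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1 && break
      kill -0 "$nvmfpid" || exit 1       # give up if the target already died
      sleep 0.1
  done

The -m 0xF core mask also explains the four reactor threads that report in next.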
00:11:16.760 [2024-10-09 10:51:36.174110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:16.760 [2024-10-09 10:51:36.174224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:16.760 [2024-10-09 10:51:36.174376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:16.760 [2024-10-09 10:51:36.174377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:17.020 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:17.020 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:11:17.020 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:17.020 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:17.020 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:17.020 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:17.020 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:17.020 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:17.020 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.020 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:17.020 [2024-10-09 10:51:36.835660] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:17.020 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.020 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:17.020 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.020 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:17.020 Malloc1 00:11:17.020 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.020 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:17.020 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.020 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:17.021 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.021 10:51:36 
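rpc_cmd is a thin wrapper that feeds its arguments to scripts/rpc.py over /var/tmp/spdk.sock, so the three calls above have direct command-line equivalents ($rpc is an assumed alias for the script):

  rpc='./scripts/rpc.py -s /var/tmp/spdk.sock'
  $rpc nvmf_create_transport -t tcp -o -u 8192 -c 0    # TCP transport, 8 KiB IO units, no in-capsule data
  $rpc bdev_malloc_create 512 512 -b Malloc1           # 512 MiB RAM-backed bdev, 512 B blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME   # allow any host, set the serial

-c 0 disables in-capsule data entirely, which is exactly the knob the second half of the suite flips to 4096.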
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:17.021 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.021 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:17.021 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.021 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:17.021 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.021 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:17.021 [2024-10-09 10:51:36.963005] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:17.021 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.021 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:17.021 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:11:17.021 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:11:17.021 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:11:17.021 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:11:17.021 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:17.021 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.021 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:17.021 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.021 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:11:17.021 { 00:11:17.021 "name": "Malloc1", 00:11:17.021 "aliases": [ 00:11:17.021 "5b5b40ac-c1c9-4840-b9c5-b5bb4d20c424" 00:11:17.021 ], 00:11:17.021 "product_name": "Malloc disk", 00:11:17.021 "block_size": 512, 00:11:17.021 "num_blocks": 1048576, 00:11:17.021 "uuid": "5b5b40ac-c1c9-4840-b9c5-b5bb4d20c424", 00:11:17.021 "assigned_rate_limits": { 00:11:17.021 "rw_ios_per_sec": 0, 00:11:17.021 "rw_mbytes_per_sec": 0, 00:11:17.021 "r_mbytes_per_sec": 0, 00:11:17.021 "w_mbytes_per_sec": 0 00:11:17.021 }, 00:11:17.021 "claimed": true, 00:11:17.021 "claim_type": "exclusive_write", 00:11:17.021 "zoned": false, 00:11:17.021 "supported_io_types": { 00:11:17.021 "read": 
true, 00:11:17.021 "write": true, 00:11:17.021 "unmap": true, 00:11:17.021 "flush": true, 00:11:17.021 "reset": true, 00:11:17.021 "nvme_admin": false, 00:11:17.021 "nvme_io": false, 00:11:17.021 "nvme_io_md": false, 00:11:17.021 "write_zeroes": true, 00:11:17.021 "zcopy": true, 00:11:17.021 "get_zone_info": false, 00:11:17.021 "zone_management": false, 00:11:17.021 "zone_append": false, 00:11:17.021 "compare": false, 00:11:17.021 "compare_and_write": false, 00:11:17.021 "abort": true, 00:11:17.021 "seek_hole": false, 00:11:17.021 "seek_data": false, 00:11:17.021 "copy": true, 00:11:17.021 "nvme_iov_md": false 00:11:17.021 }, 00:11:17.021 "memory_domains": [ 00:11:17.021 { 00:11:17.021 "dma_device_id": "system", 00:11:17.021 "dma_device_type": 1 00:11:17.021 }, 00:11:17.021 { 00:11:17.021 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.021 "dma_device_type": 2 00:11:17.021 } 00:11:17.021 ], 00:11:17.021 "driver_specific": {} 00:11:17.021 } 00:11:17.021 ]' 00:11:17.021 10:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:11:17.280 10:51:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:11:17.280 10:51:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:11:17.280 10:51:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:11:17.280 10:51:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:11:17.280 10:51:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:11:17.280 10:51:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:17.280 10:51:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:18.660 10:51:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:18.660 10:51:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:11:18.660 10:51:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:18.661 10:51:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:18.661 10:51:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:11:21.202 10:51:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:21.202 10:51:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:21.202 10:51:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c 
SPDKISFASTANDAWESOME 00:11:21.202 10:51:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:21.202 10:51:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:21.202 10:51:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:11:21.202 10:51:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:21.202 10:51:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:21.202 10:51:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:21.202 10:51:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:21.202 10:51:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:21.202 10:51:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:21.202 10:51:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:21.202 10:51:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:21.202 10:51:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:21.202 10:51:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:21.203 10:51:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:21.203 10:51:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:21.462 10:51:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:22.399 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:22.399 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:22.399 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:22.399 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:22.400 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:22.659 ************************************ 00:11:22.659 START TEST filesystem_ext4 00:11:22.659 ************************************ 00:11:22.659 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 
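Before this first subtest touches the filesystem, the preceding sequence has already connected the initiator, located the namespace by its serial, and carved a single GPT partition out of it. Condensed into one block (treating sec_size_to_bytes as sector count times 512 is an assumption; the trace shows it printing 536870912 directly):

  nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
      --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 \
      -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')
  nvme_size=$(( $(cat /sys/block/$nvme_name/size) * 512 ))
  (( nvme_size == 536870912 ))                   # kernel view must match the malloc bdev
  mkdir -p /mnt/device
  parted -s /dev/$nvme_name mklabel gpt mkpart SPDK_TEST 0% 100%
  partprobe && sleep 1                           # let the kernel pick up nvme0n1p1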
00:11:22.659 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:22.659 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:22.659 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:22.659 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:11:22.659 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:22.659 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:11:22.659 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:11:22.659 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:11:22.659 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:11:22.659 10:51:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:22.659 mke2fs 1.47.0 (5-Feb-2023) 00:11:22.659 Discarding device blocks: 0/522240 done 00:11:22.659 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:22.659 Filesystem UUID: 2573560b-b02a-4e94-9287-155def987553 00:11:22.659 Superblock backups stored on blocks: 00:11:22.659 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:22.659 00:11:22.659 Allocating group tables: 0/64 done 00:11:22.659 Writing inode tables: 0/64 done 00:11:23.597 Creating journal (8192 blocks): done 00:11:23.597 Writing superblocks and filesystem accounting information: 0/64 done 00:11:23.597 00:11:23.597 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:11:23.597 10:51:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:30.173 10:51:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:30.173 10:51:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:30.173 10:51:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:30.173 10:51:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:30.173 10:51:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:30.173 10:51:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:30.173 
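Each filesystem_ext4/btrfs/xfs subtest is the same write-path smoke test: format the partition, mount it, create and delete a file with syncs in between so the data actually traverses NVMe/TCP to the malloc bdev, then unmount. In shell form, exactly as traced:

  mkfs.ext4 -F /dev/nvme0n1p1     # -F because ext4 spells "force" differently from btrfs/xfs
  mount /dev/nvme0n1p1 /mnt/device
  touch /mnt/device/aaa           # end-to-end write over the fabric
  sync
  rm /mnt/device/aaa
  sync
  umount /mnt/device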
10:51:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1717355 00:11:30.173 10:51:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:30.173 10:51:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:30.173 10:51:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:30.173 10:51:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:30.173 00:11:30.173 real 0m6.693s 00:11:30.173 user 0m0.033s 00:11:30.173 sys 0m0.076s 00:11:30.173 10:51:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:30.173 10:51:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:30.173 ************************************ 00:11:30.173 END TEST filesystem_ext4 00:11:30.173 ************************************ 00:11:30.173 10:51:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:30.173 10:51:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:30.173 10:51:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:30.173 10:51:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:30.173 ************************************ 00:11:30.173 START TEST filesystem_btrfs 00:11:30.173 ************************************ 00:11:30.173 10:51:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:30.173 10:51:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:30.173 10:51:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:30.173 10:51:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:30.173 10:51:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:11:30.173 10:51:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:30.173 10:51:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:11:30.173 10:51:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:11:30.173 10:51:49 
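The assertions between umount and the END banner are the actual pass criteria: the target process must still be alive, and both the block device and its partition must still be visible after the mount cycle:

  kill -0 "$nvmfpid"                          # signal 0: existence check, nothing is delivered
  lsblk -l -o NAME | grep -q -w nvme0n1       # namespace still exported
  lsblk -l -o NAME | grep -q -w nvme0n1p1     # partition survived the mount/umount cycle

The "real 0m6.693s" line is the bash time builtin for the whole ext4 run; the btrfs pass below completes in roughly a sixth of that.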
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:11:30.173 10:51:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:11:30.173 10:51:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:30.173 btrfs-progs v6.8.1 00:11:30.173 See https://btrfs.readthedocs.io for more information. 00:11:30.173 00:11:30.173 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:30.173 NOTE: several default settings have changed in version 5.15, please make sure 00:11:30.173 this does not affect your deployments: 00:11:30.173 - DUP for metadata (-m dup) 00:11:30.173 - enabled no-holes (-O no-holes) 00:11:30.173 - enabled free-space-tree (-R free-space-tree) 00:11:30.173 00:11:30.173 Label: (null) 00:11:30.173 UUID: 4ab719aa-6e5f-4f39-a53e-71bd6e2e7255 00:11:30.173 Node size: 16384 00:11:30.173 Sector size: 4096 (CPU page size: 4096) 00:11:30.173 Filesystem size: 510.00MiB 00:11:30.173 Block group profiles: 00:11:30.174 Data: single 8.00MiB 00:11:30.174 Metadata: DUP 32.00MiB 00:11:30.174 System: DUP 8.00MiB 00:11:30.174 SSD detected: yes 00:11:30.174 Zoned device: no 00:11:30.174 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:30.174 Checksum: crc32c 00:11:30.174 Number of devices: 1 00:11:30.174 Devices: 00:11:30.174 ID SIZE PATH 00:11:30.174 1 510.00MiB /dev/nvme0n1p1 00:11:30.174 00:11:30.174 10:51:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:11:30.174 10:51:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:30.433 10:51:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:30.433 10:51:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:30.433 10:51:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:30.433 10:51:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:30.434 10:51:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:30.434 10:51:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:30.434 10:51:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1717355 00:11:30.434 10:51:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:30.434 10:51:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:30.434 10:51:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:30.434 
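make_filesystem is the small dispatcher being stepped through at common.sh@926-937 above; the one per-filesystem quirk it has to absorb is the spelling of the force flag. A sketch with the retry counter (the "local i=0" seen in the trace) elided:

  make_filesystem() {
      local fstype=$1 dev_name=$2 force
      if [ "$fstype" = ext4 ]; then force=-F; else force=-f; fi
      mkfs."$fstype" $force "$dev_name"   # mkfs.btrfs prints the DUP/no-holes/free-space-tree defaults seen above
  }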
10:51:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:30.434 00:11:30.434 real 0m1.158s 00:11:30.434 user 0m0.032s 00:11:30.434 sys 0m0.116s 00:11:30.434 10:51:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:30.434 10:51:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:30.434 ************************************ 00:11:30.434 END TEST filesystem_btrfs 00:11:30.434 ************************************ 00:11:30.434 10:51:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:30.434 10:51:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:30.434 10:51:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:30.434 10:51:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:30.434 ************************************ 00:11:30.434 START TEST filesystem_xfs 00:11:30.434 ************************************ 00:11:30.434 10:51:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:11:30.434 10:51:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:30.434 10:51:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:30.434 10:51:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:30.434 10:51:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:11:30.434 10:51:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:30.434 10:51:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:11:30.434 10:51:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:11:30.434 10:51:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:11:30.434 10:51:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:11:30.434 10:51:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:30.693 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:30.693 = sectsz=512 attr=2, projid32bit=1 00:11:30.693 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:30.693 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:30.693 data 
= bsize=4096 blocks=130560, imaxpct=25 00:11:30.693 = sunit=0 swidth=0 blks 00:11:30.693 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:30.694 log =internal log bsize=4096 blocks=16384, version=2 00:11:30.694 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:30.694 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:31.636 Discarding blocks...Done. 00:11:31.636 10:51:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:11:31.636 10:51:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:33.549 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:33.549 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:33.549 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:33.549 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:33.549 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:33.549 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:33.549 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1717355 00:11:33.549 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:33.549 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:33.549 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:33.549 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:33.549 00:11:33.549 real 0m2.934s 00:11:33.549 user 0m0.032s 00:11:33.549 sys 0m0.073s 00:11:33.549 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:33.549 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:33.549 ************************************ 00:11:33.549 END TEST filesystem_xfs 00:11:33.549 ************************************ 00:11:33.549 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:33.809 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:33.809 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:33.809 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:33.809 10:51:53 
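Teardown mirrors setup in reverse: the test partition is deleted under an flock on the whole device so nothing races the partition-table rewrite, buffers are flushed, and the initiator detaches every controller for the subsystem NQN:

  flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1   # hold the device lock while editing the GPT
  sync
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1    # "disconnected 1 controller(s)" confirms it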
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:33.809 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:11:33.809 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:33.809 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:33.809 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:33.809 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:34.069 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:11:34.069 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:34.069 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.069 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:34.069 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.069 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:34.069 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1717355 00:11:34.069 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 1717355 ']' 00:11:34.069 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 1717355 00:11:34.069 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:11:34.069 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:34.069 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1717355 00:11:34.069 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:34.069 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:34.069 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1717355' 00:11:34.069 killing process with pid 1717355 00:11:34.069 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 1717355 00:11:34.069 10:51:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@974 -- # wait 1717355 00:11:34.329 10:51:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:34.329 00:11:34.329 real 0m18.175s 00:11:34.329 user 1m11.655s 00:11:34.329 sys 0m1.391s 00:11:34.329 10:51:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:34.329 10:51:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:34.329 ************************************ 00:11:34.329 END TEST nvmf_filesystem_no_in_capsule 00:11:34.329 ************************************ 00:11:34.329 10:51:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:34.329 10:51:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:34.329 10:51:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:34.329 10:51:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:34.329 ************************************ 00:11:34.329 START TEST nvmf_filesystem_in_capsule 00:11:34.329 ************************************ 00:11:34.329 10:51:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:11:34.329 10:51:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:34.329 10:51:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:34.329 10:51:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:34.330 10:51:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:34.330 10:51:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:34.330 10:51:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # nvmfpid=1720957 00:11:34.330 10:51:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # waitforlisten 1720957 00:11:34.330 10:51:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:34.330 10:51:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 1720957 ']' 00:11:34.330 10:51:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:34.330 10:51:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:34.330 10:51:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:34.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
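killprocess (common.sh@950-974 above) is deliberately paranoid before it sends anything: it checks that the pid exists, verifies via ps that the command name is an SPDK reactor rather than the sudo wrapper, and only then kills and reaps. A sketch of those guard rails, reconstructed from the trace rather than copied from the source:

  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1
      kill -0 "$pid" || return 1                                    # must still be running
      [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1   # never signal the sudo wrapper
      echo "killing process with pid $pid"
      kill "$pid" && wait "$pid"                                    # reap so the socket and port free up
  }

With the first target reaped, the suite repeats itself with a single difference: in_capsule=4096.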
00:11:34.330 10:51:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:34.330 10:51:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:34.330 [2024-10-09 10:51:54.254058] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:11:34.330 [2024-10-09 10:51:54.254114] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:34.590 [2024-10-09 10:51:54.395690] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:11:34.590 [2024-10-09 10:51:54.426405] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:34.590 [2024-10-09 10:51:54.444257] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:34.590 [2024-10-09 10:51:54.444289] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:34.590 [2024-10-09 10:51:54.444297] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:34.590 [2024-10-09 10:51:54.444303] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:34.590 [2024-10-09 10:51:54.444312] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:34.590 [2024-10-09 10:51:54.446056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:34.590 [2024-10-09 10:51:54.446170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:34.590 [2024-10-09 10:51:54.446325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:34.590 [2024-10-09 10:51:54.446325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:35.159 10:51:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:35.159 10:51:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:11:35.159 10:51:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:35.159 10:51:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:35.159 10:51:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:35.159 10:51:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:35.159 10:51:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:35.159 10:51:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:35.159 10:51:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.159 10:51:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:11:35.159 [2024-10-09 10:51:55.111826] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:35.159 10:51:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.159 10:51:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:35.159 10:51:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.159 10:51:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:35.419 Malloc1 00:11:35.419 10:51:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.419 10:51:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:35.419 10:51:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.419 10:51:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:35.419 10:51:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.419 10:51:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:35.419 10:51:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.419 10:51:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:35.419 10:51:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.419 10:51:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:35.419 10:51:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.419 10:51:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:35.419 [2024-10-09 10:51:55.233814] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:35.419 10:51:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.419 10:51:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:35.419 10:51:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:11:35.419 10:51:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:11:35.419 10:51:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:11:35.419 10:51:55 
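The second pass rebuilds an identical target with one changed knob: nvmf_create_transport now gets -c 4096, so a write of up to 4 KiB can travel inside the command capsule itself instead of requiring a separate host-to-controller data transfer. The wiring is otherwise the same chain as before ($rpc again standing for scripts/rpc.py):

  $rpc nvmf_create_transport -t tcp -o -u 8192 -c 4096
  $rpc bdev_malloc_create 512 512 -b Malloc1
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420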
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:11:35.419 10:51:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:35.419 10:51:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.419 10:51:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:35.419 10:51:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.419 10:51:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:11:35.419 { 00:11:35.419 "name": "Malloc1", 00:11:35.419 "aliases": [ 00:11:35.419 "52dd1de6-89d9-45f8-932b-738ff6fc80cb" 00:11:35.419 ], 00:11:35.419 "product_name": "Malloc disk", 00:11:35.419 "block_size": 512, 00:11:35.419 "num_blocks": 1048576, 00:11:35.420 "uuid": "52dd1de6-89d9-45f8-932b-738ff6fc80cb", 00:11:35.420 "assigned_rate_limits": { 00:11:35.420 "rw_ios_per_sec": 0, 00:11:35.420 "rw_mbytes_per_sec": 0, 00:11:35.420 "r_mbytes_per_sec": 0, 00:11:35.420 "w_mbytes_per_sec": 0 00:11:35.420 }, 00:11:35.420 "claimed": true, 00:11:35.420 "claim_type": "exclusive_write", 00:11:35.420 "zoned": false, 00:11:35.420 "supported_io_types": { 00:11:35.420 "read": true, 00:11:35.420 "write": true, 00:11:35.420 "unmap": true, 00:11:35.420 "flush": true, 00:11:35.420 "reset": true, 00:11:35.420 "nvme_admin": false, 00:11:35.420 "nvme_io": false, 00:11:35.420 "nvme_io_md": false, 00:11:35.420 "write_zeroes": true, 00:11:35.420 "zcopy": true, 00:11:35.420 "get_zone_info": false, 00:11:35.420 "zone_management": false, 00:11:35.420 "zone_append": false, 00:11:35.420 "compare": false, 00:11:35.420 "compare_and_write": false, 00:11:35.420 "abort": true, 00:11:35.420 "seek_hole": false, 00:11:35.420 "seek_data": false, 00:11:35.420 "copy": true, 00:11:35.420 "nvme_iov_md": false 00:11:35.420 }, 00:11:35.420 "memory_domains": [ 00:11:35.420 { 00:11:35.420 "dma_device_id": "system", 00:11:35.420 "dma_device_type": 1 00:11:35.420 }, 00:11:35.420 { 00:11:35.420 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.420 "dma_device_type": 2 00:11:35.420 } 00:11:35.420 ], 00:11:35.420 "driver_specific": {} 00:11:35.420 } 00:11:35.420 ]' 00:11:35.420 10:51:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:11:35.420 10:51:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:11:35.420 10:51:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:11:35.420 10:51:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:11:35.420 10:51:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:11:35.420 10:51:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:11:35.420 10:51:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:35.420 10:51:55 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:37.331 10:51:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:37.331 10:51:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:11:37.331 10:51:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:37.331 10:51:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:37.331 10:51:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:11:39.241 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:39.241 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:39.241 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:39.241 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:39.241 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:39.241 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:11:39.241 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:39.241 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:39.241 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:39.241 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:39.241 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:39.241 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:39.241 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:39.241 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:39.241 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:39.241 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:39.241 10:51:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:39.502 10:51:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:40.072 10:51:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:41.014 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:41.014 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:41.014 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:41.014 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:41.014 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:41.014 ************************************ 00:11:41.014 START TEST filesystem_in_capsule_ext4 00:11:41.014 ************************************ 00:11:41.014 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:41.014 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:41.014 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:41.014 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:41.014 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:11:41.014 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:41.014 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:11:41.014 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:11:41.014 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:11:41.014 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:11:41.014 10:52:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:41.014 mke2fs 1.47.0 (5-Feb-2023) 00:11:41.014 Discarding device blocks: 0/522240 done 00:11:41.014 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:41.014 Filesystem UUID: 405bef89-e3d1-46c8-b00b-88c66861f9f0 00:11:41.014 Superblock backups stored on blocks: 00:11:41.014 8193, 24577, 40961, 57345, 73729, 204801, 
221185, 401409 00:11:41.014 00:11:41.014 Allocating group tables: 0/64 done 00:11:41.014 Writing inode tables: 0/64 done 00:11:41.274 Creating journal (8192 blocks): done 00:11:42.654 Writing superblocks and filesystem accounting information: 0/64 done 00:11:42.654 00:11:42.654 10:52:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:11:42.654 10:52:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:49.233 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:49.233 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:49.233 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:49.233 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:49.233 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:49.233 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:49.233 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 1720957 00:11:49.233 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:49.233 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:49.233 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:49.233 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:49.233 00:11:49.233 real 0m7.920s 00:11:49.233 user 0m0.030s 00:11:49.233 sys 0m0.077s 00:11:49.233 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:49.233 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:49.233 ************************************ 00:11:49.233 END TEST filesystem_in_capsule_ext4 00:11:49.233 ************************************ 00:11:49.233 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:49.233 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:49.233 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:49.233 10:52:08 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:49.233 ************************************ 00:11:49.233 START TEST filesystem_in_capsule_btrfs 00:11:49.233 ************************************ 00:11:49.233 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:49.233 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:49.233 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:49.233 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:49.233 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:11:49.233 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:49.233 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:11:49.233 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force 00:11:49.233 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:11:49.233 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:11:49.233 10:52:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:49.233 btrfs-progs v6.8.1 00:11:49.233 See https://btrfs.readthedocs.io for more information. 00:11:49.233 00:11:49.233 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
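The xtrace above (common/autotest_common.sh@926-937) shows how make_filesystem picks its force flag: only ext4 spells it as uppercase -F, while btrfs and xfs take lowercase -f. A minimal sketch of that selection follows; reducing the helper to a single mkfs attempt is an assumption for brevity (the traced helper also tracks a retry counter i and returns 0 at @945 on success).

# Sketch of the make_filesystem flag selection traced above; the
# single-attempt simplification is an assumption, the flag logic is from the trace.
make_filesystem() {
    local fstype=$1 dev_name=$2
    local force
    if [ "$fstype" = ext4 ]; then
        force=-F            # mkfs.ext4 uses uppercase -F to force
    else
        force=-f            # mkfs.btrfs and mkfs.xfs use lowercase -f
    fi
    "mkfs.$fstype" "$force" "$dev_name"
}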
00:11:49.233 NOTE: several default settings have changed in version 5.15, please make sure 00:11:49.233 this does not affect your deployments: 00:11:49.233 - DUP for metadata (-m dup) 00:11:49.233 - enabled no-holes (-O no-holes) 00:11:49.233 - enabled free-space-tree (-R free-space-tree) 00:11:49.233 00:11:49.233 Label: (null) 00:11:49.233 UUID: 3e399e49-9ab7-4e7c-8ae6-a05acf6b058f 00:11:49.233 Node size: 16384 00:11:49.233 Sector size: 4096 (CPU page size: 4096) 00:11:49.233 Filesystem size: 510.00MiB 00:11:49.233 Block group profiles: 00:11:49.233 Data: single 8.00MiB 00:11:49.233 Metadata: DUP 32.00MiB 00:11:49.233 System: DUP 8.00MiB 00:11:49.233 SSD detected: yes 00:11:49.233 Zoned device: no 00:11:49.233 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:49.233 Checksum: crc32c 00:11:49.233 Number of devices: 1 00:11:49.233 Devices: 00:11:49.233 ID SIZE PATH 00:11:49.233 1 510.00MiB /dev/nvme0n1p1 00:11:49.233 00:11:49.233 10:52:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0 00:11:49.233 10:52:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:49.804 10:52:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:49.804 10:52:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:49.804 10:52:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:49.804 10:52:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:49.804 10:52:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:49.804 10:52:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:49.804 10:52:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1720957 00:11:49.804 10:52:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:49.804 10:52:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:49.804 10:52:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:49.804 10:52:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:49.804 00:11:49.804 real 0m0.820s 00:11:49.804 user 0m0.034s 00:11:49.804 sys 0m0.117s 00:11:49.804 10:52:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:49.804 10:52:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:11:49.804 ************************************ 00:11:49.804 END TEST filesystem_in_capsule_btrfs 00:11:49.804 ************************************ 00:11:49.804 10:52:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:49.804 10:52:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:49.804 10:52:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:49.804 10:52:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:50.064 ************************************ 00:11:50.064 START TEST filesystem_in_capsule_xfs 00:11:50.064 ************************************ 00:11:50.064 10:52:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:11:50.064 10:52:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:50.064 10:52:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:50.064 10:52:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:50.064 10:52:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:11:50.064 10:52:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:50.064 10:52:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0 00:11:50.064 10:52:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force 00:11:50.064 10:52:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:11:50.064 10:52:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f 00:11:50.064 10:52:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:50.064 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:50.064 = sectsz=512 attr=2, projid32bit=1 00:11:50.064 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:50.064 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:50.064 data = bsize=4096 blocks=130560, imaxpct=25 00:11:50.064 = sunit=0 swidth=0 blks 00:11:50.064 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:50.064 log =internal log bsize=4096 blocks=16384, version=2 00:11:50.064 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:50.064 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:51.004 Discarding blocks...Done. 
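Each filesystem variant then runs the identical smoke test traced at target/filesystem.sh@23-30: mount the exported partition, create and remove a file with syncs in between, and unmount. A condensed sketch of that sequence; omitting the umount retry counter i and the target-PID liveness check at @37 is an assumption made for brevity.

# Condensed per-filesystem smoke test mirroring target/filesystem.sh@23-30.
mount /dev/nvme0n1p1 /mnt/device
touch /mnt/device/aaa        # prove the filesystem accepts writes
sync                         # flush the write through NVMe/TCP to the malloc bdev
rm /mnt/device/aaa
sync
umount /mnt/device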
00:11:51.004 10:52:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0 00:11:51.004 10:52:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:53.547 10:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:53.547 10:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:53.547 10:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:53.547 10:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:53.547 10:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:53.547 10:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:53.547 10:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1720957 00:11:53.547 10:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:53.547 10:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:53.547 10:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:53.547 10:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:53.547 00:11:53.547 real 0m3.426s 00:11:53.547 user 0m0.031s 00:11:53.547 sys 0m0.076s 00:11:53.547 10:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:53.547 10:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:53.547 ************************************ 00:11:53.547 END TEST filesystem_in_capsule_xfs 00:11:53.547 ************************************ 00:11:53.547 10:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:53.547 10:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:53.547 10:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:53.547 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:53.547 10:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:53.547 10:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1219 -- # local i=0 00:11:53.547 10:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:53.547 10:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:53.547 10:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:53.547 10:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:53.547 10:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:11:53.547 10:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:53.547 10:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.547 10:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:53.547 10:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.547 10:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:53.547 10:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1720957 00:11:53.547 10:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 1720957 ']' 00:11:53.547 10:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 1720957 00:11:53.547 10:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname 00:11:53.547 10:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:53.547 10:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1720957 00:11:53.844 10:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:53.844 10:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:53.844 10:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1720957' 00:11:53.844 killing process with pid 1720957 00:11:53.844 10:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 1720957 00:11:53.844 10:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 1720957 00:11:53.844 10:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:53.844 00:11:53.844 real 0m19.580s 00:11:53.844 user 1m17.191s 00:11:53.844 sys 0m1.427s 00:11:53.844 10:52:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:53.844 10:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:53.844 ************************************ 00:11:53.844 END TEST nvmf_filesystem_in_capsule 00:11:53.844 ************************************ 00:11:53.844 10:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:53.844 10:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:53.844 10:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:11:53.844 10:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:53.844 10:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:11:53.844 10:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:53.844 10:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:53.844 rmmod nvme_tcp 00:11:53.844 rmmod nvme_fabrics 00:11:54.161 rmmod nvme_keyring 00:11:54.161 10:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:54.161 10:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:11:54.161 10:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:11:54.161 10:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:11:54.161 10:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:54.161 10:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:11:54.161 10:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:11:54.161 10:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:11:54.161 10:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # iptables-save 00:11:54.161 10:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:11:54.161 10:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # iptables-restore 00:11:54.161 10:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:54.161 10:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:54.161 10:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:54.161 10:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:54.161 10:52:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:56.098 10:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:56.098 00:11:56.098 real 0m48.028s 00:11:56.098 user 2m31.170s 00:11:56.098 sys 0m8.705s 00:11:56.098 10:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:56.098 10:52:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:56.098 
************************************ 00:11:56.098 END TEST nvmf_filesystem 00:11:56.098 ************************************ 00:11:56.098 10:52:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:56.098 10:52:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:56.098 10:52:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:56.098 10:52:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:56.098 ************************************ 00:11:56.098 START TEST nvmf_target_discovery 00:11:56.098 ************************************ 00:11:56.098 10:52:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:56.358 * Looking for test storage... 00:11:56.358 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:56.358 10:52:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:56.358 10:52:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:11:56.358 10:52:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:56.358 10:52:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:56.358 10:52:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:56.358 10:52:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:56.358 10:52:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:56.358 10:52:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:11:56.358 10:52:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:11:56.358 10:52:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:11:56.358 10:52:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:11:56.358 10:52:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:11:56.358 10:52:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:11:56.358 10:52:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:11:56.358 10:52:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:56.358 10:52:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:11:56.358 10:52:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:11:56.358 10:52:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:56.358 10:52:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:56.358 10:52:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:11:56.358 10:52:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:11:56.358 10:52:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:56.358 10:52:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:11:56.358 10:52:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:11:56.358 10:52:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:11:56.358 10:52:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:11:56.358 10:52:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:56.358 10:52:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:11:56.358 10:52:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:11:56.358 10:52:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:56.358 10:52:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:56.358 10:52:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:11:56.358 10:52:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:56.358 10:52:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:56.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.358 --rc genhtml_branch_coverage=1 00:11:56.358 --rc genhtml_function_coverage=1 00:11:56.358 --rc genhtml_legend=1 00:11:56.358 --rc geninfo_all_blocks=1 00:11:56.358 --rc geninfo_unexecuted_blocks=1 00:11:56.358 00:11:56.358 ' 00:11:56.358 10:52:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:56.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.358 --rc genhtml_branch_coverage=1 00:11:56.358 --rc genhtml_function_coverage=1 00:11:56.358 --rc genhtml_legend=1 00:11:56.358 --rc geninfo_all_blocks=1 00:11:56.358 --rc geninfo_unexecuted_blocks=1 00:11:56.358 00:11:56.358 ' 00:11:56.358 10:52:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:56.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.358 --rc genhtml_branch_coverage=1 00:11:56.358 --rc genhtml_function_coverage=1 00:11:56.358 --rc genhtml_legend=1 00:11:56.358 --rc geninfo_all_blocks=1 00:11:56.358 --rc geninfo_unexecuted_blocks=1 00:11:56.358 00:11:56.358 ' 00:11:56.358 10:52:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:56.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.358 --rc genhtml_branch_coverage=1 00:11:56.358 --rc genhtml_function_coverage=1 00:11:56.358 --rc genhtml_legend=1 00:11:56.358 --rc geninfo_all_blocks=1 00:11:56.358 --rc geninfo_unexecuted_blocks=1 00:11:56.358 00:11:56.358 ' 00:11:56.358 10:52:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:56.358 10:52:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:56.358 10:52:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:56.358 10:52:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:56.358 10:52:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:56.358 10:52:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:56.358 10:52:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:56.358 10:52:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:56.358 10:52:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:56.358 10:52:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:56.358 10:52:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:56.358 10:52:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:56.358 10:52:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:56.358 10:52:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:56.358 10:52:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:56.358 10:52:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:56.358 10:52:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:56.359 10:52:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:56.359 10:52:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:56.359 10:52:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:11:56.359 10:52:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:56.359 10:52:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:56.359 10:52:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:56.359 10:52:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.359 10:52:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.359 10:52:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.359 10:52:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:56.359 10:52:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.359 10:52:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:11:56.359 10:52:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:56.359 10:52:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:56.359 10:52:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:56.359 10:52:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:56.359 10:52:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:56.359 10:52:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:56.359 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:56.359 10:52:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:56.359 10:52:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:56.359 10:52:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:56.359 10:52:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:11:56.359 10:52:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:56.359 10:52:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:56.359 10:52:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:11:56.359 10:52:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:11:56.359 10:52:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:11:56.359 10:52:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:56.359 10:52:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:56.359 10:52:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:56.359 10:52:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:56.359 10:52:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:56.359 10:52:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:56.359 10:52:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:56.359 10:52:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:56.359 10:52:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:56.359 10:52:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:11:56.359 10:52:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:04.502 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:04.502 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:12:04.502 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:04.502 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:04.502 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:04.502 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:04.502 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:04.502 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:12:04.502 10:52:23 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:04.502 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:12:04.502 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:12:04.502 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:12:04.502 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:12:04.502 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:12:04.502 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:12:04.502 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:04.502 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:04.502 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:04.503 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:04.503 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:04.503 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:04.503 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:04.503 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:04.503 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:04.503 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:04.503 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:04.503 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:04.503 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:04.503 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:04.503 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:04.503 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:04.503 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:04.503 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:04.503 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:04.503 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:04.503 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:04.503 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:04.503 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:04.503 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:04.503 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:04.503 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:04.503 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:04.503 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:04.503 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:04.503 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:04.503 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:04.503 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:04.503 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:04.503 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:04.503 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:04.503 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:04.503 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:04.503 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:04.503 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:04.503 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:04.503 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:04.503 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:04.503 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:04.503 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:04.503 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:04.503 Found net devices under 0000:31:00.0: cvl_0_0 00:12:04.503 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:04.503 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:04.503 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:04.503 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:04.503 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 
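nvmftestinit locates usable NICs by walking the matched PCI addresses and reading each device's net/ directory in sysfs, as traced at nvmf/common.sh@408-427. A reduced sketch of that walk; it assumes pci_devs has already been filtered to the supported e810 device IDs and skips the interface-up check done at @416.

# Reduced sysfs walk from nvmf/common.sh@408-427 (assumes pci_devs is
# pre-filtered; the operstate "up" check is omitted here).
net_devs=()
for pci in "${pci_devs[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # e.g. .../net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")           # keep bare interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done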
00:12:04.503 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:04.503 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:04.503 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:04.503 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:04.503 Found net devices under 0000:31:00.1: cvl_0_1 00:12:04.503 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:04.503 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:12:04.503 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # is_hw=yes 00:12:04.503 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:12:04.503 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:12:04.503 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:12:04.503 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:04.503 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:04.503 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:04.503 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:04.503 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:04.503 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:04.503 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:04.503 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:04.503 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:04.503 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:04.503 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:04.503 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:04.503 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:04.503 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:04.503 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:04.503 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:04.503 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:04.503 10:52:23 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:04.503 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:04.503 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:04.503 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:04.503 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:04.503 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:04.503 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:04.503 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.635 ms 00:12:04.503 00:12:04.503 --- 10.0.0.2 ping statistics --- 00:12:04.503 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:04.503 rtt min/avg/max/mdev = 0.635/0.635/0.635/0.000 ms 00:12:04.503 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:04.503 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:04.503 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.263 ms 00:12:04.503 00:12:04.503 --- 10.0.0.1 ping statistics --- 00:12:04.503 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:04.503 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:12:04.503 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:04.503 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # return 0 00:12:04.503 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:04.503 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:04.503 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:12:04.503 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:12:04.503 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:04.503 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:12:04.503 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:12:04.503 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:12:04.503 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:04.503 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:04.503 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:04.503 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # nvmfpid=1729273 00:12:04.503 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # waitforlisten 1729273 00:12:04.503 10:52:23 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:04.503 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 1729273 ']' 00:12:04.503 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:04.503 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:04.504 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:04.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:04.504 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:04.504 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:04.504 [2024-10-09 10:52:23.750919] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:12:04.504 [2024-10-09 10:52:23.750970] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:04.504 [2024-10-09 10:52:23.887854] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:04.504 [2024-10-09 10:52:23.919288] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:04.504 [2024-10-09 10:52:23.937268] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:04.504 [2024-10-09 10:52:23.937298] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:04.504 [2024-10-09 10:52:23.937306] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:04.504 [2024-10-09 10:52:23.937313] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:04.504 [2024-10-09 10:52:23.937319] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
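Condensing the nvmf_tcp_init and nvmfappstart steps traced above: the harness isolates the target NIC (cvl_0_0) in its own network namespace, addresses both ends, opens TCP port 4420, verifies reachability in both directions, then launches nvmf_tgt inside the namespace. A minimal sketch with the paths and names from this run (the firewall comment is simplified, and the readiness wait is illustrative rather than the harness's exact waitforlisten helper):

    # Put the target-side NIC in a private netns and address both ends
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Allow NVMe/TCP traffic; the SPDK_NVMF comment tag lets teardown find the rule later
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF: allow test traffic'
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

    # Start the SPDK target inside the namespace and wait for its RPC socket
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done   # illustrative readiness wait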
00:12:04.504 [2024-10-09 10:52:23.939054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:04.504 [2024-10-09 10:52:23.939168] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:04.504 [2024-10-09 10:52:23.939306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:04.504 [2024-10-09 10:52:23.939306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:04.765 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:04.765 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:12:04.765 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:04.765 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:04.765 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:04.765 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:04.765 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:04.765 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.765 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:04.765 [2024-10-09 10:52:24.596776] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:04.765 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.765 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:12:04.765 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:04.765 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:12:04.765 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.765 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:04.765 Null1 00:12:04.765 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.765 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:04.765 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.765 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:04.765 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.765 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:04.765 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.765 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:04.765 10:52:24 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.765 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:04.765 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.765 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:04.765 [2024-10-09 10:52:24.654313] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:04.765 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.765 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:04.765 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:04.765 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.765 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:04.765 Null2 00:12:04.766 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.766 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:04.766 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.766 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:04.766 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.766 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:12:04.766 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.766 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:04.766 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.766 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:04.766 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.766 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:04.766 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.766 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:04.766 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:12:04.766 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.766 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:12:04.766 Null3 00:12:04.766 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.766 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:12:04.766 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.766 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:04.766 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.766 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:12:04.766 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.766 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:04.766 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.766 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:12:04.766 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.766 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:04.766 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.766 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:04.766 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:12:04.766 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.766 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:04.766 Null4 00:12:05.028 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.028 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:12:05.028 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.028 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.028 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.028 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:12:05.028 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.028 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.028 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.028 10:52:24 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:12:05.028 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.028 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.028 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.028 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:05.028 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.028 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.028 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.028 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:12:05.028 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.028 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.028 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.028 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 4420 00:12:05.028 00:12:05.028 Discovery Log Number of Records 6, Generation counter 6 00:12:05.028 =====Discovery Log Entry 0====== 00:12:05.028 trtype: tcp 00:12:05.028 adrfam: ipv4 00:12:05.028 subtype: current discovery subsystem 00:12:05.028 treq: not required 00:12:05.028 portid: 0 00:12:05.028 trsvcid: 4420 00:12:05.028 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:05.028 traddr: 10.0.0.2 00:12:05.028 eflags: explicit discovery connections, duplicate discovery information 00:12:05.028 sectype: none 00:12:05.028 =====Discovery Log Entry 1====== 00:12:05.028 trtype: tcp 00:12:05.028 adrfam: ipv4 00:12:05.028 subtype: nvme subsystem 00:12:05.028 treq: not required 00:12:05.028 portid: 0 00:12:05.028 trsvcid: 4420 00:12:05.028 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:05.028 traddr: 10.0.0.2 00:12:05.028 eflags: none 00:12:05.028 sectype: none 00:12:05.028 =====Discovery Log Entry 2====== 00:12:05.028 trtype: tcp 00:12:05.028 adrfam: ipv4 00:12:05.028 subtype: nvme subsystem 00:12:05.028 treq: not required 00:12:05.028 portid: 0 00:12:05.028 trsvcid: 4420 00:12:05.028 subnqn: nqn.2016-06.io.spdk:cnode2 00:12:05.028 traddr: 10.0.0.2 00:12:05.028 eflags: none 00:12:05.028 sectype: none 00:12:05.028 =====Discovery Log Entry 3====== 00:12:05.028 trtype: tcp 00:12:05.028 adrfam: ipv4 00:12:05.028 subtype: nvme subsystem 00:12:05.028 treq: not required 00:12:05.028 portid: 0 00:12:05.028 trsvcid: 4420 00:12:05.028 subnqn: nqn.2016-06.io.spdk:cnode3 00:12:05.028 traddr: 10.0.0.2 00:12:05.028 eflags: none 00:12:05.028 sectype: none 00:12:05.028 =====Discovery Log Entry 4====== 00:12:05.028 trtype: tcp 00:12:05.028 adrfam: ipv4 00:12:05.028 subtype: nvme subsystem 
00:12:05.028 treq: not required 00:12:05.028 portid: 0 00:12:05.028 trsvcid: 4420 00:12:05.028 subnqn: nqn.2016-06.io.spdk:cnode4 00:12:05.028 traddr: 10.0.0.2 00:12:05.028 eflags: none 00:12:05.028 sectype: none 00:12:05.028 =====Discovery Log Entry 5====== 00:12:05.028 trtype: tcp 00:12:05.028 adrfam: ipv4 00:12:05.028 subtype: discovery subsystem referral 00:12:05.028 treq: not required 00:12:05.028 portid: 0 00:12:05.028 trsvcid: 4430 00:12:05.028 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:05.028 traddr: 10.0.0.2 00:12:05.028 eflags: none 00:12:05.028 sectype: none 00:12:05.028 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:12:05.028 Perform nvmf subsystem discovery via RPC 00:12:05.028 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:12:05.028 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.028 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.028 [ 00:12:05.028 { 00:12:05.028 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:05.028 "subtype": "Discovery", 00:12:05.028 "listen_addresses": [ 00:12:05.028 { 00:12:05.028 "trtype": "TCP", 00:12:05.028 "adrfam": "IPv4", 00:12:05.028 "traddr": "10.0.0.2", 00:12:05.028 "trsvcid": "4420" 00:12:05.028 } 00:12:05.028 ], 00:12:05.028 "allow_any_host": true, 00:12:05.028 "hosts": [] 00:12:05.028 }, 00:12:05.028 { 00:12:05.028 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:05.028 "subtype": "NVMe", 00:12:05.028 "listen_addresses": [ 00:12:05.028 { 00:12:05.028 "trtype": "TCP", 00:12:05.028 "adrfam": "IPv4", 00:12:05.028 "traddr": "10.0.0.2", 00:12:05.028 "trsvcid": "4420" 00:12:05.028 } 00:12:05.028 ], 00:12:05.028 "allow_any_host": true, 00:12:05.028 "hosts": [], 00:12:05.028 "serial_number": "SPDK00000000000001", 00:12:05.028 "model_number": "SPDK bdev Controller", 00:12:05.028 "max_namespaces": 32, 00:12:05.028 "min_cntlid": 1, 00:12:05.028 "max_cntlid": 65519, 00:12:05.028 "namespaces": [ 00:12:05.028 { 00:12:05.028 "nsid": 1, 00:12:05.028 "bdev_name": "Null1", 00:12:05.028 "name": "Null1", 00:12:05.028 "nguid": "B68C392A92114AD9B2014D60FE059E3F", 00:12:05.028 "uuid": "b68c392a-9211-4ad9-b201-4d60fe059e3f" 00:12:05.028 } 00:12:05.028 ] 00:12:05.028 }, 00:12:05.028 { 00:12:05.028 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:05.028 "subtype": "NVMe", 00:12:05.028 "listen_addresses": [ 00:12:05.028 { 00:12:05.028 "trtype": "TCP", 00:12:05.028 "adrfam": "IPv4", 00:12:05.028 "traddr": "10.0.0.2", 00:12:05.028 "trsvcid": "4420" 00:12:05.028 } 00:12:05.028 ], 00:12:05.028 "allow_any_host": true, 00:12:05.028 "hosts": [], 00:12:05.028 "serial_number": "SPDK00000000000002", 00:12:05.028 "model_number": "SPDK bdev Controller", 00:12:05.028 "max_namespaces": 32, 00:12:05.028 "min_cntlid": 1, 00:12:05.028 "max_cntlid": 65519, 00:12:05.028 "namespaces": [ 00:12:05.028 { 00:12:05.028 "nsid": 1, 00:12:05.028 "bdev_name": "Null2", 00:12:05.028 "name": "Null2", 00:12:05.028 "nguid": "7E4B37E2F1864DBB804775F7DD07BCC1", 00:12:05.028 "uuid": "7e4b37e2-f186-4dbb-8047-75f7dd07bcc1" 00:12:05.028 } 00:12:05.028 ] 00:12:05.028 }, 00:12:05.028 { 00:12:05.028 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:12:05.028 "subtype": "NVMe", 00:12:05.028 "listen_addresses": [ 00:12:05.028 { 00:12:05.028 "trtype": "TCP", 00:12:05.028 "adrfam": "IPv4", 00:12:05.028 "traddr": "10.0.0.2", 
00:12:05.028 "trsvcid": "4420" 00:12:05.028 } 00:12:05.028 ], 00:12:05.028 "allow_any_host": true, 00:12:05.028 "hosts": [], 00:12:05.028 "serial_number": "SPDK00000000000003", 00:12:05.028 "model_number": "SPDK bdev Controller", 00:12:05.028 "max_namespaces": 32, 00:12:05.028 "min_cntlid": 1, 00:12:05.028 "max_cntlid": 65519, 00:12:05.028 "namespaces": [ 00:12:05.028 { 00:12:05.028 "nsid": 1, 00:12:05.028 "bdev_name": "Null3", 00:12:05.028 "name": "Null3", 00:12:05.028 "nguid": "5E25066F126A4A10A9EF57EFBA95C970", 00:12:05.028 "uuid": "5e25066f-126a-4a10-a9ef-57efba95c970" 00:12:05.028 } 00:12:05.028 ] 00:12:05.028 }, 00:12:05.028 { 00:12:05.028 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:12:05.028 "subtype": "NVMe", 00:12:05.029 "listen_addresses": [ 00:12:05.029 { 00:12:05.029 "trtype": "TCP", 00:12:05.029 "adrfam": "IPv4", 00:12:05.029 "traddr": "10.0.0.2", 00:12:05.029 "trsvcid": "4420" 00:12:05.029 } 00:12:05.029 ], 00:12:05.029 "allow_any_host": true, 00:12:05.029 "hosts": [], 00:12:05.029 "serial_number": "SPDK00000000000004", 00:12:05.029 "model_number": "SPDK bdev Controller", 00:12:05.029 "max_namespaces": 32, 00:12:05.029 "min_cntlid": 1, 00:12:05.029 "max_cntlid": 65519, 00:12:05.029 "namespaces": [ 00:12:05.029 { 00:12:05.029 "nsid": 1, 00:12:05.029 "bdev_name": "Null4", 00:12:05.029 "name": "Null4", 00:12:05.029 "nguid": "28AA8A88722549CFB82B03C227410005", 00:12:05.029 "uuid": "28aa8a88-7225-49cf-b82b-03c227410005" 00:12:05.029 } 00:12:05.029 ] 00:12:05.029 } 00:12:05.029 ] 00:12:05.029 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.029 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:12:05.029 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:05.029 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:05.029 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.029 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.029 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.029 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:12:05.029 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.029 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.029 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.029 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:05.029 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:05.029 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.029 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.290 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.290 10:52:25 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:12:05.290 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.290 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.290 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.290 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:05.290 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:05.290 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.290 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.290 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.290 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:12:05.290 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.290 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.290 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.290 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:05.290 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:05.290 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.290 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.290 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.290 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:12:05.290 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.290 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.290 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.290 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:12:05.291 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.291 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.291 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.291 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:12:05.291 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:12:05.291 10:52:25 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.291 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.291 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.291 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:12:05.291 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:12:05.291 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:12:05.291 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:12:05.291 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:05.291 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:12:05.291 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:05.291 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:12:05.291 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:05.291 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:05.291 rmmod nvme_tcp 00:12:05.291 rmmod nvme_fabrics 00:12:05.291 rmmod nvme_keyring 00:12:05.291 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:05.291 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:12:05.291 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:12:05.291 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@515 -- # '[' -n 1729273 ']' 00:12:05.291 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # killprocess 1729273 00:12:05.291 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 1729273 ']' 00:12:05.291 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 1729273 00:12:05.291 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname 00:12:05.291 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:05.291 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1729273 00:12:05.291 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:05.291 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:05.291 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1729273' 00:12:05.291 killing process with pid 1729273 00:12:05.291 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 1729273 00:12:05.291 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 1729273 00:12:05.551 10:52:25 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:05.551 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:12:05.551 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:12:05.551 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:12:05.551 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # iptables-save 00:12:05.551 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:12:05.551 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # iptables-restore 00:12:05.551 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:05.551 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:05.551 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:05.551 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:05.551 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:07.464 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:07.726 00:12:07.726 real 0m11.419s 00:12:07.726 user 0m8.193s 00:12:07.726 sys 0m5.923s 00:12:07.726 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:07.726 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:07.726 ************************************ 00:12:07.726 END TEST nvmf_target_discovery 00:12:07.726 ************************************ 00:12:07.726 10:52:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:07.726 10:52:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:07.726 10:52:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:07.726 10:52:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:07.726 ************************************ 00:12:07.726 START TEST nvmf_referrals 00:12:07.726 ************************************ 00:12:07.726 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:07.726 * Looking for test storage... 
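Before the referrals test proceeds, the nvmftestfini teardown traced just above is worth condensing: the target process is killed, the kernel NVMe/TCP modules are unloaded, the firewall rule is withdrawn by filtering on its SPDK_NVMF comment, and the namespace and leftover addresses are cleaned up. A sketch of the same sequence (names from this run; the netns delete is what remove_spdk_ns amounts to and is an assumption, since the helper's body is not expanded in the trace):

    # Tear the fixture down, mirroring nvmftestfini in the trace above
    kill "$nvmfpid"                                        # killprocess in the harness
    modprobe -v -r nvme-tcp                                # rmmod output shows fabrics/keyring going too
    modprobe -v -r nvme-fabrics
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the SPDK-tagged rules
    ip netns delete cvl_0_0_ns_spdk                        # assumed body of remove_spdk_ns
    ip -4 addr flush cvl_0_1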
00:12:07.726 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:07.726 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:07.726 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lcov --version 00:12:07.726 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:07.726 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:07.726 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:07.726 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:07.726 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:07.726 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:12:07.726 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:12:07.726 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:12:07.726 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:12:07.726 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:12:07.726 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:12:07.726 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:12:07.726 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:07.726 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:12:07.726 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:12:07.726 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:07.988 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:07.988 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:12:07.988 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:12:07.988 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:07.988 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:12:07.988 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:12:07.988 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:12:07.988 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:12:07.988 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:07.988 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:12:07.988 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:12:07.988 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:07.988 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:07.988 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:12:07.988 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:07.988 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:07.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:07.988 --rc genhtml_branch_coverage=1 00:12:07.988 --rc genhtml_function_coverage=1 00:12:07.988 --rc genhtml_legend=1 00:12:07.988 --rc geninfo_all_blocks=1 00:12:07.988 --rc geninfo_unexecuted_blocks=1 00:12:07.988 00:12:07.988 ' 00:12:07.988 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:07.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:07.988 --rc genhtml_branch_coverage=1 00:12:07.988 --rc genhtml_function_coverage=1 00:12:07.988 --rc genhtml_legend=1 00:12:07.988 --rc geninfo_all_blocks=1 00:12:07.988 --rc geninfo_unexecuted_blocks=1 00:12:07.988 00:12:07.988 ' 00:12:07.988 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:07.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:07.988 --rc genhtml_branch_coverage=1 00:12:07.988 --rc genhtml_function_coverage=1 00:12:07.988 --rc genhtml_legend=1 00:12:07.988 --rc geninfo_all_blocks=1 00:12:07.988 --rc geninfo_unexecuted_blocks=1 00:12:07.988 00:12:07.988 ' 00:12:07.988 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:07.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:07.988 --rc genhtml_branch_coverage=1 00:12:07.988 --rc genhtml_function_coverage=1 00:12:07.988 --rc genhtml_legend=1 00:12:07.988 --rc geninfo_all_blocks=1 00:12:07.988 --rc geninfo_unexecuted_blocks=1 00:12:07.988 00:12:07.988 ' 00:12:07.988 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:07.988 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:12:07.988 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:07.988 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:07.988 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:07.988 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:07.988 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:07.988 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:07.988 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:07.988 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:07.988 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:07.988 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:07.988 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:07.988 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:07.988 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:07.988 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:07.988 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:07.988 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:07.988 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:07.988 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:12:07.988 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:07.988 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:07.988 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:07.988 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.988 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.988 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.988 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:07.989 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.989 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:12:07.989 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:07.989 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:07.989 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:07.989 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:07.989 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:07.989 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:07.989 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:07.989 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:07.989 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:07.989 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:07.989 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:07.989 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
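One complaint visible a few entries above deserves a note: nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']', and test's -eq operator requires integers on both sides, so the empty expansion makes bash print "integer expression expected"; the condition simply evaluates false and the script carries on. A guarded spelling avoids the noise (the variable name below is illustrative — the trace does not show which variable expanded empty):

    # '[ "" -eq 1 ]' is an error: -eq requires integer operands on both sides.
    flag=""                           # illustrative stand-in for the empty variable
    if [ "${flag:-0}" -eq 1 ]; then   # default the empty value to 0 before comparing
        echo "feature enabled"
    fi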
00:12:07.989 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:07.989 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:07.989 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:07.989 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:07.989 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:07.989 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:12:07.989 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:07.989 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:07.989 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:07.989 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:07.989 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:07.989 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:07.989 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:07.989 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:12:07.989 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:12:07.989 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:12:07.989 10:52:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:16.124 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:16.124 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:12:16.124 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:16.124 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:16.124 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:16.124 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:16.124 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:16.124 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:12:16.124 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:16.124 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:12:16.124 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:12:16.124 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:12:16.124 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:12:16.124 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:12:16.124 10:52:35 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:12:16.124 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:16.125 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:16.125 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:16.125 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:16.125 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:16.125 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:16.125 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:16.125 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:16.125 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:16.125 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:16.125 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:16.125 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:16.125 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:16.125 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:16.125 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:16.125 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:16.125 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:16.125 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:16.125 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:16.125 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:16.125 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:16.125 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:16.125 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:16.125 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:16.125 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:16.125 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:16.125 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:16.125 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:16.125 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:16.125 
10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:16.125 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:16.125 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:16.125 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:16.125 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:16.125 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:16.125 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:16.125 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:16.125 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:16.125 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:16.125 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:16.125 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:16.125 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:16.125 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:16.125 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:16.125 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:16.125 Found net devices under 0000:31:00.0: cvl_0_0 00:12:16.125 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:16.125 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:16.125 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:16.125 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:16.125 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:16.125 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:16.125 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:16.125 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:16.125 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:16.125 Found net devices under 0000:31:00.1: cvl_0_1 00:12:16.125 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:16.125 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:12:16.125 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # is_hw=yes 00:12:16.125 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:12:16.125 10:52:35 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:12:16.125 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:12:16.125 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:16.125 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:16.125 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:16.125 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:16.125 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:16.125 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:16.125 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:16.125 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:16.125 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:16.125 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:16.125 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:16.125 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:16.125 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:16.125 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:16.125 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:16.125 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:16.125 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:16.125 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:16.125 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:16.125 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:16.125 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:16.125 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:16.125 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:16.125 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:16.125 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.625 ms 00:12:16.125 00:12:16.125 --- 10.0.0.2 ping statistics --- 00:12:16.125 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:16.125 rtt min/avg/max/mdev = 0.625/0.625/0.625/0.000 ms 00:12:16.125 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:16.125 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:16.125 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.327 ms 00:12:16.125 00:12:16.125 --- 10.0.0.1 ping statistics --- 00:12:16.125 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:16.125 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:12:16.125 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:16.125 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # return 0 00:12:16.125 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:16.125 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:16.125 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:12:16.125 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:12:16.125 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:16.125 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:12:16.125 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:12:16.125 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:16.125 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:16.125 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:16.125 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:16.125 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # nvmfpid=1734021 00:12:16.125 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # waitforlisten 1734021 00:12:16.125 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:16.125 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 1734021 ']' 00:12:16.125 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:16.125 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:16.125 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:16.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
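Condensed from the trace above, nvmftestinit turns the two ice ports into a point-to-point lab: cvl_0_0 moves into a fresh network namespace as the target side, cvl_0_1 stays in the root namespace as the initiator side, a single iptables rule opens the NVMe/TCP port, and both directions are ping-verified before nvmf_tgt is started inside the namespace. A sketch of the same steps, binary path abbreviated:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                    # 0.625 ms, across the physical link
ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF   # the target app itself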
00:12:16.125 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:16.125 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:16.125 [2024-10-09 10:52:35.458947] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:12:16.125 [2024-10-09 10:52:35.459013] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:16.125 [2024-10-09 10:52:35.598538] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:16.125 [2024-10-09 10:52:35.630273] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:16.125 [2024-10-09 10:52:35.648964] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:16.125 [2024-10-09 10:52:35.648995] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:16.125 [2024-10-09 10:52:35.649002] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:16.125 [2024-10-09 10:52:35.649009] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:16.125 [2024-10-09 10:52:35.649015] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:16.125 [2024-10-09 10:52:35.650520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:16.125 [2024-10-09 10:52:35.650718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:16.125 [2024-10-09 10:52:35.650719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:16.125 [2024-10-09 10:52:35.650581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:16.385 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:16.385 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:12:16.385 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:16.385 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:16.385 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:16.385 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:16.385 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:16.385 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.385 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:16.385 [2024-10-09 10:52:36.304229] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:16.385 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.385 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:12:16.385 10:52:36 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.385 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:16.385 [2024-10-09 10:52:36.320408] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:12:16.385 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.385 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:12:16.385 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.385 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:16.385 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.385 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:12:16.385 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.385 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:16.385 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.385 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:12:16.385 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.385 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:16.385 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.385 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:16.385 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:16.385 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.385 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:16.385 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.645 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:16.645 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:16.645 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:16.645 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:16.645 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:16.645 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.645 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:16.645 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:16.645 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:12:16.645 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:16.645 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:16.645 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:16.645 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:16.645 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:16.645 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:16.645 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:16.645 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:16.906 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:16.906 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:16.906 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:16.906 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.906 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:16.906 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.906 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:16.906 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.906 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:16.906 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.906 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:16.906 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.906 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:16.906 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.906 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:16.906 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:16.906 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.906 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:16.906 10:52:36 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.906 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:16.906 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:16.906 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:16.906 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:16.906 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:16.906 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:16.906 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:17.166 10:52:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:17.166 10:52:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:17.166 10:52:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:17.166 10:52:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.166 10:52:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:17.166 10:52:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.166 10:52:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:17.166 10:52:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.166 10:52:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:17.166 10:52:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.166 10:52:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:17.166 10:52:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:17.166 10:52:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:17.166 10:52:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:17.166 10:52:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.166 10:52:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:17.166 10:52:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:17.166 10:52:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.166 10:52:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:17.166 10:52:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 
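Every assertion in this suite runs through the helper traced repeatedly above as get_referral_ips: "rpc" asks the target for its own referral list over JSON-RPC, while "nvme" connects to the discovery service at 10.0.0.2:8009 and reads the log page back, and the two views must agree. Roughly (rpc_cmd is the suite's wrapper, presumably around scripts/rpc.py; hostnqn/hostid elided):

# target's view of its referrals
rpc_cmd nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort

# host's view, ignoring the record describing the discovery service itself
nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
    | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort

The "127.0.0.2 127.0.0.2" just printed is deliberate: the two referrals re-added above share an address and port but were created with different -n arguments, one pointing at another discovery subsystem and one at the NVM subsystem nqn.2016-06.io.spdk:cnode1, so traddr alone can no longer distinguish them.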
00:12:17.166 10:52:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:12:17.166 10:52:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:17.166 10:52:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:17.166 10:52:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:17.166 10:52:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:17.166 10:52:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:17.426 10:52:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:17.426 10:52:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:17.426 10:52:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:17.426 10:52:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:17.426 10:52:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:17.426 10:52:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:17.426 10:52:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:17.686 10:52:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:17.686 10:52:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:17.686 10:52:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:17.686 10:52:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:17.686 10:52:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:17.686 10:52:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:17.686 10:52:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:17.686 10:52:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:17.686 10:52:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.686 10:52:37 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:17.946 10:52:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.946 10:52:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:17.946 10:52:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:17.946 10:52:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:17.946 10:52:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:17.946 10:52:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.946 10:52:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:17.946 10:52:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:17.946 10:52:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.946 10:52:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:17.946 10:52:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:17.946 10:52:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:17.946 10:52:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:17.946 10:52:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:17.946 10:52:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:17.946 10:52:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:17.946 10:52:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:17.946 10:52:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:17.946 10:52:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:17.946 10:52:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:17.946 10:52:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:17.946 10:52:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:17.946 10:52:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:17.946 10:52:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:18.205 10:52:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:18.205 10:52:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 
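get_discovery_entries, driving the [[ ... == ... ]] checks here, is the finer-grained probe: instead of collecting traddrs it pulls whole records of one subtype out of the discovery log, so the test can confirm that removing the nqn.2016-06.io.spdk:cnode1 referral deleted exactly the "nvme subsystem" record while the "discovery subsystem referral" record survives. Sketched from the traced @33/@34 lines (host identifiers again elided):

nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
    | jq '.records[] | select(.subtype == "nvme subsystem")'                     # now empty
nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
    | jq '.records[] | select(.subtype == "discovery subsystem referral")' \
    | jq -r .subnqn                                  # nqn.2014-08.org.nvmexpress.discovery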
00:12:18.205 10:52:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:18.205 10:52:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:18.205 10:52:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:18.205 10:52:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:18.466 10:52:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:18.466 10:52:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:18.466 10:52:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.466 10:52:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:18.466 10:52:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.466 10:52:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:18.466 10:52:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:18.466 10:52:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.466 10:52:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:18.466 10:52:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.466 10:52:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:18.466 10:52:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:18.466 10:52:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:18.466 10:52:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:18.466 10:52:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:18.466 10:52:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:18.466 10:52:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:18.726 10:52:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:18.726 10:52:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:18.726 10:52:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:18.726 10:52:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:18.726 10:52:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@514 -- # 
nvmfcleanup 00:12:18.726 10:52:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:12:18.726 10:52:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:18.726 10:52:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:12:18.726 10:52:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:18.726 10:52:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:18.726 rmmod nvme_tcp 00:12:18.726 rmmod nvme_fabrics 00:12:18.726 rmmod nvme_keyring 00:12:18.726 10:52:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:18.726 10:52:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:12:18.726 10:52:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:12:18.726 10:52:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@515 -- # '[' -n 1734021 ']' 00:12:18.726 10:52:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # killprocess 1734021 00:12:18.726 10:52:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 1734021 ']' 00:12:18.726 10:52:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 1734021 00:12:18.726 10:52:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:12:18.726 10:52:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:18.726 10:52:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1734021 00:12:18.726 10:52:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:18.726 10:52:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:18.726 10:52:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1734021' 00:12:18.726 killing process with pid 1734021 00:12:18.726 10:52:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@969 -- # kill 1734021 00:12:18.726 10:52:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 1734021 00:12:18.985 10:52:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:18.985 10:52:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:12:18.986 10:52:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:12:18.986 10:52:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:12:18.986 10:52:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # iptables-save 00:12:18.986 10:52:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:12:18.986 10:52:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # iptables-restore 00:12:18.986 10:52:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:18.986 10:52:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:18.986 10:52:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@654 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:12:18.986 10:52:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:18.986 10:52:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:21.530 10:52:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:21.530 00:12:21.530 real 0m13.359s 00:12:21.530 user 0m15.789s 00:12:21.530 sys 0m6.580s 00:12:21.530 10:52:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:21.530 10:52:40 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:21.530 ************************************ 00:12:21.530 END TEST nvmf_referrals 00:12:21.530 ************************************ 00:12:21.530 10:52:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:21.530 10:52:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:21.530 10:52:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:21.530 10:52:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:21.530 ************************************ 00:12:21.530 START TEST nvmf_connect_disconnect 00:12:21.530 ************************************ 00:12:21.530 10:52:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:21.530 * Looking for test storage... 
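nvmftestfini, traced just above, unwinds the whole setup: the nvme-tcp, nvme-fabrics and nvme-keyring modules are removed (the rmmod lines), the in-namespace target (pid 1734021, running as reactor_0) is killed, and the firewall rule is dropped wholesale rather than by rule number — the iptr helper filters the suite's tagged entries back out of the ruleset:

iptables-save | grep -v SPDK_NVMF | iptables-restore

This works because every rule the suite inserts carries an -m comment --comment 'SPDK_NVMF:...' tag, as seen when the rule was added. With the referrals suite done (13.4 s wall time), run_test immediately launches nvmf_connect_disconnect against the same tcp transport.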
00:12:21.530 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:21.530 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:21.530 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:12:21.530 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:21.530 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:21.530 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:21.530 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:21.530 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:21.530 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:12:21.530 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:12:21.530 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:12:21.530 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:12:21.530 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:12:21.530 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:12:21.530 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:12:21.530 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:21.530 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:12:21.530 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:12:21.530 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:21.530 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:21.530 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:12:21.530 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:12:21.530 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:21.530 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:12:21.530 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:12:21.530 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:12:21.530 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:12:21.530 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:21.530 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:12:21.530 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:12:21.530 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:21.530 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:21.530 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:12:21.530 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:21.530 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:21.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:21.530 --rc genhtml_branch_coverage=1 00:12:21.530 --rc genhtml_function_coverage=1 00:12:21.530 --rc genhtml_legend=1 00:12:21.530 --rc geninfo_all_blocks=1 00:12:21.530 --rc geninfo_unexecuted_blocks=1 00:12:21.530 00:12:21.530 ' 00:12:21.530 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:21.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:21.530 --rc genhtml_branch_coverage=1 00:12:21.530 --rc genhtml_function_coverage=1 00:12:21.530 --rc genhtml_legend=1 00:12:21.530 --rc geninfo_all_blocks=1 00:12:21.530 --rc geninfo_unexecuted_blocks=1 00:12:21.530 00:12:21.530 ' 00:12:21.530 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:21.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:21.530 --rc genhtml_branch_coverage=1 00:12:21.530 --rc genhtml_function_coverage=1 00:12:21.530 --rc genhtml_legend=1 00:12:21.530 --rc geninfo_all_blocks=1 00:12:21.530 --rc geninfo_unexecuted_blocks=1 00:12:21.530 00:12:21.530 ' 00:12:21.530 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:21.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:21.530 --rc genhtml_branch_coverage=1 00:12:21.530 --rc genhtml_function_coverage=1 00:12:21.530 --rc genhtml_legend=1 00:12:21.530 --rc geninfo_all_blocks=1 00:12:21.530 --rc geninfo_unexecuted_blocks=1 00:12:21.530 00:12:21.530 ' 00:12:21.530 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:21.530 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:21.530 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:21.530 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:21.530 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:21.530 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:21.530 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:21.530 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:21.530 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:21.530 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:21.530 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:21.530 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:21.530 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:21.530 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:21.530 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:21.530 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:21.530 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:21.530 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:21.530 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:21.530 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:12:21.530 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:21.530 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:21.530 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:21.530 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.530 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.530 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.530 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:21.530 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.530 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:12:21.530 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:21.530 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:21.530 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:21.530 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:21.530 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:21.530 10:52:41 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:21.530 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:21.530 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:21.531 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:21.531 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:21.531 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:21.531 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:21.531 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:21.531 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:12:21.531 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:21.531 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:21.531 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:21.531 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:21.531 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:21.531 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:21.531 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:21.531 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:12:21.531 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:12:21.531 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:12:21.531 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:29.678 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:29.678 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:12:29.678 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:29.678 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:29.678 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:29.678 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:29.678 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:29.678 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:12:29.678 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:29.678 
10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:12:29.678 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:12:29.678 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:12:29.678 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:12:29.678 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:12:29.678 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:12:29.678 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:29.678 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:29.678 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:29.678 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:29.678 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:29.678 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:29.678 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:29.678 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:29.678 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:29.678 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:29.678 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:29.678 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:29.678 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:29.678 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:29.678 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:29.678 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:29.678 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:29.678 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:29.678 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:29.678 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:29.678 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:29.678 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:29.678 
10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:29.678 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:29.678 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:29.678 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:29.678 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:29.678 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:29.678 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:29.678 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:29.678 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:29.678 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:29.678 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:29.678 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:29.678 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:29.678 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:29.678 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:29.678 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:29.678 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:29.678 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:29.678 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:29.678 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:29.678 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:29.678 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:29.678 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:29.678 Found net devices under 0000:31:00.0: cvl_0_0 00:12:29.678 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:29.678 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:29.678 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:29.678 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:29.678 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 
00:12:29.678 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:29.678 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:29.678 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:29.678 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:29.678 Found net devices under 0000:31:00.1: cvl_0_1 00:12:29.678 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:29.678 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:12:29.678 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # is_hw=yes 00:12:29.678 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:12:29.678 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:12:29.678 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:12:29.678 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:29.678 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:29.678 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:29.678 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:29.678 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:29.679 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:29.679 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:29.679 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:29.679 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:29.679 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:29.679 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:29.679 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:29.679 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:29.679 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:29.679 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:29.679 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:29.679 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:12:29.679 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:29.679 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:29.679 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:29.679 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:29.679 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:29.679 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:29.679 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:29.679 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.578 ms 00:12:29.679 00:12:29.679 --- 10.0.0.2 ping statistics --- 00:12:29.679 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:29.679 rtt min/avg/max/mdev = 0.578/0.578/0.578/0.000 ms 00:12:29.679 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:29.679 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:29.679 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:12:29.679 00:12:29.679 --- 10.0.0.1 ping statistics --- 00:12:29.679 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:29.679 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:12:29.679 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:29.679 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # return 0 00:12:29.679 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:29.679 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:29.679 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:12:29.679 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:12:29.679 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:29.679 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:12:29.679 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:12:29.679 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:29.679 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:29.679 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:29.679 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:29.679 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # nvmfpid=1739176 00:12:29.679 10:52:48 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # waitforlisten 1739176 00:12:29.679 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:29.679 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 1739176 ']' 00:12:29.679 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:29.679 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:29.679 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:29.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:29.679 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:29.679 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:29.679 [2024-10-09 10:52:48.922335] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:12:29.679 [2024-10-09 10:52:48.922385] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:29.679 [2024-10-09 10:52:49.059665] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:29.679 [2024-10-09 10:52:49.092017] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:29.679 [2024-10-09 10:52:49.111756] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:29.679 [2024-10-09 10:52:49.111788] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:29.679 [2024-10-09 10:52:49.111800] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:29.679 [2024-10-09 10:52:49.111806] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:29.679 [2024-10-09 10:52:49.111812] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
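
Note on the "[: : integer expression expected" message earlier in this trace: it comes from nvmf/common.sh line 33, where the trace shows '[' '' -eq 1 ']'. An unset option variable expands to the empty string, and test(1) cannot compare an empty operand numerically. It is harmless here (the comparison fails, so the branch is simply not taken), but the usual hardening is to default the expansion before comparing. A minimal sketch; the real variable name at that line is not visible in this trace, so NVMF_SOME_FLAG below is a placeholder:

    # Default an unset/empty flag to 0 so test(1) always sees an integer.
    if [ "${NVMF_SOME_FLAG:-0}" -eq 1 ]; then
        echo "flag enabled"
    fi
    # Equivalent with bash arithmetic, which also needs the default:
    if (( ${NVMF_SOME_FLAG:-0} == 1 )); then
        echo "flag enabled"
    fi

The repeated /opt/golangci:/opt/protoc:/opt/go runs in the PATH dumps above are a separate, purely cosmetic effect: paths/export.sh prepends the same directories every time it is sourced, so each nested source grows the traced PATH value without changing command lookup.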
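
For readability, the nvmf_tcp_init sequence traced above condenses to the following. The two E810 ports (0000:31:00.0 and 0000:31:00.1, both 0x8086:0x159b under the ice driver, renamed cvl_0_0 and cvl_0_1) are split across network namespaces so one host acts as both target and initiator: cvl_0_0 moves into cvl_0_0_ns_spdk as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), and an iptables rule opens TCP/4420 for NVMe/TCP. Standalone commands, run as root; the log does not state the cabling, but the successful cross-namespace pings imply the two ports are looped back-to-back:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into its own namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator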
00:12:29.679 [2024-10-09 10:52:49.113420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:29.679 [2024-10-09 10:52:49.113559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:29.679 [2024-10-09 10:52:49.113619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:29.679 [2024-10-09 10:52:49.113620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:29.940 10:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:29.940 10:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0 00:12:29.940 10:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:29.940 10:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:29.940 10:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:29.940 10:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:29.940 10:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:29.940 10:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.940 10:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:29.940 [2024-10-09 10:52:49.767783] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:29.940 10:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.940 10:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:29.940 10:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.940 10:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:29.940 10:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.940 10:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:29.940 10:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:29.940 10:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.940 10:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:29.940 10:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.940 10:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:29.940 10:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.940 10:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:29.940 10:52:49 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.940 10:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:29.940 10:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.940 10:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:29.940 [2024-10-09 10:52:49.835613] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:29.940 10:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.940 10:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:12:29.940 10:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:12:29.940 10:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:12:29.940 10:52:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:32.482 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:35.021 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:37.562 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:39.473 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:42.015 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:43.927 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:46.467 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:49.005 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:51.548 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:53.478 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:56.018 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:58.558 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:00.575 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:03.146 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:05.065 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:07.606 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:10.146 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:12.686 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:14.595 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:17.136 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:19.676 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:21.586 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:24.126 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:26.669 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:28.581 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:31.124 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:33.671 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:36.225 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:38.136 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:40.678 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:43.223 [2024-10-09 
10:54:02.682418] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd74660 is same with the state(6) to be set 00:13:43.223 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:45.134 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:47.675 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:50.217 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:52.127 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:54.668 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:57.209 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:59.122 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:01.848 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:04.164 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:06.699 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:08.608 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:11.146 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:13.684 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:15.588 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:18.126 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:20.667 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:23.206 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:25.119 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:27.690 [2024-10-09 10:54:47.357408] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd74660 is same with the state(6) to be set 00:14:27.690 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:30.232 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:32.143 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:34.686 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:36.597 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:39.138 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:41.676 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:43.583 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:46.134 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:48.676 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:50.584 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:53.122 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:55.669 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:57.577 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:00.214 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:02.150 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:04.693 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:07.232 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:09.773 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:11.683 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:14.226 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:16.138 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:18.680 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:21.223 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:23.133 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:25.675 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:28.216 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:30.126 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:32.668 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:35.212 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:37.123 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:39.667 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:42.210 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:44.119 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:46.679 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:49.218 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:51.126 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:53.665 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:56.203 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:58.110 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:00.727 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:02.637 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:05.179 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:07.719 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:09.628 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:12.170 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:14.709 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:17.256 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:19.166 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:21.709 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:24.250 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:24.250 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:16:24.250 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:16:24.250 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@514 -- # nvmfcleanup 00:16:24.250 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:16:24.250 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:24.250 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:16:24.250 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:24.250 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:24.250 rmmod nvme_tcp 00:16:24.250 rmmod nvme_fabrics 00:16:24.250 rmmod nvme_keyring 00:16:24.250 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:24.250 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:16:24.250 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:16:24.250 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@515 -- # '[' -n 1739176 ']' 00:16:24.250 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # killprocess 1739176 00:16:24.250 10:56:43 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 1739176 ']' 00:16:24.250 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 1739176 00:16:24.250 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname 00:16:24.250 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:24.250 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1739176 00:16:24.250 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:24.250 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:24.250 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1739176' 00:16:24.250 killing process with pid 1739176 00:16:24.250 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 1739176 00:16:24.250 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 1739176 00:16:24.250 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:16:24.250 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:16:24.250 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:16:24.250 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:16:24.250 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # iptables-save 00:16:24.250 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:16:24.250 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # iptables-restore 00:16:24.250 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:24.250 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:24.250 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:24.250 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:24.250 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:26.162 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:26.162 00:16:26.162 real 4m4.991s 00:16:26.162 user 15m29.361s 00:16:26.162 sys 0m28.187s 00:16:26.162 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:26.162 10:56:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:26.162 ************************************ 00:16:26.162 END TEST nvmf_connect_disconnect 00:16:26.162 ************************************ 00:16:26.162 10:56:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test 
nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:26.162 10:56:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:26.162 10:56:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:26.162 10:56:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:26.162 ************************************ 00:16:26.162 START TEST nvmf_multitarget 00:16:26.162 ************************************ 00:16:26.162 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:26.162 * Looking for test storage... 00:16:26.162 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:26.162 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:26.162 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lcov --version 00:16:26.162 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:26.423 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:26.423 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:26.423 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:26.423 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:26.423 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:16:26.423 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:16:26.423 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:16:26.423 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:16:26.423 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:16:26.423 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:16:26.423 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:16:26.423 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:26.423 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:16:26.423 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:16:26.423 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:26.423 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:26.423 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:16:26.423 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:16:26.423 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:26.423 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:16:26.423 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:16:26.423 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:16:26.423 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:16:26.423 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:26.423 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:16:26.423 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:16:26.423 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:26.423 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:26.423 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:16:26.423 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:26.423 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:26.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:26.424 --rc genhtml_branch_coverage=1 00:16:26.424 --rc genhtml_function_coverage=1 00:16:26.424 --rc genhtml_legend=1 00:16:26.424 --rc geninfo_all_blocks=1 00:16:26.424 --rc geninfo_unexecuted_blocks=1 00:16:26.424 00:16:26.424 ' 00:16:26.424 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:26.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:26.424 --rc genhtml_branch_coverage=1 00:16:26.424 --rc genhtml_function_coverage=1 00:16:26.424 --rc genhtml_legend=1 00:16:26.424 --rc geninfo_all_blocks=1 00:16:26.424 --rc geninfo_unexecuted_blocks=1 00:16:26.424 00:16:26.424 ' 00:16:26.424 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:26.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:26.424 --rc genhtml_branch_coverage=1 00:16:26.424 --rc genhtml_function_coverage=1 00:16:26.424 --rc genhtml_legend=1 00:16:26.424 --rc geninfo_all_blocks=1 00:16:26.424 --rc geninfo_unexecuted_blocks=1 00:16:26.424 00:16:26.424 ' 00:16:26.424 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:26.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:26.424 --rc genhtml_branch_coverage=1 00:16:26.424 --rc genhtml_function_coverage=1 00:16:26.424 --rc genhtml_legend=1 00:16:26.424 --rc geninfo_all_blocks=1 00:16:26.424 --rc geninfo_unexecuted_blocks=1 00:16:26.424 00:16:26.424 ' 00:16:26.424 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:26.424 10:56:46 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:16:26.424 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:26.424 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:26.424 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:26.424 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:26.424 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:26.424 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:26.424 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:26.424 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:26.424 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:26.424 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:26.424 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:26.424 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:26.424 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:26.424 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:26.424 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:26.424 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:26.424 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:26.424 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:16:26.424 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:26.424 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:26.424 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:26.424 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.424 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.424 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.424 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:16:26.424 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.424 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:16:26.424 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:26.424 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:26.424 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:26.424 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:26.424 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:26.424 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:26.424 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:26.424 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:26.424 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:26.424 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:26.424 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:16:26.424 10:56:46 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:16:26.424 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:16:26.424 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:26.424 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # prepare_net_devs 00:16:26.424 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@436 -- # local -g is_hw=no 00:16:26.424 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # remove_spdk_ns 00:16:26.424 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:26.424 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:26.424 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:26.424 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:16:26.424 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:16:26.424 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:16:26.424 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:34.565 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:34.565 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:16:34.565 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:34.565 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:34.565 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:34.565 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:34.565 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:34.565 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:16:34.565 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:34.566 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:16:34.566 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:16:34.566 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:16:34.566 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:16:34.566 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:16:34.566 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:16:34.566 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:34.566 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:34.566 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:16:34.566 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:34.566 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:34.566 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:34.566 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:34.566 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:34.566 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:34.566 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:34.566 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:34.566 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:34.566 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:34.566 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:34.566 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:34.566 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:34.566 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:34.566 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:34.566 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:34.566 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:16:34.566 Found 0000:31:00.0 (0x8086 - 0x159b) 00:16:34.566 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:34.566 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:34.566 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:34.566 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:34.566 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:34.566 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:34.566 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:16:34.566 Found 0000:31:00.1 (0x8086 - 0x159b) 00:16:34.566 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:34.566 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:34.566 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:34.566 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:16:34.566 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:34.566 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:34.566 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:34.566 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:34.566 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:16:34.566 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:34.566 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:16:34.566 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:34.566 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ up == up ]] 00:16:34.566 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:16:34.566 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:34.566 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:16:34.566 Found net devices under 0000:31:00.0: cvl_0_0 00:16:34.566 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:16:34.566 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:16:34.566 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:34.566 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:16:34.566 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:34.566 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ up == up ]] 00:16:34.566 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:16:34.566 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:34.566 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:16:34.566 Found net devices under 0000:31:00.1: cvl_0_1 00:16:34.566 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:16:34.566 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:16:34.566 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # is_hw=yes 00:16:34.566 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:16:34.566 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:16:34.566 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:16:34.566 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:34.566 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:34.566 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:34.566 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:34.566 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:34.566 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:34.566 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:34.566 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:34.566 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:34.566 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:34.566 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:34.566 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:34.566 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:34.566 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:34.566 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:34.566 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:34.566 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:34.566 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:34.566 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:34.566 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:34.566 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:34.566 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:34.566 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:34.566 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:34.566 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.625 ms 00:16:34.566 00:16:34.566 --- 10.0.0.2 ping statistics --- 00:16:34.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:34.566 rtt min/avg/max/mdev = 0.625/0.625/0.625/0.000 ms 00:16:34.566 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:34.566 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:34.566 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.234 ms 00:16:34.566 00:16:34.566 --- 10.0.0.1 ping statistics --- 00:16:34.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:34.566 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:16:34.566 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:34.566 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # return 0 00:16:34.566 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:16:34.567 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:34.567 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:16:34.567 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:16:34.567 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:34.567 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:16:34.567 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:16:34.567 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:16:34.567 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:16:34.567 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:34.567 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:34.567 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # nvmfpid=1790740 00:16:34.567 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # waitforlisten 1790740 00:16:34.567 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:34.567 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 1790740 ']' 00:16:34.567 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:34.567 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:34.567 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:34.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:34.567 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:34.567 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:34.567 [2024-10-09 10:56:53.883172] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 
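
The namespace plumbing traced above is what gives the target and the initiator genuinely separate network stacks on one machine. Below is a condensed sketch of that nvmf_tcp_init sequence; the interface names and 10.0.0.0/24 addresses are the values from this run, so adjust them for your own NICs, and note it needs root:

    #!/usr/bin/env bash
    # Sketch of the nvmf_tcp_init sequence traced above. Interface names and
    # addresses are this run's values; adjust for your hardware. Needs root.
    set -euo pipefail

    TARGET_IF=cvl_0_0        # NIC that moves into the target's namespace
    INITIATOR_IF=cvl_0_1     # NIC that stays in the default namespace
    NS=cvl_0_0_ns_spdk
    TARGET_IP=10.0.0.2
    INITIATOR_IP=10.0.0.1

    # Clean addressing, then isolate the target NIC in its own namespace.
    ip -4 addr flush "$TARGET_IF"
    ip -4 addr flush "$INITIATOR_IF"
    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"

    # Address both ends and bring the links (plus the namespace loopback) up.
    ip addr add "$INITIATOR_IP/24" dev "$INITIATOR_IF"
    ip netns exec "$NS" ip addr add "$TARGET_IP/24" dev "$TARGET_IF"
    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up

    # Open the NVMe/TCP port; the comment tag lets teardown find the rule.
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment "SPDK_NVMF:-I INPUT 1 -i $INITIATOR_IF -p tcp --dport 4420 -j ACCEPT"

    # Prove both directions work before nvmf_tgt is launched in the netns.
    ping -c 1 "$TARGET_IP"
    ip netns exec "$NS" ping -c 1 "$INITIATOR_IP"
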
00:16:34.567 [2024-10-09 10:56:53.883242] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:34.567 [2024-10-09 10:56:54.024695] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:34.567 [2024-10-09 10:56:54.055827] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:34.567 [2024-10-09 10:56:54.073813] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:34.567 [2024-10-09 10:56:54.073847] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:34.567 [2024-10-09 10:56:54.073855] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:34.567 [2024-10-09 10:56:54.073862] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:34.567 [2024-10-09 10:56:54.073868] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:34.567 [2024-10-09 10:56:54.075582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:34.567 [2024-10-09 10:56:54.075836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:34.567 [2024-10-09 10:56:54.075995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:34.567 [2024-10-09 10:56:54.075996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:34.826 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:34.826 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:16:34.826 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:16:34.826 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:34.826 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:34.826 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:34.826 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:34.826 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:34.826 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:16:35.086 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:16:35.086 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:16:35.086 "nvmf_tgt_1" 00:16:35.086 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:16:35.086 "nvmf_tgt_2" 00:16:35.086 10:56:55 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:35.086 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:16:35.346 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:16:35.346 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:16:35.346 true 00:16:35.346 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:16:35.346 true 00:16:35.606 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:35.606 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:16:35.606 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:16:35.606 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:16:35.606 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:16:35.606 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@514 -- # nvmfcleanup 00:16:35.606 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:16:35.606 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:35.606 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:16:35.606 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:35.606 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:35.606 rmmod nvme_tcp 00:16:35.606 rmmod nvme_fabrics 00:16:35.606 rmmod nvme_keyring 00:16:35.606 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:35.606 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:16:35.606 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:16:35.606 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@515 -- # '[' -n 1790740 ']' 00:16:35.606 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # killprocess 1790740 00:16:35.606 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 1790740 ']' 00:16:35.606 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # kill -0 1790740 00:16:35.606 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname 00:16:35.606 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:35.606 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1790740 00:16:35.866 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # 
process_name=reactor_0 00:16:35.866 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:35.866 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1790740' 00:16:35.866 killing process with pid 1790740 00:16:35.866 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 1790740 00:16:35.866 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 1790740 00:16:35.866 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:16:35.866 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:16:35.866 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:16:35.866 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:16:35.866 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # iptables-restore 00:16:35.866 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # iptables-save 00:16:35.866 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:16:35.866 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:35.866 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:35.866 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:35.866 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:35.866 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:38.406 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:38.406 00:16:38.406 real 0m11.748s 00:16:38.406 user 0m9.472s 00:16:38.406 sys 0m6.181s 00:16:38.406 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:38.406 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:38.406 ************************************ 00:16:38.406 END TEST nvmf_multitarget 00:16:38.406 ************************************ 00:16:38.406 10:56:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:38.406 10:56:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:38.406 10:56:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:38.406 10:56:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:38.406 ************************************ 00:16:38.406 START TEST nvmf_rpc 00:16:38.406 ************************************ 00:16:38.406 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:38.406 * Looking for test storage... 
00:16:38.406 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:38.406 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:38.406 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:16:38.406 10:56:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:38.406 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:38.406 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:38.406 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:38.406 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:38.406 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:16:38.406 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:16:38.406 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:16:38.406 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:16:38.406 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:16:38.406 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:16:38.406 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:16:38.407 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:38.407 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:16:38.407 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:16:38.407 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:38.407 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:38.407 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:16:38.407 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:16:38.407 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:38.407 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:16:38.407 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:16:38.407 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:16:38.407 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:16:38.407 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:38.407 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:16:38.407 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:16:38.407 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:38.407 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:38.407 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:16:38.407 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:38.407 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:38.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:38.407 --rc genhtml_branch_coverage=1 00:16:38.407 --rc genhtml_function_coverage=1 00:16:38.407 --rc genhtml_legend=1 00:16:38.407 --rc geninfo_all_blocks=1 00:16:38.407 --rc geninfo_unexecuted_blocks=1 00:16:38.407 00:16:38.407 ' 00:16:38.407 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:38.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:38.407 --rc genhtml_branch_coverage=1 00:16:38.407 --rc genhtml_function_coverage=1 00:16:38.407 --rc genhtml_legend=1 00:16:38.407 --rc geninfo_all_blocks=1 00:16:38.407 --rc geninfo_unexecuted_blocks=1 00:16:38.407 00:16:38.407 ' 00:16:38.407 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:38.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:38.407 --rc genhtml_branch_coverage=1 00:16:38.407 --rc genhtml_function_coverage=1 00:16:38.407 --rc genhtml_legend=1 00:16:38.407 --rc geninfo_all_blocks=1 00:16:38.407 --rc geninfo_unexecuted_blocks=1 00:16:38.407 00:16:38.407 ' 00:16:38.407 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:38.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:38.407 --rc genhtml_branch_coverage=1 00:16:38.407 --rc genhtml_function_coverage=1 00:16:38.407 --rc genhtml_legend=1 00:16:38.407 --rc geninfo_all_blocks=1 00:16:38.407 --rc geninfo_unexecuted_blocks=1 00:16:38.407 00:16:38.407 ' 00:16:38.407 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:38.407 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:16:38.407 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
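
The scripts/common.sh trace above shows how the harness decides whether the installed lcov predates 2.x: lt splits both version strings on the characters ., - and :, then compares the fields numerically from left to right. Here is a reduced sketch of that comparison; unlike the in-tree helper, which validates each field through its decimal() function, this version assumes purely numeric fields:

    lt() { cmp_versions "$1" '<' "$2"; }       # "is $1 strictly older than $2?"

    cmp_versions() {                           # usage: cmp_versions 1.15 '<' 2
        local -a ver1 ver2
        local op=$2 v a b
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        # Walk the longer of the two field lists; missing fields count as 0.
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            a=${ver1[v]:-0} b=${ver2[v]:-0}
            (( a > b )) && { [[ $op == '>' ]]; return; }
            (( a < b )) && { [[ $op == '<' ]]; return; }
        done
        # All fields equal: only the inclusive operators succeed.
        [[ $op == '==' || $op == '<=' || $op == '>=' ]]
    }

    lt 1.15 2 && echo "lcov 1.15 predates 2.x"   # prints the message
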
00:16:38.407 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:38.407 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:38.407 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:38.407 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:38.407 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:38.407 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:38.407 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:38.407 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:38.407 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:38.407 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:38.407 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:38.407 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:38.407 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:38.407 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:38.407 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:38.407 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:38.407 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:16:38.407 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:38.407 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:38.407 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:38.407 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.407 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.407 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.407 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:16:38.407 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.407 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:16:38.407 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:38.407 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:38.407 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:38.407 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:38.407 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:38.407 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:38.407 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:38.407 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:38.407 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:38.407 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:38.407 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:16:38.407 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:16:38.407 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:16:38.407 10:56:58 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:38.407 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # prepare_net_devs 00:16:38.407 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@436 -- # local -g is_hw=no 00:16:38.407 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # remove_spdk_ns 00:16:38.407 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:38.407 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:38.407 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:38.407 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:16:38.407 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:16:38.407 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:16:38.407 10:56:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:46.541 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:46.541 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:16:46.541 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:46.541 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:46.541 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:46.541 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:46.541 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:46.541 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:16:46.541 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:46.541 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:16:46.541 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:16:46.541 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:16:46.541 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:16:46.541 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:16:46.541 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:16:46.541 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:46.541 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:46.541 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:46.541 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:46.541 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:46.541 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:46.541 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:46.541 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:46.541 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:46.541 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:46.541 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:46.541 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:46.541 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:46.541 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:46.541 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:46.541 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:46.541 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:46.541 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:46.541 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:46.541 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:16:46.541 Found 0000:31:00.0 (0x8086 - 0x159b) 00:16:46.541 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:46.541 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:46.541 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:46.541 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:46.541 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:46.541 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:46.541 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:16:46.541 Found 0000:31:00.1 (0x8086 - 0x159b) 00:16:46.541 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:46.541 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:46.541 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:46.541 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:46.541 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:46.541 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:46.541 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:46.541 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:46.541 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:16:46.541 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:46.541 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:16:46.541 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:46.541 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:16:46.541 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:16:46.541 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:46.541 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:16:46.541 Found net devices under 0000:31:00.0: cvl_0_0 00:16:46.541 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:16:46.541 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:16:46.541 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:46.541 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:16:46.541 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:46.541 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:16:46.542 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:16:46.542 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:46.542 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:16:46.542 Found net devices under 0000:31:00.1: cvl_0_1 00:16:46.542 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:16:46.542 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:16:46.542 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # is_hw=yes 00:16:46.542 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:16:46.542 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:16:46.542 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:16:46.542 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:46.542 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:46.542 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:46.542 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:46.542 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:46.542 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:46.542 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:46.542 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:46.542 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:46.542 10:57:05 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:46.542 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:46.542 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:46.542 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:46.542 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:46.542 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:46.542 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:46.542 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:46.542 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:46.542 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:46.542 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:46.542 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:46.542 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:46.542 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:46.542 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:46.542 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.581 ms 00:16:46.542 00:16:46.542 --- 10.0.0.2 ping statistics --- 00:16:46.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:46.542 rtt min/avg/max/mdev = 0.581/0.581/0.581/0.000 ms 00:16:46.542 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:46.542 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:46.542 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.299 ms 00:16:46.542 00:16:46.542 --- 10.0.0.1 ping statistics --- 00:16:46.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:46.542 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:16:46.542 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:46.542 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # return 0 00:16:46.542 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:16:46.542 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:46.542 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:16:46.542 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:16:46.542 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:46.542 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:16:46.542 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:16:46.542 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:16:46.542 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:16:46.542 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:46.542 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:46.542 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # nvmfpid=1795434 00:16:46.542 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # waitforlisten 1795434 00:16:46.542 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 1795434 ']' 00:16:46.542 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:46.542 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:46.542 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:46.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:46.542 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:46.542 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:46.542 10:57:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:46.542 [2024-10-09 10:57:05.657283] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:16:46.542 [2024-10-09 10:57:05.657342] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:46.542 [2024-10-09 10:57:05.797624] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
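
Note the firewall pattern in the setup above: the ipts wrapper tags every rule it inserts with an 'SPDK_NVMF:' comment, and the iptr teardown step (visible at the end of the previous test) removes exactly those rules by filtering a full save/restore cycle. A sketch of the pair, mirroring the wrappers in nvmf/common.sh; be aware that iptr reloads the entire ruleset minus the tagged entries:

    ipts() {  # add a rule, tagging it with the arguments that created it
        iptables "$@" -m comment --comment "SPDK_NVMF:$*"
    }

    iptr() {  # drop every rule this harness added, leaving others intact
        iptables-save | grep -v SPDK_NVMF | iptables-restore
    }

    ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # open NVMe/TCP
    # ... run tests ...
    iptr                                                       # tagged rules only
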
00:16:46.542 [2024-10-09 10:57:05.828718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:46.542 [2024-10-09 10:57:05.846550] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:46.542 [2024-10-09 10:57:05.846579] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:46.542 [2024-10-09 10:57:05.846587] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:46.542 [2024-10-09 10:57:05.846594] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:46.542 [2024-10-09 10:57:05.846600] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:46.542 [2024-10-09 10:57:05.848242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:46.542 [2024-10-09 10:57:05.848357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:46.542 [2024-10-09 10:57:05.848513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:46.542 [2024-10-09 10:57:05.848513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:46.542 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:46.542 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0 00:16:46.542 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:16:46.542 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:46.542 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:46.542 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:46.542 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:16:46.542 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.542 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:46.542 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.542 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:16:46.542 "tick_rate": 2394400000, 00:16:46.542 "poll_groups": [ 00:16:46.542 { 00:16:46.542 "name": "nvmf_tgt_poll_group_000", 00:16:46.542 "admin_qpairs": 0, 00:16:46.542 "io_qpairs": 0, 00:16:46.542 "current_admin_qpairs": 0, 00:16:46.542 "current_io_qpairs": 0, 00:16:46.542 "pending_bdev_io": 0, 00:16:46.542 "completed_nvme_io": 0, 00:16:46.542 "transports": [] 00:16:46.542 }, 00:16:46.542 { 00:16:46.542 "name": "nvmf_tgt_poll_group_001", 00:16:46.542 "admin_qpairs": 0, 00:16:46.542 "io_qpairs": 0, 00:16:46.542 "current_admin_qpairs": 0, 00:16:46.542 "current_io_qpairs": 0, 00:16:46.542 "pending_bdev_io": 0, 00:16:46.542 "completed_nvme_io": 0, 00:16:46.542 "transports": [] 00:16:46.542 }, 00:16:46.542 { 00:16:46.542 "name": "nvmf_tgt_poll_group_002", 00:16:46.542 "admin_qpairs": 0, 00:16:46.542 "io_qpairs": 0, 00:16:46.542 "current_admin_qpairs": 0, 00:16:46.542 "current_io_qpairs": 0, 00:16:46.542 "pending_bdev_io": 0, 00:16:46.542 "completed_nvme_io": 0, 00:16:46.542 "transports": [] 00:16:46.542 }, 00:16:46.542 { 00:16:46.542 "name": "nvmf_tgt_poll_group_003", 00:16:46.542 "admin_qpairs": 0, 
00:16:46.542 "io_qpairs": 0, 00:16:46.542 "current_admin_qpairs": 0, 00:16:46.542 "current_io_qpairs": 0, 00:16:46.542 "pending_bdev_io": 0, 00:16:46.542 "completed_nvme_io": 0, 00:16:46.542 "transports": [] 00:16:46.542 } 00:16:46.542 ] 00:16:46.542 }' 00:16:46.542 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:16:46.542 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:16:46.542 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:16:46.542 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:16:46.802 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:16:46.802 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:16:46.802 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:16:46.802 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:46.802 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.802 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:46.802 [2024-10-09 10:57:06.630227] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:46.802 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.802 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:16:46.802 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.802 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:46.802 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.802 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:16:46.802 "tick_rate": 2394400000, 00:16:46.802 "poll_groups": [ 00:16:46.802 { 00:16:46.802 "name": "nvmf_tgt_poll_group_000", 00:16:46.802 "admin_qpairs": 0, 00:16:46.802 "io_qpairs": 0, 00:16:46.802 "current_admin_qpairs": 0, 00:16:46.802 "current_io_qpairs": 0, 00:16:46.802 "pending_bdev_io": 0, 00:16:46.802 "completed_nvme_io": 0, 00:16:46.802 "transports": [ 00:16:46.802 { 00:16:46.802 "trtype": "TCP" 00:16:46.802 } 00:16:46.802 ] 00:16:46.802 }, 00:16:46.802 { 00:16:46.802 "name": "nvmf_tgt_poll_group_001", 00:16:46.802 "admin_qpairs": 0, 00:16:46.802 "io_qpairs": 0, 00:16:46.802 "current_admin_qpairs": 0, 00:16:46.802 "current_io_qpairs": 0, 00:16:46.802 "pending_bdev_io": 0, 00:16:46.802 "completed_nvme_io": 0, 00:16:46.802 "transports": [ 00:16:46.802 { 00:16:46.802 "trtype": "TCP" 00:16:46.802 } 00:16:46.802 ] 00:16:46.802 }, 00:16:46.802 { 00:16:46.802 "name": "nvmf_tgt_poll_group_002", 00:16:46.802 "admin_qpairs": 0, 00:16:46.802 "io_qpairs": 0, 00:16:46.802 "current_admin_qpairs": 0, 00:16:46.802 "current_io_qpairs": 0, 00:16:46.802 "pending_bdev_io": 0, 00:16:46.802 "completed_nvme_io": 0, 00:16:46.802 "transports": [ 00:16:46.802 { 00:16:46.802 "trtype": "TCP" 00:16:46.802 } 00:16:46.802 ] 00:16:46.802 }, 00:16:46.802 { 00:16:46.802 "name": "nvmf_tgt_poll_group_003", 00:16:46.802 "admin_qpairs": 0, 00:16:46.802 "io_qpairs": 0, 00:16:46.802 "current_admin_qpairs": 0, 00:16:46.802 "current_io_qpairs": 0, 00:16:46.802 "pending_bdev_io": 
0, 00:16:46.802 "completed_nvme_io": 0, 00:16:46.802 "transports": [ 00:16:46.802 { 00:16:46.802 "trtype": "TCP" 00:16:46.802 } 00:16:46.802 ] 00:16:46.802 } 00:16:46.802 ] 00:16:46.802 }' 00:16:46.802 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:16:46.802 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:16:46.802 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:16:46.802 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:46.802 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:16:46.802 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:16:46.802 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:16:46.802 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:16:46.802 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:46.802 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:16:46.802 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:16:46.802 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:16:46.802 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:16:46.802 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:46.802 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.802 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:46.802 Malloc1 00:16:46.802 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.802 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:46.802 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.802 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:46.802 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.802 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:46.802 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.802 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:47.061 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.061 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:16:47.061 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.061 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:47.061 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.061 10:57:06 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:47.061 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.061 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:47.061 [2024-10-09 10:57:06.832724] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:47.061 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.061 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:16:47.061 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:16:47.061 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:16:47.061 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:16:47.061 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:47.061 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:16:47.061 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:47.061 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:16:47.061 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:47.061 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:16:47.061 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:16:47.061 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:16:47.061 [2024-10-09 10:57:06.869411] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396' 00:16:47.061 Failed to write to /dev/nvme-fabrics: Input/output error 00:16:47.061 could not add new controller: failed to write to nvme-fabrics device 00:16:47.061 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:16:47.061 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:47.061 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:47.061 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 
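
The Input/output error above is the expected half of this test: with allow_any_host disabled, the target rejects any HOSTNQN that is not on the subsystem's host list, and the entries that follow whitelist the host and connect successfully. The whole flow condensed into a sketch (rpc_cmd in the trace is a wrapper around scripts/rpc.py; the NQNs and addresses are this run's values):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    SUBSYS=nqn.2016-06.io.spdk:cnode1
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
    HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396

    # Enforce the per-subsystem host list, then connect as an unlisted host.
    $RPC nvmf_subsystem_allow_any_host -d "$SUBSYS"
    nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" \
         -t tcp -n "$SUBSYS" -a 10.0.0.2 -s 4420 || echo "rejected, as expected"

    # Whitelist this HOSTNQN; the same connect now succeeds.
    $RPC nvmf_subsystem_add_host "$SUBSYS" "$HOSTNQN"
    nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" \
         -t tcp -n "$SUBSYS" -a 10.0.0.2 -s 4420
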
00:16:47.061 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:47.061 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.061 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:47.061 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.061 10:57:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:48.971 10:57:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:16:48.971 10:57:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:16:48.971 10:57:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:48.971 10:57:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:48.971 10:57:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:16:50.880 10:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:50.880 10:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:50.880 10:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:50.880 10:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:50.880 10:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:50.880 10:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:16:50.880 10:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:50.880 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:50.880 10:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:50.880 10:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:16:50.880 10:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:50.880 10:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:50.880 10:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:50.880 10:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:50.880 10:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:16:50.880 10:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:50.880 10:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.880 10:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:16:50.880 10:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.880 10:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:50.880 10:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:16:50.880 10:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:50.880 10:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:16:50.880 10:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:50.880 10:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:16:50.880 10:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:50.880 10:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:16:50.880 10:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:50.880 10:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:16:50.880 10:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:16:50.880 10:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:50.880 [2024-10-09 10:57:10.628292] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396' 00:16:50.880 Failed to write to /dev/nvme-fabrics: Input/output error 00:16:50.880 could not add new controller: failed to write to nvme-fabrics device 00:16:50.880 10:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:16:50.880 10:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:50.880 10:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:50.880 10:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:50.880 10:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:16:50.881 10:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.881 10:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:50.881 10:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.881 10:57:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 
--hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:52.262 10:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:16:52.262 10:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:16:52.262 10:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:52.262 10:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:52.262 10:57:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:16:54.802 10:57:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:54.802 10:57:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:54.802 10:57:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:54.802 10:57:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:54.802 10:57:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:54.802 10:57:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:16:54.802 10:57:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:54.802 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:54.802 10:57:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:54.802 10:57:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:16:54.802 10:57:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:54.802 10:57:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:54.802 10:57:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:54.802 10:57:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:54.802 10:57:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:16:54.802 10:57:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:54.802 10:57:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.802 10:57:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.802 10:57:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.802 10:57:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:16:54.802 10:57:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:54.802 10:57:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:54.802 10:57:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.802 10:57:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.802 10:57:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.802 10:57:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:54.802 10:57:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.802 10:57:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.802 [2024-10-09 10:57:14.436897] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:54.802 10:57:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.802 10:57:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:54.802 10:57:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.802 10:57:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.802 10:57:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.802 10:57:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:54.802 10:57:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.802 10:57:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.802 10:57:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.802 10:57:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:56.183 10:57:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:56.183 10:57:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:16:56.183 10:57:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:56.183 10:57:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:56.183 10:57:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:16:58.094 10:57:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:58.094 10:57:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:58.094 10:57:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:58.094 10:57:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:58.094 10:57:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:58.094 10:57:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:16:58.094 10:57:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:58.355 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:58.355 10:57:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:58.355 10:57:18 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:16:58.355 10:57:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:58.355 10:57:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:58.355 10:57:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:58.355 10:57:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:58.355 10:57:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:16:58.355 10:57:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:58.355 10:57:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.355 10:57:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:58.355 10:57:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.355 10:57:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:58.355 10:57:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.355 10:57:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:58.355 10:57:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.355 10:57:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:58.355 10:57:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:58.355 10:57:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.355 10:57:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:58.355 10:57:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.355 10:57:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:58.355 10:57:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.355 10:57:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:58.355 [2024-10-09 10:57:18.195413] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:58.355 10:57:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.355 10:57:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:58.355 10:57:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.355 10:57:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:58.355 10:57:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.355 10:57:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:58.355 10:57:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.355 10:57:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:58.355 10:57:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.355 10:57:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:00.264 10:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:00.264 10:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:17:00.264 10:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:00.264 10:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:00.264 10:57:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:17:02.276 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:02.276 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:02.276 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:02.276 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:02.276 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:02.276 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:02.276 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:02.276 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:02.276 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:02.276 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:02.276 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:02.276 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:02.276 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:02.276 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:02.276 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:17:02.276 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:02.276 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.276 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:02.276 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.276 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:02.276 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.276 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:02.276 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.276 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:02.276 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:02.276 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.276 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:02.276 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.276 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:02.276 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.276 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:02.276 [2024-10-09 10:57:21.954750] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:02.276 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.276 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:02.277 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.277 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:02.277 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.277 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:02.277 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.277 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:02.277 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.277 10:57:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:03.656 10:57:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:03.656 10:57:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:17:03.656 10:57:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:03.656 10:57:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:03.656 10:57:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:17:05.564 10:57:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:05.564 10:57:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:05.564 10:57:25 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:05.564 10:57:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:05.564 10:57:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:05.564 10:57:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:05.564 10:57:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:05.824 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:05.824 10:57:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:05.824 10:57:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:05.824 10:57:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:05.824 10:57:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:05.824 10:57:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:05.824 10:57:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:05.824 10:57:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:17:05.824 10:57:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:05.824 10:57:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.824 10:57:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:05.824 10:57:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.824 10:57:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:05.824 10:57:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.824 10:57:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:05.824 10:57:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.824 10:57:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:05.824 10:57:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:05.824 10:57:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.824 10:57:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:05.824 10:57:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.824 10:57:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:05.824 10:57:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.824 10:57:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:05.824 [2024-10-09 10:57:25.674400] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
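Each pass of the loop traced here repeats the same create/connect/teardown cycle (target/rpc.sh@81-94), gated by the serial-number poll whose counters appear in the trace (up to 16 tries, 2 seconds apart). A sketch of one iteration plus the waitforserial poll it relies on, reconstructed from the xtrace; rpc_cmd is the harness RPC wrapper, and waitforserial_disconnect, the inverse poll, is omitted for brevity:

# One iteration of the lifecycle loop in target/rpc.sh@81-94 (sketch).
NQN=nqn.2016-06.io.spdk:cnode1
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396

# Poll until a block device with the given serial appears (simplified from
# common/autotest_common.sh@1198-1208 as traced above).
waitforserial() {
    local serial=$1 i=0 nvme_device_counter=1 nvme_devices=0
    while (( i++ <= 15 )); do
        sleep 2
        nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
        (( nvme_devices == nvme_device_counter )) && return 0
    done
    return 1
}

for i in $(seq 1 5); do
    rpc_cmd nvmf_create_subsystem "$NQN" -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_ns "$NQN" Malloc1 -n 5      # fixed nsid 5
    rpc_cmd nvmf_subsystem_allow_any_host "$NQN"
    nvme connect --hostnqn="$HOSTNQN" -t tcp -n "$NQN" -a 10.0.0.2 -s 4420
    waitforserial SPDKISFASTANDAWESOME
    nvme disconnect -n "$NQN"
    # (rpc.sh also runs waitforserial_disconnect here, the inverse poll)
    rpc_cmd nvmf_subsystem_remove_ns "$NQN" 5
    rpc_cmd nvmf_delete_subsystem "$NQN"
done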
00:17:05.824 10:57:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.824 10:57:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:05.824 10:57:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.824 10:57:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:05.824 10:57:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.824 10:57:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:05.824 10:57:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.824 10:57:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:05.824 10:57:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.824 10:57:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:07.732 10:57:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:07.732 10:57:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:17:07.732 10:57:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:07.732 10:57:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:07.732 10:57:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:17:09.640 10:57:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:09.640 10:57:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:09.640 10:57:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:09.640 10:57:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:09.640 10:57:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:09.640 10:57:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:09.640 10:57:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:09.640 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:09.640 10:57:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:09.640 10:57:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:09.640 10:57:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:09.640 10:57:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:09.640 10:57:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:09.640 10:57:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w 
SPDKISFASTANDAWESOME 00:17:09.640 10:57:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:17:09.640 10:57:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:09.640 10:57:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.641 10:57:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:09.641 10:57:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.641 10:57:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:09.641 10:57:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.641 10:57:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:09.641 10:57:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.641 10:57:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:09.641 10:57:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:09.641 10:57:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.641 10:57:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:09.641 10:57:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.641 10:57:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:09.641 10:57:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.641 10:57:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:09.641 [2024-10-09 10:57:29.442074] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:09.641 10:57:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.641 10:57:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:09.641 10:57:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.641 10:57:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:09.641 10:57:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.641 10:57:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:09.641 10:57:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.641 10:57:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:09.641 10:57:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.641 10:57:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:11.549 10:57:31 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:11.549 10:57:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:17:11.549 10:57:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:11.549 10:57:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:11.549 10:57:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:17:13.458 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:13.458 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:13.458 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:13.458 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:13.458 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:13.458 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:17:13.458 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:13.458 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:13.458 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:13.458 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:17:13.458 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:13.458 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:13.458 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:13.458 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:13.458 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:17:13.458 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:13.458 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.458 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:13.458 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.458 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:13.458 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.458 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:13.458 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.458 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:17:13.458 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:13.459 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:13.459 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.459 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:13.459 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.459 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:13.459 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.459 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:13.459 [2024-10-09 10:57:33.216008] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:13.459 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.459 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:13.459 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.459 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:13.459 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.459 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:13.459 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.459 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:13.459 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.459 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:13.459 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.459 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:13.459 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.459 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:13.459 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.459 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:13.459 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.459 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:13.459 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:13.459 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.459 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:13.459 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.459 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:13.459 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.459 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:13.459 [2024-10-09 10:57:33.284007] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:13.459 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.459 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:13.459 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.459 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:13.459 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.459 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:13.459 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.459 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:13.459 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.459 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:13.459 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.459 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:13.459 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.459 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:13.459 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.459 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:13.459 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.459 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:13.459 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:13.459 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.459 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:13.459 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.459 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:13.459 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.459 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:13.459 [2024-10-09 10:57:33.352033] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 
port 4420 *** 00:17:13.459 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.459 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:13.459 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.459 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:13.459 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.459 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:13.459 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.459 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:13.459 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.459 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:13.459 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.459 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:13.459 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.459 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:13.459 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.459 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:13.459 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.459 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:13.459 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:13.459 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.459 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:13.459 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.459 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:13.459 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.459 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:13.459 [2024-10-09 10:57:33.424105] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:13.459 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.459 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:13.459 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.459 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:17:13.459 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.459 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:13.459 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.459 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:13.459 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.459 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:13.459 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.459 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:13.719 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.719 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:13.719 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.719 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:13.719 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.719 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:13.719 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:13.720 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.720 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:13.720 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.720 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:13.720 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.720 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:13.720 [2024-10-09 10:57:33.492172] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:13.720 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.720 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:13.720 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.720 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:13.720 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.720 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:13.720 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.720 10:57:33 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:13.720 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.720 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:13.720 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.720 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:13.720 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.720 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:13.720 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.720 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:13.720 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.720 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:17:13.720 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.720 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:13.720 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.720 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:17:13.720 "tick_rate": 2394400000, 00:17:13.720 "poll_groups": [ 00:17:13.720 { 00:17:13.720 "name": "nvmf_tgt_poll_group_000", 00:17:13.720 "admin_qpairs": 0, 00:17:13.720 "io_qpairs": 224, 00:17:13.720 "current_admin_qpairs": 0, 00:17:13.720 "current_io_qpairs": 0, 00:17:13.720 "pending_bdev_io": 0, 00:17:13.720 "completed_nvme_io": 225, 00:17:13.720 "transports": [ 00:17:13.720 { 00:17:13.720 "trtype": "TCP" 00:17:13.720 } 00:17:13.720 ] 00:17:13.720 }, 00:17:13.720 { 00:17:13.720 "name": "nvmf_tgt_poll_group_001", 00:17:13.720 "admin_qpairs": 1, 00:17:13.720 "io_qpairs": 223, 00:17:13.720 "current_admin_qpairs": 0, 00:17:13.720 "current_io_qpairs": 0, 00:17:13.720 "pending_bdev_io": 0, 00:17:13.720 "completed_nvme_io": 274, 00:17:13.720 "transports": [ 00:17:13.720 { 00:17:13.720 "trtype": "TCP" 00:17:13.720 } 00:17:13.720 ] 00:17:13.720 }, 00:17:13.720 { 00:17:13.720 "name": "nvmf_tgt_poll_group_002", 00:17:13.720 "admin_qpairs": 6, 00:17:13.720 "io_qpairs": 218, 00:17:13.720 "current_admin_qpairs": 0, 00:17:13.720 "current_io_qpairs": 0, 00:17:13.720 "pending_bdev_io": 0, 00:17:13.720 "completed_nvme_io": 514, 00:17:13.720 "transports": [ 00:17:13.720 { 00:17:13.720 "trtype": "TCP" 00:17:13.720 } 00:17:13.720 ] 00:17:13.720 }, 00:17:13.720 { 00:17:13.720 "name": "nvmf_tgt_poll_group_003", 00:17:13.720 "admin_qpairs": 0, 00:17:13.720 "io_qpairs": 224, 00:17:13.720 "current_admin_qpairs": 0, 00:17:13.720 "current_io_qpairs": 0, 00:17:13.720 "pending_bdev_io": 0, 00:17:13.720 "completed_nvme_io": 226, 00:17:13.720 "transports": [ 00:17:13.720 { 00:17:13.720 "trtype": "TCP" 00:17:13.720 } 00:17:13.720 ] 00:17:13.720 } 00:17:13.720 ] 00:17:13.720 }' 00:17:13.720 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:17:13.720 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 
'filter=.poll_groups[].admin_qpairs' 00:17:13.720 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:17:13.720 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:13.720 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:17:13.720 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:17:13.720 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:17:13.720 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:17:13.720 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:13.720 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:17:13.720 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:17:13.720 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:17:13.720 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:17:13.720 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@514 -- # nvmfcleanup 00:17:13.720 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:17:13.720 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:13.720 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:17:13.720 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:13.720 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:13.720 rmmod nvme_tcp 00:17:13.720 rmmod nvme_fabrics 00:17:13.720 rmmod nvme_keyring 00:17:13.981 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:13.981 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:17:13.981 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:17:13.981 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@515 -- # '[' -n 1795434 ']' 00:17:13.981 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # killprocess 1795434 00:17:13.981 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 1795434 ']' 00:17:13.981 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 1795434 00:17:13.981 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname 00:17:13.981 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:13.981 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1795434 00:17:13.981 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:13.981 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:13.981 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1795434' 00:17:13.981 killing process with pid 1795434 00:17:13.981 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@969 -- # kill 1795434 00:17:13.981 10:57:33 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@974 -- # wait 1795434 00:17:13.981 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:17:13.981 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:17:13.981 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:17:13.981 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:17:13.981 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # iptables-save 00:17:13.981 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:17:13.981 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # iptables-restore 00:17:13.981 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:13.981 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:13.981 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:13.981 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:13.981 10:57:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:16.521 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:16.521 00:17:16.521 real 0m38.134s 00:17:16.521 user 1m54.243s 00:17:16.521 sys 0m7.804s 00:17:16.521 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:16.521 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:16.521 ************************************ 00:17:16.521 END TEST nvmf_rpc 00:17:16.521 ************************************ 00:17:16.521 10:57:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:17:16.521 10:57:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:16.521 10:57:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:16.521 10:57:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:16.521 ************************************ 00:17:16.521 START TEST nvmf_invalid 00:17:16.521 ************************************ 00:17:16.521 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:17:16.521 * Looking for test storage... 
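A quick note on the qpair totals that closed out nvmf_rpc above ((( 7 > 0 )) and (( 889 > 0 ))): they come from rpc.sh's jsum helper. A minimal sketch reconstructed from the xtrace lines at rpc.sh@19-20, assuming the JSON captured into $stats at rpc.sh@110 is what gets fed in (the input redirection itself is not visible in the trace):

    # jsum: sum one numeric field across all poll groups in the stats JSON.
    # jq emits one number per poll group; awk accumulates and prints the total.
    jsum() {
        local filter=$1
        jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
    }

    jsum '.poll_groups[].admin_qpairs'   # 0+1+6+0         -> 7
    jsum '.poll_groups[].io_qpairs'      # 224+223+218+224 -> 889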
00:17:16.521 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:16.521 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:16.521 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lcov --version 00:17:16.521 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:16.521 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:16.521 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:16.521 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:16.521 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:16.521 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:17:16.521 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:17:16.521 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:17:16.521 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:17:16.521 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:17:16.521 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:17:16.521 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:17:16.521 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:16.521 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:17:16.521 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:17:16.521 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:16.521 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:16.521 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:17:16.521 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:17:16.521 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:16.521 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:17:16.521 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:17:16.521 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:17:16.521 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:17:16.521 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:16.521 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:17:16.521 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:17:16.521 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:16.521 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:16.521 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:17:16.521 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:16.521 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:16.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:16.521 --rc genhtml_branch_coverage=1 00:17:16.521 --rc genhtml_function_coverage=1 00:17:16.521 --rc genhtml_legend=1 00:17:16.521 --rc geninfo_all_blocks=1 00:17:16.521 --rc geninfo_unexecuted_blocks=1 00:17:16.521 00:17:16.521 ' 00:17:16.521 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:16.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:16.521 --rc genhtml_branch_coverage=1 00:17:16.521 --rc genhtml_function_coverage=1 00:17:16.521 --rc genhtml_legend=1 00:17:16.521 --rc geninfo_all_blocks=1 00:17:16.521 --rc geninfo_unexecuted_blocks=1 00:17:16.521 00:17:16.521 ' 00:17:16.521 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:16.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:16.521 --rc genhtml_branch_coverage=1 00:17:16.521 --rc genhtml_function_coverage=1 00:17:16.521 --rc genhtml_legend=1 00:17:16.521 --rc geninfo_all_blocks=1 00:17:16.521 --rc geninfo_unexecuted_blocks=1 00:17:16.521 00:17:16.521 ' 00:17:16.521 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:16.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:16.521 --rc genhtml_branch_coverage=1 00:17:16.521 --rc genhtml_function_coverage=1 00:17:16.521 --rc genhtml_legend=1 00:17:16.521 --rc geninfo_all_blocks=1 00:17:16.521 --rc geninfo_unexecuted_blocks=1 00:17:16.521 00:17:16.521 ' 00:17:16.522 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:16.522 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:17:16.522 10:57:36 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:16.522 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:16.522 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:16.522 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:16.522 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:16.522 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:16.522 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:16.522 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:16.522 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:16.522 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:16.522 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:16.522 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:16.522 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:16.522 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:16.522 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:16.522 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:16.522 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:16.522 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:17:16.522 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:16.522 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:16.522 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:16.522 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:16.522 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:16.522 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:16.522 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:17:16.522 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:16.522 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:17:16.522 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:16.522 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:16.522 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:16.522 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:16.522 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:16.522 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:16.522 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:16.522 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:16.522 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:16.522 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:16.522 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:17:16.522 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:16.522 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:17:16.522 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:17:16.522 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:17:16.522 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:17:16.522 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:17:16.522 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:16.522 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # prepare_net_devs 00:17:16.522 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@436 -- # local -g is_hw=no 00:17:16.522 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # remove_spdk_ns 00:17:16.522 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:16.522 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:16.522 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:16.522 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:17:16.522 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:17:16.522 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:17:16.522 10:57:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:24.654 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:24.654 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:17:24.654 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:24.654 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:24.654 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:24.654 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:24.654 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:24.654 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:17:24.654 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:24.654 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:17:24.654 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:17:24.654 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:17:24.654 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:17:24.654 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:17:24.654 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:17:24.654 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:24.654 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:24.654 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:24.654 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:24.654 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:24.654 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:24.654 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:24.654 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:24.654 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:24.654 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:24.654 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:24.654 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:24.654 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:24.654 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:24.654 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:24.654 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:24.654 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:24.655 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:24.655 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:24.655 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:24.655 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:24.655 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:24.655 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:24.655 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:24.655 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:24.655 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:24.655 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:24.655 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:24.655 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:24.655 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:24.655 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:17:24.655 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:24.655 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:24.655 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:24.655 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:24.655 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:24.655 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:24.655 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:17:24.655 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:24.655 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:17:24.655 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:24.655 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ up == up ]] 00:17:24.655 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:17:24.655 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:24.655 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:24.655 Found net devices under 0000:31:00.0: cvl_0_0 00:17:24.655 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:17:24.655 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:17:24.655 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:24.655 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:17:24.655 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:24.655 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ up == up ]] 00:17:24.655 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:17:24.655 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:24.655 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:24.655 Found net devices under 0000:31:00.1: cvl_0_1 00:17:24.655 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:17:24.655 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:17:24.655 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # is_hw=yes 00:17:24.655 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:17:24.655 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:17:24.655 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:17:24.655 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:24.655 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:24.655 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:24.655 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:24.655 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:24.655 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:24.655 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:24.655 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:24.655 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:24.655 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:24.655 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:24.655 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:24.655 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:24.655 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:24.655 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:24.655 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:24.655 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:24.655 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:24.655 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:24.655 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:24.655 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:24.655 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:24.655 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:24.655 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:24.655 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.525 ms 00:17:24.655 00:17:24.655 --- 10.0.0.2 ping statistics --- 00:17:24.655 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:24.655 rtt min/avg/max/mdev = 0.525/0.525/0.525/0.000 ms 00:17:24.655 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:24.655 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:24.655 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.317 ms 00:17:24.655 00:17:24.655 --- 10.0.0.1 ping statistics --- 00:17:24.655 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:24.655 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:17:24.655 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:24.655 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # return 0 00:17:24.655 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:17:24.655 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:24.655 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:17:24.655 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:17:24.655 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:24.655 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:17:24.655 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:17:24.655 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:17:24.655 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:17:24.655 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:24.655 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:24.655 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # nvmfpid=1805792 00:17:24.655 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # waitforlisten 1805792 00:17:24.655 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:24.655 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 1805792 ']' 00:17:24.655 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:24.655 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:24.655 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:24.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:24.655 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:24.655 10:57:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:24.655 [2024-10-09 10:57:43.995387] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 
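Before the target app starts, nvmf_tcp_init has stitched the two e810 ports into a loopback topology: the target-side port is moved into its own network namespace so initiator and target traffic genuinely crosses the kernel TCP stack. Condensed from the commands traced above (the names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addressing are exactly as shown; this is a sketch, not the full helper):

    ip netns add cvl_0_0_ns_spdk                        # private namespace for the target
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target-side port moves into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator IP (host side)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:...'            # full comment text shown in the trace
    ping -c 1 10.0.0.2                                  # host -> target sanity check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> host sanity check

The SPDK_NVMF comment on the iptables rule is what lets the iptr cleanup seen at nvmf_rpc teardown (iptables-save | grep -v SPDK_NVMF | iptables-restore) strip exactly the rules the test inserted and nothing else.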
00:17:24.655 [2024-10-09 10:57:43.995450] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:24.655 [2024-10-09 10:57:44.137792] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:24.655 [2024-10-09 10:57:44.169738] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:24.655 [2024-10-09 10:57:44.187535] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:24.655 [2024-10-09 10:57:44.187565] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:24.655 [2024-10-09 10:57:44.187573] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:24.655 [2024-10-09 10:57:44.187580] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:24.655 [2024-10-09 10:57:44.187586] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:24.655 [2024-10-09 10:57:44.189078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:24.655 [2024-10-09 10:57:44.189191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:24.655 [2024-10-09 10:57:44.189345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:24.655 [2024-10-09 10:57:44.189346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:24.915 10:57:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:24.915 10:57:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0 00:17:24.915 10:57:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:17:24.915 10:57:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:24.915 10:57:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:24.915 10:57:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:24.915 10:57:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:24.915 10:57:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode12966 00:17:25.175 [2024-10-09 10:57:45.010888] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:17:25.175 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:17:25.175 { 00:17:25.175 "nqn": "nqn.2016-06.io.spdk:cnode12966", 00:17:25.175 "tgt_name": "foobar", 00:17:25.175 "method": "nvmf_create_subsystem", 00:17:25.175 "req_id": 1 00:17:25.175 } 00:17:25.175 Got JSON-RPC error response 00:17:25.175 response: 00:17:25.175 { 00:17:25.175 "code": -32603, 00:17:25.175 "message": "Unable to find target foobar" 00:17:25.175 }' 00:17:25.175 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:17:25.175 { 00:17:25.175 "nqn": 
"nqn.2016-06.io.spdk:cnode12966", 00:17:25.175 "tgt_name": "foobar", 00:17:25.175 "method": "nvmf_create_subsystem", 00:17:25.175 "req_id": 1 00:17:25.175 } 00:17:25.175 Got JSON-RPC error response 00:17:25.175 response: 00:17:25.175 { 00:17:25.175 "code": -32603, 00:17:25.175 "message": "Unable to find target foobar" 00:17:25.175 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:17:25.175 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:17:25.175 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode15498 00:17:25.436 [2024-10-09 10:57:45.199068] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15498: invalid serial number 'SPDKISFASTANDAWESOME' 00:17:25.436 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:17:25.436 { 00:17:25.436 "nqn": "nqn.2016-06.io.spdk:cnode15498", 00:17:25.436 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:17:25.436 "method": "nvmf_create_subsystem", 00:17:25.436 "req_id": 1 00:17:25.436 } 00:17:25.436 Got JSON-RPC error response 00:17:25.436 response: 00:17:25.436 { 00:17:25.436 "code": -32602, 00:17:25.436 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:17:25.436 }' 00:17:25.436 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:17:25.436 { 00:17:25.436 "nqn": "nqn.2016-06.io.spdk:cnode15498", 00:17:25.436 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:17:25.436 "method": "nvmf_create_subsystem", 00:17:25.436 "req_id": 1 00:17:25.436 } 00:17:25.436 Got JSON-RPC error response 00:17:25.436 response: 00:17:25.436 { 00:17:25.436 "code": -32602, 00:17:25.436 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:17:25.436 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:25.436 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:17:25.436 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode16462 00:17:25.436 [2024-10-09 10:57:45.391220] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16462: invalid model number 'SPDK_Controller' 00:17:25.436 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:17:25.436 { 00:17:25.436 "nqn": "nqn.2016-06.io.spdk:cnode16462", 00:17:25.436 "model_number": "SPDK_Controller\u001f", 00:17:25.436 "method": "nvmf_create_subsystem", 00:17:25.436 "req_id": 1 00:17:25.436 } 00:17:25.436 Got JSON-RPC error response 00:17:25.436 response: 00:17:25.436 { 00:17:25.436 "code": -32602, 00:17:25.436 "message": "Invalid MN SPDK_Controller\u001f" 00:17:25.436 }' 00:17:25.436 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:17:25.436 { 00:17:25.436 "nqn": "nqn.2016-06.io.spdk:cnode16462", 00:17:25.436 "model_number": "SPDK_Controller\u001f", 00:17:25.436 "method": "nvmf_create_subsystem", 00:17:25.436 "req_id": 1 00:17:25.436 } 00:17:25.436 Got JSON-RPC error response 00:17:25.436 response: 00:17:25.436 { 00:17:25.436 "code": -32602, 00:17:25.436 "message": "Invalid MN SPDK_Controller\u001f" 00:17:25.436 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:17:25.436 10:57:45 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:17:25.436 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:17:25.436 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:25.436 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:17:25.436 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:25.436 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:25.436 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:25.436 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:17:25.436 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:17:25.436 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:17:25.436 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:25.436 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:25.698 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:17:25.698 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:17:25.698 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:17:25.698 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:25.698 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:25.698 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:17:25.698 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:17:25.698 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:17:25.698 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:25.698 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:25.698 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:17:25.698 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:17:25.698 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:17:25.698 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:25.698 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:25.698 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:17:25.698 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:17:25.698 10:57:45 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:17:25.698 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:25.698 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:25.698 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:17:25.698 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:17:25.698 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:17:25.698 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:25.698 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:25.698 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:17:25.698 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:17:25.698 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:17:25.698 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:25.698 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:25.698 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:17:25.698 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:17:25.698 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:17:25.698 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:25.698 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:25.698 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:17:25.698 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:17:25.698 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:17:25.698 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:25.698 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:25.698 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:17:25.698 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:17:25.698 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:17:25.698 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:25.698 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:25.698 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:17:25.698 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:17:25.698 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:17:25.698 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:25.698 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:25.698 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:17:25.698 10:57:45 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:17:25.698 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:17:25.698 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:25.698 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:25.698 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:17:25.698 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:17:25.698 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:17:25.698 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:25.698 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:25.698 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:17:25.698 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:17:25.698 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:17:25.698 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:25.698 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:25.698 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:17:25.698 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:17:25.698 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:17:25.698 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:25.698 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:25.698 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:17:25.698 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:17:25.698 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:17:25.698 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:25.698 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:25.698 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:17:25.698 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:17:25.698 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:17:25.698 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:25.698 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:25.698 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:17:25.698 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:17:25.698 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:17:25.698 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:25.698 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:25.698 10:57:45 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:17:25.698 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:17:25.698 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:17:25.698 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:25.698 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:25.698 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:17:25.698 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:17:25.698 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:17:25.698 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:25.698 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:25.698 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:17:25.698 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:17:25.698 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:17:25.698 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:25.698 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:25.698 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ J == \- ]] 00:17:25.698 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'Jhoibx4tEi'\''>MaJ[@MiY;' 00:17:25.699 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'Jhoibx4tEi'\''>MaJ[@MiY;' nqn.2016-06.io.spdk:cnode2209 00:17:25.959 [2024-10-09 10:57:45.747559] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2209: invalid serial number 'Jhoibx4tEi'>MaJ[@MiY;' 00:17:25.959 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:17:25.959 { 00:17:25.959 "nqn": "nqn.2016-06.io.spdk:cnode2209", 00:17:25.959 "serial_number": "Jhoibx4tEi'\''>MaJ[@MiY;", 00:17:25.959 "method": "nvmf_create_subsystem", 00:17:25.959 "req_id": 1 00:17:25.959 } 00:17:25.959 Got JSON-RPC error response 00:17:25.959 response: 00:17:25.959 { 00:17:25.959 "code": -32602, 00:17:25.959 "message": "Invalid SN Jhoibx4tEi'\''>MaJ[@MiY;" 00:17:25.959 }' 00:17:25.959 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:17:25.959 { 00:17:25.959 "nqn": "nqn.2016-06.io.spdk:cnode2209", 00:17:25.959 "serial_number": "Jhoibx4tEi'>MaJ[@MiY;", 00:17:25.959 "method": "nvmf_create_subsystem", 00:17:25.959 "req_id": 1 00:17:25.959 } 00:17:25.959 Got JSON-RPC error response 00:17:25.959 response: 00:17:25.959 { 00:17:25.959 "code": -32602, 00:17:25.959 "message": "Invalid SN Jhoibx4tEi'>MaJ[@MiY;" 00:17:25.959 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:25.959 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:17:25.959 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:17:25.959 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' 
'35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:25.960 10:57:45 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
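The dozens of printf/echo steps traced here are a single helper at work. A minimal sketch of that pattern, assuming a helper of the same shape as the gen_random_s calls in this trace (not the script verbatim):

    # Build a random string of printable ASCII, one character per loop pass,
    # mirroring the chars / printf %x / echo -e steps in the surrounding trace.
    gen_random_s() {
        local length=$1 ll string=
        local chars=({32..127})                        # decimal code points, as in chars=('32' '33' ...)
        for ((ll = 0; ll < length; ll++)); do
            local cp=${chars[RANDOM % ${#chars[@]}]}
            string+=$(echo -e "\x$(printf %x "$cp")")  # e.g. printf %x 105 -> 69, and \x69 -> 'i'
        done
        echo "$string"
    }

The [[ J == \- ]] style check after each generated string is a leading-dash guard, presumably so the result cannot be mistaken for a command-line option when handed to rpc.py.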
00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x7d' 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:25.960 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:17:26.222 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:17:26.222 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:17:26.222 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:26.222 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:26.222 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:17:26.222 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:17:26.222 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:17:26.222 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:26.222 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:26.222 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:17:26.222 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:17:26.222 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:17:26.222 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:26.222 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:26.222 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
printf %x 52 00:17:26.222 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:17:26.222 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:17:26.222 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:26.222 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:26.222 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:17:26.222 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:17:26.222 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:17:26.222 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:26.222 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:26.222 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:17:26.222 10:57:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:17:26.222 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:17:26.222 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:26.222 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:26.222 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:17:26.222 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:17:26.222 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:17:26.222 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:26.222 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:26.222 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:17:26.222 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:17:26.222 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:17:26.222 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:26.222 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:26.222 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:17:26.222 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:17:26.222 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:17:26.222 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:26.222 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:26.222 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:17:26.222 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:17:26.222 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:17:26.222 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:26.222 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll 
< length )) 00:17:26.222 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:17:26.222 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:17:26.222 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:17:26.222 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:26.222 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:26.222 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:17:26.222 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:17:26.222 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:17:26.222 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:26.222 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:26.222 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:17:26.222 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:17:26.222 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:17:26.222 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:26.223 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:26.223 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:17:26.223 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:17:26.223 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:17:26.223 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:26.223 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:26.223 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:17:26.223 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:17:26.223 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:17:26.223 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:26.223 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:26.223 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:17:26.223 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:17:26.223 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:17:26.223 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:26.223 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:26.223 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:17:26.223 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:17:26.223 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:17:26.223 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( 
ll++ )) 00:17:26.223 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:26.223 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:17:26.223 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:17:26.223 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:17:26.223 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:26.223 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:26.223 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:17:26.223 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:17:26.223 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:17:26.223 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:26.223 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:26.223 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ P == \- ]] 00:17:26.223 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'Px\k+J `fWY.!B\|W_}htm}jE4Y}5Ovb?=n7GSy$X' 00:17:26.223 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'Px\k+J `fWY.!B\|W_}htm}jE4Y}5Ovb?=n7GSy$X' nqn.2016-06.io.spdk:cnode12988 00:17:26.483 [2024-10-09 10:57:46.260018] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12988: invalid model number 'Px\k+J `fWY.!B\|W_}htm}jE4Y}5Ovb?=n7GSy$X' 00:17:26.483 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:17:26.483 { 00:17:26.483 "nqn": "nqn.2016-06.io.spdk:cnode12988", 00:17:26.483 "model_number": "Px\\k+J `fWY.!B\\|W_}htm}jE4Y}5Ovb?=n7GSy$X", 00:17:26.483 "method": "nvmf_create_subsystem", 00:17:26.483 "req_id": 1 00:17:26.483 } 00:17:26.483 Got JSON-RPC error response 00:17:26.483 response: 00:17:26.483 { 00:17:26.483 "code": -32602, 00:17:26.483 "message": "Invalid MN Px\\k+J `fWY.!B\\|W_}htm}jE4Y}5Ovb?=n7GSy$X" 00:17:26.483 }' 00:17:26.483 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:17:26.483 { 00:17:26.483 "nqn": "nqn.2016-06.io.spdk:cnode12988", 00:17:26.483 "model_number": "Px\\k+J `fWY.!B\\|W_}htm}jE4Y}5Ovb?=n7GSy$X", 00:17:26.483 "method": "nvmf_create_subsystem", 00:17:26.483 "req_id": 1 00:17:26.483 } 00:17:26.483 Got JSON-RPC error response 00:17:26.483 response: 00:17:26.483 { 00:17:26.483 "code": -32602, 00:17:26.483 "message": "Invalid MN Px\\k+J `fWY.!B\\|W_}htm}jE4Y}5Ovb?=n7GSy$X" 00:17:26.483 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:17:26.483 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:17:26.483 [2024-10-09 10:57:46.448280] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:26.483 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:17:26.743 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:17:26.743 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:17:26.743 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:17:26.743 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:17:26.743 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:17:27.003 [2024-10-09 10:57:46.830083] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:17:27.003 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:17:27.003 { 00:17:27.003 "nqn": "nqn.2016-06.io.spdk:cnode", 00:17:27.003 "listen_address": { 00:17:27.003 "trtype": "tcp", 00:17:27.003 "traddr": "", 00:17:27.003 "trsvcid": "4421" 00:17:27.003 }, 00:17:27.003 "method": "nvmf_subsystem_remove_listener", 00:17:27.003 "req_id": 1 00:17:27.003 } 00:17:27.003 Got JSON-RPC error response 00:17:27.003 response: 00:17:27.003 { 00:17:27.003 "code": -32602, 00:17:27.003 "message": "Invalid parameters" 00:17:27.003 }' 00:17:27.003 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:17:27.003 { 00:17:27.003 "nqn": "nqn.2016-06.io.spdk:cnode", 00:17:27.003 "listen_address": { 00:17:27.003 "trtype": "tcp", 00:17:27.003 "traddr": "", 00:17:27.003 "trsvcid": "4421" 00:17:27.003 }, 00:17:27.003 "method": "nvmf_subsystem_remove_listener", 00:17:27.003 "req_id": 1 00:17:27.003 } 00:17:27.003 Got JSON-RPC error response 00:17:27.003 response: 00:17:27.003 { 00:17:27.003 "code": -32602, 00:17:27.003 "message": "Invalid parameters" 00:17:27.003 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:17:27.003 10:57:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4925 -i 0 00:17:27.263 [2024-10-09 10:57:47.014195] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4925: invalid cntlid range [0-65519] 00:17:27.263 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:17:27.263 { 00:17:27.263 "nqn": "nqn.2016-06.io.spdk:cnode4925", 00:17:27.263 "min_cntlid": 0, 00:17:27.263 "method": "nvmf_create_subsystem", 00:17:27.263 "req_id": 1 00:17:27.263 } 00:17:27.263 Got JSON-RPC error response 00:17:27.263 response: 00:17:27.263 { 00:17:27.263 "code": -32602, 00:17:27.263 "message": "Invalid cntlid range [0-65519]" 00:17:27.263 }' 00:17:27.263 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:17:27.263 { 00:17:27.263 "nqn": "nqn.2016-06.io.spdk:cnode4925", 00:17:27.263 "min_cntlid": 0, 00:17:27.263 "method": "nvmf_create_subsystem", 00:17:27.263 "req_id": 1 00:17:27.263 } 00:17:27.263 Got JSON-RPC error response 00:17:27.263 response: 00:17:27.263 { 00:17:27.263 "code": -32602, 00:17:27.263 "message": "Invalid cntlid range [0-65519]" 00:17:27.263 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:27.263 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode22010 -i 65520 00:17:27.263 [2024-10-09 10:57:47.198343] nvmf_rpc.c: 
434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22010: invalid cntlid range [65520-65519] 00:17:27.263 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:17:27.263 { 00:17:27.263 "nqn": "nqn.2016-06.io.spdk:cnode22010", 00:17:27.263 "min_cntlid": 65520, 00:17:27.263 "method": "nvmf_create_subsystem", 00:17:27.263 "req_id": 1 00:17:27.263 } 00:17:27.263 Got JSON-RPC error response 00:17:27.263 response: 00:17:27.263 { 00:17:27.263 "code": -32602, 00:17:27.263 "message": "Invalid cntlid range [65520-65519]" 00:17:27.263 }' 00:17:27.263 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:17:27.263 { 00:17:27.263 "nqn": "nqn.2016-06.io.spdk:cnode22010", 00:17:27.263 "min_cntlid": 65520, 00:17:27.263 "method": "nvmf_create_subsystem", 00:17:27.263 "req_id": 1 00:17:27.263 } 00:17:27.263 Got JSON-RPC error response 00:17:27.263 response: 00:17:27.263 { 00:17:27.263 "code": -32602, 00:17:27.263 "message": "Invalid cntlid range [65520-65519]" 00:17:27.263 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:27.263 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode27036 -I 0 00:17:27.523 [2024-10-09 10:57:47.386521] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27036: invalid cntlid range [1-0] 00:17:27.523 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:17:27.523 { 00:17:27.523 "nqn": "nqn.2016-06.io.spdk:cnode27036", 00:17:27.523 "max_cntlid": 0, 00:17:27.523 "method": "nvmf_create_subsystem", 00:17:27.523 "req_id": 1 00:17:27.523 } 00:17:27.523 Got JSON-RPC error response 00:17:27.523 response: 00:17:27.523 { 00:17:27.523 "code": -32602, 00:17:27.523 "message": "Invalid cntlid range [1-0]" 00:17:27.523 }' 00:17:27.523 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:17:27.523 { 00:17:27.523 "nqn": "nqn.2016-06.io.spdk:cnode27036", 00:17:27.523 "max_cntlid": 0, 00:17:27.523 "method": "nvmf_create_subsystem", 00:17:27.523 "req_id": 1 00:17:27.523 } 00:17:27.523 Got JSON-RPC error response 00:17:27.523 response: 00:17:27.523 { 00:17:27.523 "code": -32602, 00:17:27.523 "message": "Invalid cntlid range [1-0]" 00:17:27.523 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:27.523 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode17817 -I 65520 00:17:27.783 [2024-10-09 10:57:47.574674] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17817: invalid cntlid range [1-65520] 00:17:27.783 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:17:27.783 { 00:17:27.783 "nqn": "nqn.2016-06.io.spdk:cnode17817", 00:17:27.783 "max_cntlid": 65520, 00:17:27.783 "method": "nvmf_create_subsystem", 00:17:27.783 "req_id": 1 00:17:27.783 } 00:17:27.783 Got JSON-RPC error response 00:17:27.783 response: 00:17:27.783 { 00:17:27.783 "code": -32602, 00:17:27.783 "message": "Invalid cntlid range [1-65520]" 00:17:27.783 }' 00:17:27.783 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:17:27.783 { 00:17:27.783 "nqn": "nqn.2016-06.io.spdk:cnode17817", 00:17:27.783 "max_cntlid": 65520, 
00:17:27.783 "method": "nvmf_create_subsystem", 00:17:27.783 "req_id": 1 00:17:27.783 } 00:17:27.783 Got JSON-RPC error response 00:17:27.783 response: 00:17:27.783 { 00:17:27.783 "code": -32602, 00:17:27.783 "message": "Invalid cntlid range [1-65520]" 00:17:27.783 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:27.783 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode25809 -i 6 -I 5 00:17:27.783 [2024-10-09 10:57:47.762814] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25809: invalid cntlid range [6-5] 00:17:28.043 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:17:28.043 { 00:17:28.043 "nqn": "nqn.2016-06.io.spdk:cnode25809", 00:17:28.043 "min_cntlid": 6, 00:17:28.043 "max_cntlid": 5, 00:17:28.043 "method": "nvmf_create_subsystem", 00:17:28.043 "req_id": 1 00:17:28.043 } 00:17:28.043 Got JSON-RPC error response 00:17:28.043 response: 00:17:28.043 { 00:17:28.043 "code": -32602, 00:17:28.043 "message": "Invalid cntlid range [6-5]" 00:17:28.043 }' 00:17:28.043 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:17:28.043 { 00:17:28.043 "nqn": "nqn.2016-06.io.spdk:cnode25809", 00:17:28.043 "min_cntlid": 6, 00:17:28.043 "max_cntlid": 5, 00:17:28.043 "method": "nvmf_create_subsystem", 00:17:28.043 "req_id": 1 00:17:28.043 } 00:17:28.043 Got JSON-RPC error response 00:17:28.043 response: 00:17:28.043 { 00:17:28.043 "code": -32602, 00:17:28.043 "message": "Invalid cntlid range [6-5]" 00:17:28.043 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:28.043 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:17:28.043 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:17:28.043 { 00:17:28.043 "name": "foobar", 00:17:28.043 "method": "nvmf_delete_target", 00:17:28.043 "req_id": 1 00:17:28.043 } 00:17:28.043 Got JSON-RPC error response 00:17:28.043 response: 00:17:28.043 { 00:17:28.043 "code": -32602, 00:17:28.043 "message": "The specified target doesn'\''t exist, cannot delete it." 00:17:28.043 }' 00:17:28.043 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:17:28.043 { 00:17:28.043 "name": "foobar", 00:17:28.043 "method": "nvmf_delete_target", 00:17:28.043 "req_id": 1 00:17:28.043 } 00:17:28.043 Got JSON-RPC error response 00:17:28.043 response: 00:17:28.043 { 00:17:28.043 "code": -32602, 00:17:28.043 "message": "The specified target doesn't exist, cannot delete it." 
00:17:28.043 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:17:28.043 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:17:28.043 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:17:28.043 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@514 -- # nvmfcleanup 00:17:28.043 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:17:28.043 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:28.043 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:17:28.043 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:28.043 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:28.043 rmmod nvme_tcp 00:17:28.043 rmmod nvme_fabrics 00:17:28.043 rmmod nvme_keyring 00:17:28.043 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:28.043 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:17:28.043 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:17:28.043 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@515 -- # '[' -n 1805792 ']' 00:17:28.043 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # killprocess 1805792 00:17:28.043 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@950 -- # '[' -z 1805792 ']' 00:17:28.043 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # kill -0 1805792 00:17:28.043 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # uname 00:17:28.043 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:28.043 10:57:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1805792 00:17:28.303 10:57:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:28.303 10:57:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:28.303 10:57:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1805792' 00:17:28.303 killing process with pid 1805792 00:17:28.303 10:57:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@969 -- # kill 1805792 00:17:28.303 10:57:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@974 -- # wait 1805792 00:17:28.303 10:57:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:17:28.303 10:57:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:17:28.303 10:57:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:17:28.303 10:57:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:17:28.303 10:57:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@789 -- # iptables-save 00:17:28.303 10:57:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:17:28.303 10:57:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@789 
-- # iptables-restore 00:17:28.303 10:57:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:28.303 10:57:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:28.303 10:57:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:28.303 10:57:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:28.303 10:57:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:30.845 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:30.845 00:17:30.845 real 0m14.151s 00:17:30.845 user 0m20.534s 00:17:30.845 sys 0m6.650s 00:17:30.845 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:30.845 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:30.845 ************************************ 00:17:30.845 END TEST nvmf_invalid 00:17:30.845 ************************************ 00:17:30.845 10:57:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:30.845 10:57:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:30.845 10:57:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:30.845 10:57:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:30.845 ************************************ 00:17:30.845 START TEST nvmf_connect_stress 00:17:30.845 ************************************ 00:17:30.845 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:30.845 * Looking for test storage... 
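Every rejected RPC in the nvmf_invalid run above follows the same expected-failure pattern; condensed to a sketch (rpc_py standing in for the scripts/rpc.py invocation seen in the trace; the exact variable handling in invalid.sh may differ):

    # Capture the JSON-RPC error from a deliberately invalid request, then pass
    # only if the target refused it for the documented reason.
    out=$(rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode25809 -i 6 -I 5 2>&1) || true
    [[ $out == *"Invalid cntlid range"* ]]    # min_cntlid 6 > max_cntlid 5 must be rejected

The backslash-heavy comparisons in the trace (== *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e*) are just xtrace's escaped rendering of that quoted glob pattern.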
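The teardown just traced ('killing process with pid 1805792') is autotest_common.sh's killprocess; a sketch of the sequence visible in the trace, not the function verbatim:

    # Verify the pid is alive and is not the sudo wrapper before killing it,
    # then reap it so its exit status is collected.
    killprocess() {
        local pid=$1
        kill -0 "$pid"                            # fails if the process is already gone
        local name
        name=$(ps --no-headers -o comm= "$pid")   # Linux branch, per the uname check in the trace
        [[ $name != sudo ]]                       # never kill the sudo parent by mistake
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                       # reap; a signal exit status is expected here
    }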
00:17:30.845 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:30.845 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:30.845 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:17:30.845 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:30.845 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:30.845 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:30.845 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:30.845 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:30.845 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:17:30.845 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:17:30.845 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:17:30.845 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:17:30.845 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:17:30.845 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:17:30.845 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:17:30.845 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:30.845 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:17:30.845 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:17:30.845 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:30.845 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:30.845 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:17:30.845 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:17:30.845 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:30.845 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:17:30.845 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:17:30.845 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:17:30.845 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:17:30.845 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:30.845 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:17:30.845 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:17:30.845 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:30.845 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:30.845 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:17:30.845 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:30.845 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:30.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:30.845 --rc genhtml_branch_coverage=1 00:17:30.845 --rc genhtml_function_coverage=1 00:17:30.845 --rc genhtml_legend=1 00:17:30.845 --rc geninfo_all_blocks=1 00:17:30.845 --rc geninfo_unexecuted_blocks=1 00:17:30.845 00:17:30.845 ' 00:17:30.845 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:30.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:30.845 --rc genhtml_branch_coverage=1 00:17:30.845 --rc genhtml_function_coverage=1 00:17:30.845 --rc genhtml_legend=1 00:17:30.845 --rc geninfo_all_blocks=1 00:17:30.845 --rc geninfo_unexecuted_blocks=1 00:17:30.845 00:17:30.845 ' 00:17:30.845 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:30.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:30.845 --rc genhtml_branch_coverage=1 00:17:30.845 --rc genhtml_function_coverage=1 00:17:30.845 --rc genhtml_legend=1 00:17:30.845 --rc geninfo_all_blocks=1 00:17:30.845 --rc geninfo_unexecuted_blocks=1 00:17:30.845 00:17:30.845 ' 00:17:30.845 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:30.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:30.845 --rc genhtml_branch_coverage=1 00:17:30.845 --rc genhtml_function_coverage=1 00:17:30.845 --rc genhtml_legend=1 00:17:30.845 --rc geninfo_all_blocks=1 00:17:30.845 --rc geninfo_unexecuted_blocks=1 00:17:30.845 00:17:30.845 ' 00:17:30.845 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:30.846 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:17:30.846 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:30.846 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:30.846 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:30.846 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:30.846 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:30.846 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:30.846 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:30.846 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:30.846 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:30.846 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:30.846 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:30.846 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:30.846 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:30.846 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:30.846 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:30.846 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:30.846 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:30.846 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:17:30.846 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:30.846 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:30.846 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:30.846 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.846 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.846 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.846 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:17:30.846 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.846 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:17:30.846 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:30.846 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:30.846 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:30.846 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:30.846 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:30.846 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:17:30.846 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:30.846 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:30.846 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:30.846 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:30.846 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:17:30.846 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:17:30.846 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:30.846 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:17:30.846 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:17:30.846 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:17:30.846 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:30.846 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:30.846 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:30.846 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:17:30.846 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:17:30.846 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:17:30.846 10:57:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:38.981 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:38.981 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:17:38.981 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:38.981 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:38.981 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:38.981 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:38.981 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:38.981 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:17:38.981 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:38.981 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:17:38.981 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:17:38.981 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:17:38.981 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:17:38.997 10:57:57 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:17:38.997 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:17:38.997 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:38.997 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:38.997 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:38.997 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:38.997 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:38.997 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:38.997 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:38.997 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:38.997 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:38.997 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:38.997 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:38.997 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:38.997 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:38.997 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:38.997 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:38.997 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:38.997 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:38.997 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:38.997 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:38.997 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:38.997 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:38.997 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:38.997 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:38.997 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:38.997 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:38.997 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:38.997 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:38.997 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:38.997 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:38.997 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:38.997 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:38.997 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:38.997 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:38.997 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:38.997 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:38.997 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:38.997 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:38.997 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:17:38.997 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:38.997 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:17:38.997 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:38.997 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:17:38.997 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:17:38.997 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:38.997 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:38.997 Found net devices under 0000:31:00.0: cvl_0_0 00:17:38.997 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:17:38.997 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:17:38.997 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:38.997 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:17:38.997 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:38.997 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:17:38.997 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:17:38.997 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:38.997 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:38.997 Found net devices under 0000:31:00.1: cvl_0_1 00:17:38.997 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 
-- # net_devs+=("${pci_net_devs[@]}") 00:17:38.997 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:17:38.997 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:17:38.997 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:17:38.997 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:17:38.997 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:17:38.997 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:38.997 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:38.997 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:38.997 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:38.997 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:38.997 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:38.997 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:38.997 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:38.997 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:38.997 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:38.997 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:38.997 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:38.997 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:38.997 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:38.997 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:38.997 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:38.997 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:38.997 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:38.997 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:38.997 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:38.997 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:38.997 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:38.997 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:38.997 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:38.997 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.666 ms 00:17:38.997 00:17:38.997 --- 10.0.0.2 ping statistics --- 00:17:38.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:38.997 rtt min/avg/max/mdev = 0.666/0.666/0.666/0.000 ms 00:17:38.997 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:38.997 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:38.997 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.317 ms 00:17:38.997 00:17:38.997 --- 10.0.0.1 ping statistics --- 00:17:38.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:38.997 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:17:38.997 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:38.997 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # return 0 00:17:38.997 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:17:38.997 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:38.997 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:17:38.997 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:17:38.997 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:38.997 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:17:38.997 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:17:38.997 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:17:38.997 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:17:38.997 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:38.997 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:38.998 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # nvmfpid=1811032 00:17:38.998 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # waitforlisten 1811032 00:17:38.998 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:17:38.998 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 1811032 ']' 00:17:38.998 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:38.998 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:38.998 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:17:38.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:38.998 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:38.998 10:57:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:38.998 [2024-10-09 10:57:58.047487] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:17:38.998 [2024-10-09 10:57:58.047559] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:38.998 [2024-10-09 10:57:58.189988] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:38.998 [2024-10-09 10:57:58.239224] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:38.998 [2024-10-09 10:57:58.258064] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:38.998 [2024-10-09 10:57:58.258098] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:38.998 [2024-10-09 10:57:58.258106] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:38.998 [2024-10-09 10:57:58.258113] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:38.998 [2024-10-09 10:57:58.258119] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:38.998 [2024-10-09 10:57:58.259486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:38.998 [2024-10-09 10:57:58.259643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:38.998 [2024-10-09 10:57:58.259733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:38.998 10:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:38.998 10:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0 00:17:38.998 10:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:17:38.998 10:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:38.998 10:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:38.998 10:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:38.998 10:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:38.998 10:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.998 10:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:38.998 [2024-10-09 10:57:58.905192] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:38.998 10:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.998 10:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd 
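The nvmf_tcp_init and nvmfappstart sequence traced just above is ordinary iproute2 plumbing followed by launching the target inside the fresh namespace. A condensed, root-only replay with the names and addresses from this run; cvl_0_0/cvl_0_1 are this machine's ice ports, and the relative paths assume an SPDK checkout as the working directory:

# Condensed replay of nvmf_tcp_init + nvmfappstart as traced above.
NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                       # target port into the ns
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1
# Start the target on cores 1-3 (-m 0xE) inside the namespace, then poll the
# RPC socket until it answers, which is roughly what waitforlisten does.
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done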
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:38.998 10:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.998 10:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:38.998 10:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.998 10:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:38.998 10:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.998 10:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:38.998 [2024-10-09 10:57:58.929559] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:38.998 10:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.998 10:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:38.998 10:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.998 10:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:38.998 NULL1 00:17:38.998 10:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.998 10:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1811378 00:17:38.998 10:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:38.998 10:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:17:38.998 10:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:38.998 10:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:17:38.998 10:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:38.998 10:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:38.998 10:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:38.998 10:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:38.998 10:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:38.998 10:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:38.998 10:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:38.998 10:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:38.998 10:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
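Every piece of target state in this test is created over the RPC socket rather than a config file. The four rpc_cmd calls above translate directly into rpc.py invocations; a sketch assuming the default /var/tmp/spdk.sock and an SPDK tree as the working directory, with the flags copied from the log:

RPC="./scripts/rpc.py"                          # talks to /var/tmp/spdk.sock
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
     -a -s SPDK00000000000001 -m 10             # -a any host, -m max namespaces
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
     -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_null_create NULL1 1000 512            # 1000 MiB, 512-byte blocks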
-- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:38.998 10:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:39.258 10:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:39.258 10:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:39.258 10:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:39.258 10:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:39.258 10:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:39.258 10:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:39.258 10:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:39.258 10:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:39.258 10:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:39.258 10:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:39.258 10:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:39.258 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:39.258 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:39.258 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:39.258 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:39.258 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:39.258 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:39.258 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:39.258 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:39.258 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:39.258 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:39.258 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:39.258 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:39.258 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:39.258 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:39.258 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:39.258 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:39.258 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:39.258 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:39.258 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:39.258 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1811378 00:17:39.258 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:39.258 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.258 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:39.518 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.518 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1811378 00:17:39.518 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:39.518 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.518 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:39.778 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.778 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1811378 00:17:39.778 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:39.778 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.778 10:57:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:40.039 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.039 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1811378 00:17:40.039 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:40.039 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.039 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:40.609 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.609 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1811378 00:17:40.609 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:40.609 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.609 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:40.870 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.870 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1811378 00:17:40.870 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:40.870 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.870 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
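The wall of kill -0 1811378 / rpc_cmd pairs that dominates this stretch is the stress loop itself: as long as the connect_stress binary (PERF_PID) is alive, the script keeps replaying the twenty queued RPCs from rpc.txt against the live target. kill -0 delivers no signal; it only asks whether the PID still exists. Stripped to a skeleton, with paths shortened (PERF_PID was 1811378 in this run):

./connect_stress -c 0x1 -t 10 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' &
PERF_PID=$!
while kill -0 "$PERF_PID"; do       # -0: existence check, no signal delivered
    ./scripts/rpc.py < rpc.txt      # replay the queued namespace churn
done
wait "$PERF_PID"                    # reap it once "No such process" breaks the loop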
common/autotest_common.sh@10 -- # set +x 00:17:41.129 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.129 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1811378 00:17:41.129 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:41.129 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.129 10:58:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:41.389 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.389 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1811378 00:17:41.389 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:41.389 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.389 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:41.649 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.649 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1811378 00:17:41.649 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:41.649 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.649 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:42.219 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.219 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1811378 00:17:42.219 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:42.219 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.219 10:58:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:42.480 10:58:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.480 10:58:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1811378 00:17:42.480 10:58:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:42.480 10:58:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.480 10:58:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:42.741 10:58:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.741 10:58:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1811378 00:17:42.741 10:58:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:42.741 10:58:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.741 10:58:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@10 -- # set +x 00:17:43.001 10:58:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.001 10:58:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1811378 00:17:43.001 10:58:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:43.001 10:58:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.001 10:58:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:43.261 10:58:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.261 10:58:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1811378 00:17:43.261 10:58:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:43.261 10:58:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.261 10:58:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:43.832 10:58:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.832 10:58:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1811378 00:17:43.832 10:58:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:43.832 10:58:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.832 10:58:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:44.092 10:58:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.092 10:58:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1811378 00:17:44.092 10:58:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:44.092 10:58:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.092 10:58:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:44.353 10:58:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.353 10:58:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1811378 00:17:44.353 10:58:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:44.353 10:58:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.353 10:58:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:44.613 10:58:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.613 10:58:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1811378 00:17:44.613 10:58:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:44.613 10:58:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.613 10:58:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@10 -- # set +x 00:17:45.183 10:58:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.183 10:58:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1811378 00:17:45.183 10:58:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:45.183 10:58:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.183 10:58:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:45.443 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.443 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1811378 00:17:45.443 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:45.443 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.443 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:45.703 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.703 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1811378 00:17:45.703 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:45.703 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.703 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:45.964 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.965 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1811378 00:17:45.965 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:45.965 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.965 10:58:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:46.224 10:58:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.224 10:58:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1811378 00:17:46.224 10:58:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:46.224 10:58:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.224 10:58:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:46.793 10:58:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.793 10:58:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1811378 00:17:46.793 10:58:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:46.793 10:58:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.793 10:58:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@10 -- # set +x 00:17:47.053 10:58:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.053 10:58:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1811378 00:17:47.053 10:58:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:47.053 10:58:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.053 10:58:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:47.313 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.313 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1811378 00:17:47.313 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:47.313 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.313 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:47.572 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.572 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1811378 00:17:47.572 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:47.572 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.572 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:47.832 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.832 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1811378 00:17:47.832 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:47.832 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.832 10:58:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:48.401 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.401 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1811378 00:17:48.401 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:48.401 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.402 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:48.662 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.662 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1811378 00:17:48.662 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:48.662 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.662 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@10 -- # set +x 00:17:48.921 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.921 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1811378 00:17:48.921 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:48.922 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.922 10:58:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:49.181 10:58:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.181 10:58:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1811378 00:17:49.181 10:58:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:49.181 10:58:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.181 10:58:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:49.181 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:49.441 10:58:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.442 10:58:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1811378 00:17:49.442 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1811378) - No such process 00:17:49.442 10:58:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1811378 00:17:49.442 10:58:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:49.442 10:58:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:17:49.442 10:58:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:17:49.442 10:58:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@514 -- # nvmfcleanup 00:17:49.442 10:58:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:17:49.442 10:58:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:49.442 10:58:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:17:49.442 10:58:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:49.442 10:58:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:49.702 rmmod nvme_tcp 00:17:49.702 rmmod nvme_fabrics 00:17:49.702 rmmod nvme_keyring 00:17:49.702 10:58:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:49.702 10:58:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:17:49.702 10:58:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:17:49.702 10:58:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@515 -- # '[' -n 1811032 ']' 00:17:49.702 10:58:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # 
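Teardown, which begins just below, mirrors setup: reap the stress process, unload the nvme-tcp modules, kill the target, and sweep the firewall. The sweep works because the setup-side ipts wrapper tagged every rule it inserted with an SPDK_NVMF comment, so cleanup can filter the whole ruleset in one pass. A sketch of that tag-and-sweep idiom, with the comment text simplified relative to what the wrapper records:

TAG=SPDK_NVMF
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment "$TAG: allow nvmf/tcp 4420"       # setup (ipts)
# ... test body runs ...
iptables-save | grep -v "$TAG" | iptables-restore          # teardown (iptr)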
killprocess 1811032 00:17:49.702 10:58:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 1811032 ']' 00:17:49.703 10:58:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 1811032 00:17:49.703 10:58:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # uname 00:17:49.703 10:58:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:49.703 10:58:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1811032 00:17:49.703 10:58:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:49.703 10:58:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:49.703 10:58:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1811032' 00:17:49.703 killing process with pid 1811032 00:17:49.703 10:58:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 1811032 00:17:49.703 10:58:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 1811032 00:17:49.703 10:58:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:17:49.703 10:58:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:17:49.703 10:58:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:17:49.703 10:58:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:17:49.703 10:58:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # iptables-save 00:17:49.703 10:58:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:17:49.703 10:58:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # iptables-restore 00:17:49.703 10:58:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:49.703 10:58:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:49.703 10:58:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:49.703 10:58:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:49.703 10:58:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:52.245 10:58:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:52.245 00:17:52.245 real 0m21.437s 00:17:52.245 user 0m42.768s 00:17:52.245 sys 0m9.109s 00:17:52.245 10:58:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:52.245 10:58:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:52.245 ************************************ 00:17:52.245 END TEST nvmf_connect_stress 00:17:52.245 ************************************ 00:17:52.245 10:58:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:52.245 10:58:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:52.245 10:58:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:52.245 10:58:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:52.245 ************************************ 00:17:52.245 START TEST nvmf_fused_ordering 00:17:52.245 ************************************ 00:17:52.245 10:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:52.245 * Looking for test storage... 00:17:52.245 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:52.245 10:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:52.245 10:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lcov --version 00:17:52.245 10:58:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:52.245 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:52.245 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:52.245 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:52.245 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:52.245 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:17:52.245 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:17:52.245 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:17:52.245 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:17:52.245 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:17:52.245 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:17:52.245 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:17:52.245 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:52.245 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:17:52.245 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:17:52.245 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:52.245 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:52.245 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:17:52.245 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:17:52.245 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:52.245 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:17:52.245 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:17:52.245 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:17:52.245 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:17:52.245 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:52.245 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:17:52.245 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:17:52.245 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:52.245 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:52.245 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:17:52.245 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:52.245 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:52.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.245 --rc genhtml_branch_coverage=1 00:17:52.245 --rc genhtml_function_coverage=1 00:17:52.245 --rc genhtml_legend=1 00:17:52.245 --rc geninfo_all_blocks=1 00:17:52.245 --rc geninfo_unexecuted_blocks=1 00:17:52.245 00:17:52.245 ' 00:17:52.245 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:52.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.245 --rc genhtml_branch_coverage=1 00:17:52.245 --rc genhtml_function_coverage=1 00:17:52.245 --rc genhtml_legend=1 00:17:52.245 --rc geninfo_all_blocks=1 00:17:52.245 --rc geninfo_unexecuted_blocks=1 00:17:52.245 00:17:52.245 ' 00:17:52.245 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:52.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.245 --rc genhtml_branch_coverage=1 00:17:52.245 --rc genhtml_function_coverage=1 00:17:52.245 --rc genhtml_legend=1 00:17:52.245 --rc geninfo_all_blocks=1 00:17:52.245 --rc geninfo_unexecuted_blocks=1 00:17:52.245 00:17:52.245 ' 00:17:52.245 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:52.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.245 --rc genhtml_branch_coverage=1 00:17:52.245 --rc genhtml_function_coverage=1 00:17:52.245 --rc genhtml_legend=1 00:17:52.245 --rc geninfo_all_blocks=1 00:17:52.245 --rc geninfo_unexecuted_blocks=1 00:17:52.245 00:17:52.245 ' 00:17:52.245 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
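The lt 1.15 2 call above is scripts/common.sh deciding whether the installed lcov predates version 2, which selects the usable coverage flags. The comparison is pure bash: split both version strings on ., -, and :, then compare numerically field by field. A self-contained equivalent in the same spirit, valid for numeric fields only:

version_lt() {
    local IFS='.-:'                      # same separators the harness splits on
    local -a a=($1) b=($2)               # e.g. 1.15 -> (1 15)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for ((i = 0; i < n; i++)); do
        local x=${a[i]:-0} y=${b[i]:-0}  # missing fields count as 0
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1                             # equal is not less-than
}
version_lt 1.15 2 && echo "old lcov: keep the --rc branch/function flags"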
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:52.245 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:17:52.245 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:52.245 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:52.245 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:52.245 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:52.245 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:52.245 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:52.245 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:52.245 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:52.245 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:52.245 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:52.245 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:52.245 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:52.246 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:52.246 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:52.246 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:52.246 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:52.246 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:52.246 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:17:52.246 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:52.246 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:52.246 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:52.246 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
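Sourcing test/nvmf/common.sh, as fused_ordering.sh does at @10 above, also mints the initiator identity for the run: nvme gen-hostnqn produces the UUID-based NQN seen in the trace, and the UUID tail doubles as the host ID. Roughly, assuming nvme-cli is installed:

NVME_HOSTNQN=$(nvme gen-hostnqn)     # nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*:}      # keep only the trailing <uuid>
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
echo "initiator will connect as: ${NVME_HOST[*]}"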
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.246 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.246 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.246 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:17:52.246 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.246 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:17:52.246 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:52.246 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:52.246 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:52.246 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:52.246 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:52.246 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
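The gigantic PATH above is an artifact of paths/export.sh prepending the same three toolchain directories every time a nested script sources it; this run did so about seven times. It is harmless, but worth recognizing. A guard that would keep PATH idempotent, shown purely as an illustration and not what export.sh actually does:

path_prepend() {
    case ":$PATH:" in
        *":$1:"*) ;;                 # already present: do nothing
        *) PATH="$1:$PATH" ;;
    esac
}
path_prepend /opt/golangci/1.54.2/bin
path_prepend /opt/protoc/21.7/bin
path_prepend /opt/go/1.21.1/bin
export PATH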
-- # '[' '' -eq 1 ']' 00:17:52.246 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:52.246 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:52.246 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:52.246 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:52.246 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:17:52.246 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:17:52.246 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:52.246 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # prepare_net_devs 00:17:52.246 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@436 -- # local -g is_hw=no 00:17:52.246 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # remove_spdk_ns 00:17:52.246 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:52.246 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:52.246 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:52.246 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:17:52.246 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:17:52.246 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:17:52.246 10:58:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:00.389 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:00.389 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:18:00.389 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:00.389 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:00.389 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:00.389 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:00.389 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:00.389 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:18:00.389 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:00.389 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:18:00.389 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:18:00.389 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:18:00.389 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:18:00.389 10:58:19 
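The "[: : integer expression expected" message above is a real, if harmless, bug at nvmf/common.sh line 33: an unset flag expands to the empty string, and [ '' -eq 1 ] demands integers on both sides of -eq. Reproducing it, plus the two usual fixes:

flag=""                              # unset/empty, as in the trace
[ "$flag" -eq 1 ]                    # -> "[: : integer expression expected"
echo "exit status: $?"               # 2 = test syntax error, not plain false
[ "${flag:-0}" -eq 1 ] || echo "fix 1: default the empty value to 0"
[ "$flag" = 1 ]        || echo "fix 2: string comparison tolerates ''"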
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:18:00.389 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:18:00.389 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:00.389 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:00.389 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:00.389 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:00.389 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:00.389 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:00.389 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:00.389 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:00.389 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:00.389 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:00.389 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:00.389 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:00.389 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:00.389 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:00.389 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:00.389 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:00.389 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:00.389 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:00.389 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:00.389 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:18:00.389 Found 0000:31:00.0 (0x8086 - 0x159b) 00:18:00.389 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:00.389 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:00.389 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:00.389 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:00.389 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:00.389 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:00.389 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:18:00.389 Found 0000:31:00.1 (0x8086 - 0x159b) 00:18:00.389 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:00.389 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:00.389 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:00.389 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:00.389 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:00.389 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:00.389 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:00.389 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:00.389 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:18:00.389 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:00.389 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:18:00.389 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:00.389 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ up == up ]] 00:18:00.389 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:18:00.389 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:00.389 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:18:00.389 Found net devices under 0000:31:00.0: cvl_0_0 00:18:00.389 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:18:00.389 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:18:00.389 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:00.389 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:18:00.389 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:00.389 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ up == up ]] 00:18:00.389 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:18:00.389 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:00.389 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:18:00.389 Found net devices under 0000:31:00.1: cvl_0_1 00:18:00.389 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 
-- # net_devs+=("${pci_net_devs[@]}") 00:18:00.389 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:18:00.389 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # is_hw=yes 00:18:00.389 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:18:00.390 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:18:00.390 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:18:00.390 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:00.390 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:00.390 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:00.390 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:00.390 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:00.390 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:00.390 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:00.390 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:00.390 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:00.390 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:00.390 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:00.390 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:00.390 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:00.390 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:00.390 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:00.390 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:00.390 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:00.390 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:00.390 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:00.390 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:00.390 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:00.390 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:00.390 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:00.390 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:00.390 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.654 ms 00:18:00.390 00:18:00.390 --- 10.0.0.2 ping statistics --- 00:18:00.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:00.390 rtt min/avg/max/mdev = 0.654/0.654/0.654/0.000 ms 00:18:00.390 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:00.390 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:00.390 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.298 ms 00:18:00.390 00:18:00.390 --- 10.0.0.1 ping statistics --- 00:18:00.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:00.390 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:18:00.390 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:00.390 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # return 0 00:18:00.390 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:18:00.390 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:00.390 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:18:00.390 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:18:00.390 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:00.390 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:18:00.390 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:18:00.390 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:18:00.390 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:18:00.390 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:00.390 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:00.390 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # nvmfpid=1817773 00:18:00.390 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # waitforlisten 1817773 00:18:00.390 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:00.390 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 1817773 ']' 00:18:00.390 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:00.390 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:00.390 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:18:00.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:00.390 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:00.390 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:00.390 [2024-10-09 10:58:19.767660] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:18:00.390 [2024-10-09 10:58:19.767719] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:00.390 [2024-10-09 10:58:19.907947] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:18:00.390 [2024-10-09 10:58:19.959930] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:00.390 [2024-10-09 10:58:19.986008] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:00.390 [2024-10-09 10:58:19.986052] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:00.390 [2024-10-09 10:58:19.986061] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:00.390 [2024-10-09 10:58:19.986068] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:00.390 [2024-10-09 10:58:19.986074] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:00.390 [2024-10-09 10:58:19.986851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:00.708 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:00.708 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0 00:18:00.708 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:18:00.708 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:00.708 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:00.708 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:00.708 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:00.708 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.708 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:00.708 [2024-10-09 10:58:20.634847] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:00.708 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.708 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:00.708 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.708 10:58:20 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:00.708 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.708 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:00.708 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.708 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:00.708 [2024-10-09 10:58:20.659053] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:00.708 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.708 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:18:00.708 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.708 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:00.708 NULL1 00:18:00.708 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.708 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:18:00.708 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.708 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:00.708 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.708 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:18:00.708 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.708 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:00.708 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.708 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:18:01.061 [2024-10-09 10:58:20.729107] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:18:01.061 [2024-10-09 10:58:20.729164] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1817830 ] 00:18:01.061 [2024-10-09 10:58:20.864980] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
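At this point the harness has launched nvmf_tgt inside the cvl_0_0_ns_spdk namespace (cvl_0_0 at 10.0.0.2 in the namespace, cvl_0_1 at 10.0.0.1 left on the host) and provisioned it through rpc_cmd. A sketch of the equivalent manual sequence using scripts/rpc.py, which rpc_cmd wraps; paths are relative to the SPDK tree and the default /var/tmp/spdk.sock RPC socket is assumed:

  # Start the target on core mask 0x2 inside the test namespace
  # (the harness does this via nvmfappstart -m 0x2).
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &

  # Provision it once the RPC socket is up.
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
      -a -s SPDK00000000000001 -m 10
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py bdev_null_create NULL1 1000 512   # 1000 MiB backing bdev, 512 B blocks
  ./scripts/rpc.py bdev_wait_for_examine
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

  # Drive the target from the host-side port with the fused-ordering initiator.
  ./test/nvme/fused_ordering/fused_ordering \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The "Attached to ..." line and the fused_ordering(0) through fused_ordering(1023) run that follow are this initiator's per-operation output against the null namespace.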
00:18:01.387 Attached to nqn.2016-06.io.spdk:cnode1 00:18:01.387 Namespace ID: 1 size: 1GB 00:18:01.387 fused_ordering(0) [fused_ordering(1) through fused_ordering(1022) elided: 1,022 further identical per-operation lines, stamped 00:18:01.387 through 00:18:03.050] fused_ordering(1023) 00:18:03.050 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:18:03.050 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:18:03.050 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@514 -- # nvmfcleanup 00:18:03.050 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:18:03.050 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:03.050 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:18:03.050 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:03.050 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:03.050 rmmod nvme_tcp 00:18:03.050 rmmod nvme_fabrics 00:18:03.050 rmmod nvme_keyring 00:18:03.310 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:03.310 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:18:03.310 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:18:03.310 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@515 -- # '[' -n 1817773 ']' 00:18:03.310 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # killprocess 1817773 00:18:03.310 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z
1817773 ']' 00:18:03.310 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # kill -0 1817773 00:18:03.310 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname 00:18:03.310 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:03.310 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1817773 00:18:03.310 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:03.310 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:03.310 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1817773' 00:18:03.310 killing process with pid 1817773 00:18:03.310 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 1817773 00:18:03.310 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 1817773 00:18:03.310 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:18:03.310 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:18:03.310 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:18:03.310 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:18:03.310 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # iptables-save 00:18:03.310 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:18:03.310 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # iptables-restore 00:18:03.310 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:03.310 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:03.310 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:03.310 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:03.310 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:05.853 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:05.853 00:18:05.853 real 0m13.515s 00:18:05.853 user 0m7.015s 00:18:05.853 sys 0m7.022s 00:18:05.853 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:05.853 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:05.853 ************************************ 00:18:05.853 END TEST nvmf_fused_ordering 00:18:05.853 ************************************ 00:18:05.853 10:58:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:18:05.853 10:58:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:05.853 10:58:25 
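The real/user/sys summary and the START TEST / END TEST banners come from the run_test wrapper in autotest_common.sh, which the suite invokes again here to launch ns_masking.sh. A rough reduction inferred from the banners and timing lines in this trace, not the verbatim helper (the '[' 3 -le 1 ']' probe is part of the real helper's argument handling, omitted here):

  # Inferred shape of run_test: banner, time the command, banner.
  run_test() {
      local name=$1
      shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"          # produces the real/user/sys summary seen above
      local rc=$?
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
      return $rc
  }

  run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp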
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:05.854 10:58:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:05.854 ************************************ 00:18:05.854 START TEST nvmf_ns_masking 00:18:05.854 ************************************ 00:18:05.854 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:18:05.854 * Looking for test storage... 00:18:05.854 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:05.854 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:05.854 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lcov --version 00:18:05.854 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:05.854 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:05.854 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:05.854 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:05.854 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:05.854 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:18:05.854 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:18:05.854 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:18:05.854 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:18:05.854 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:18:05.854 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:18:05.854 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:18:05.854 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:05.854 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:18:05.854 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:18:05.854 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:05.854 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:05.854 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:18:05.854 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:18:05.854 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:05.854 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:18:05.854 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:18:05.854 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:18:05.854 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:18:05.854 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:05.854 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:18:05.854 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:18:05.854 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:05.854 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:05.854 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:18:05.854 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:05.854 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:05.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:05.854 --rc genhtml_branch_coverage=1 00:18:05.854 --rc genhtml_function_coverage=1 00:18:05.854 --rc genhtml_legend=1 00:18:05.854 --rc geninfo_all_blocks=1 00:18:05.854 --rc geninfo_unexecuted_blocks=1 00:18:05.854 00:18:05.854 ' 00:18:05.854 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:05.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:05.854 --rc genhtml_branch_coverage=1 00:18:05.854 --rc genhtml_function_coverage=1 00:18:05.854 --rc genhtml_legend=1 00:18:05.854 --rc geninfo_all_blocks=1 00:18:05.854 --rc geninfo_unexecuted_blocks=1 00:18:05.854 00:18:05.854 ' 00:18:05.854 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:05.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:05.854 --rc genhtml_branch_coverage=1 00:18:05.854 --rc genhtml_function_coverage=1 00:18:05.854 --rc genhtml_legend=1 00:18:05.854 --rc geninfo_all_blocks=1 00:18:05.854 --rc geninfo_unexecuted_blocks=1 00:18:05.854 00:18:05.854 ' 00:18:05.854 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:05.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:05.854 --rc genhtml_branch_coverage=1 00:18:05.854 --rc genhtml_function_coverage=1 00:18:05.854 --rc genhtml_legend=1 00:18:05.854 --rc geninfo_all_blocks=1 00:18:05.854 --rc geninfo_unexecuted_blocks=1 00:18:05.854 00:18:05.854 ' 00:18:05.854 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:05.854 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s 00:18:05.854 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:05.854 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:05.854 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:05.854 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:05.854 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:05.854 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:05.854 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:05.854 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:05.854 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:05.854 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:05.854 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:05.854 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:05.854 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:05.854 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:05.854 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:05.854 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:05.854 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:05.854 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:18:05.854 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:05.854 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:05.854 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:05.854 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:05.854 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:05.854 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:05.854 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:18:05.854 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:05.854 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:18:05.854 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:05.854 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:05.854 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:05.854 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:05.854 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:05.854 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:05.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:05.854 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:05.854 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:05.854 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:05.854 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:05.855 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:18:05.855 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:18:05.855 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:18:05.855 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=e7c85df0-b1be-4d4f-b3a6-4495f0bd3f76 00:18:05.855 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:18:05.855 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=7a9d13f6-ea4f-482c-a194-4783b4069cbf 00:18:05.855 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:18:05.855 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:18:05.855 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:18:05.855 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:18:05.855 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=f55b2f44-3b98-4bab-a3d2-77ccfb844c9e 00:18:05.855 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:18:05.855 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:18:05.855 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:05.855 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # prepare_net_devs 00:18:05.855 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@436 -- # local -g is_hw=no 00:18:05.855 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # remove_spdk_ns 00:18:05.855 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:05.855 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:05.855 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:05.855 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:18:05.855 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:18:05.855 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:18:05.855 10:58:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:13.994 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:13.994 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:18:13.994 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:13.994 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:13.994 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:13.994 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:13.994 10:58:32 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:13.994 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:18:13.994 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:13.994 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:18:13.994 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:18:13.994 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:18:13.994 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:18:13.994 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:18:13.994 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:18:13.994 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:13.994 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:13.994 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:13.994 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:13.994 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:13.994 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:13.994 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:13.994 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:13.994 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:13.994 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:13.994 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:13.994 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:13.994 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:13.994 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:13.994 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:13.994 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:13.994 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:13.994 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:13.994 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:13.994 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:18:13.994 Found 0000:31:00.0 (0x8086 - 0x159b) 00:18:13.994 10:58:32 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:13.994 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:13.994 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:13.994 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:13.994 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:13.994 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:13.994 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:18:13.994 Found 0000:31:00.1 (0x8086 - 0x159b) 00:18:13.994 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:13.994 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:13.994 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:13.994 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:13.994 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:13.994 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:13.994 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:13.994 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:13.994 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:18:13.994 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:13.994 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:18:13.994 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:13.994 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ up == up ]] 00:18:13.994 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:18:13.994 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:13.994 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:18:13.994 Found net devices under 0000:31:00.0: cvl_0_0 00:18:13.994 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:18:13.994 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:18:13.994 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:13.994 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:18:13.994 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:13.994 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ up == up ]] 
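The discovery trace above (common.sh@366-427) walks each whitelisted Intel E810 PCI function (0x8086:0x159b) and resolves it to a kernel netdev through sysfs before printing the "Found net devices under ..." lines. A minimal sketch of that lookup, with variable names mirroring the xtrace output — illustrative only, not the verbatim nvmf/common.sh source:

for pci in "${pci_devs[@]}"; do
    # a network-class PCI function exposes its interface name(s) under /sys
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the sysfs path prefix, keeping e.g. cvl_0_0
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done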
00:18:13.994 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:18:13.994 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:13.994 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:18:13.994 Found net devices under 0000:31:00.1: cvl_0_1 00:18:13.994 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:18:13.994 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:18:13.994 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # is_hw=yes 00:18:13.994 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:18:13.994 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:18:13.994 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:18:13.994 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:13.994 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:13.994 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:13.994 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:13.994 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:13.994 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:13.994 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:13.994 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:13.994 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:13.994 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:13.994 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:13.994 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:13.994 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:13.994 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:13.994 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:13.994 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:13.994 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:13.994 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:13.994 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:13.994 10:58:33 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:13.994 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:13.994 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:13.994 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:13.994 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:13.994 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.638 ms 00:18:13.994 00:18:13.994 --- 10.0.0.2 ping statistics --- 00:18:13.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:13.995 rtt min/avg/max/mdev = 0.638/0.638/0.638/0.000 ms 00:18:13.995 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:13.995 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:13.995 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.271 ms 00:18:13.995 00:18:13.995 --- 10.0.0.1 ping statistics --- 00:18:13.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:13.995 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:18:13.995 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:13.995 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # return 0 00:18:13.995 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:18:13.995 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:13.995 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:18:13.995 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:18:13.995 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:13.995 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:18:13.995 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:18:13.995 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:18:13.995 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:18:13.995 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:13.995 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:13.995 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # nvmfpid=1822577 00:18:13.995 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # waitforlisten 1822577 00:18:13.995 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:13.995 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 1822577 ']' 00:18:13.995 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:13.995 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:13.995 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:13.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:13.995 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:13.995 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:13.995 [2024-10-09 10:58:33.179385] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:18:13.995 [2024-10-09 10:58:33.179451] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:13.995 [2024-10-09 10:58:33.320830] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:18:13.995 [2024-10-09 10:58:33.353753] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:13.995 [2024-10-09 10:58:33.375613] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:13.995 [2024-10-09 10:58:33.375651] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:13.995 [2024-10-09 10:58:33.375660] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:13.995 [2024-10-09 10:58:33.375670] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:13.995 [2024-10-09 10:58:33.375678] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
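Before the reactor comes up below, the nvmf_tcp_init steps traced above (common.sh@250-291) split the two E810 ports into a point-to-point test rig: the target port is moved into the private namespace cvl_0_0_ns_spdk (where nvmf_tgt -i 0 -e 0xFFFF is then launched via ip netns exec), the initiator port stays in the root namespace, and TCP port 4420 is opened for the listener. Condensed into the underlying ip/iptables commands, with interface names and addresses copied from the trace — a sketch of the flow, not the exact common.sh code:

ip netns add cvl_0_0_ns_spdk                                        # namespace that hosts the target
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port in; cvl_0_1 stays for the initiator
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address (root namespace)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address (inside the namespace)
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

The two ping checks above (0.638 ms toward the target, 0.271 ms back toward the initiator) verify the rig end to end before the target application is started inside the namespace.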
00:18:13.995 [2024-10-09 10:58:33.376343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:13.995 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:13.995 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:18:13.995 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:18:13.995 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:13.995 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:14.255 10:58:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:14.255 10:58:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:14.255 [2024-10-09 10:58:34.164644] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:14.255 10:58:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:18:14.255 10:58:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:18:14.255 10:58:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:14.515 Malloc1 00:18:14.515 10:58:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:14.775 Malloc2 00:18:14.775 10:58:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:14.775 10:58:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:18:15.035 10:58:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:15.295 [2024-10-09 10:58:35.091681] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:15.295 10:58:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:18:15.295 10:58:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I f55b2f44-3b98-4bab-a3d2-77ccfb844c9e -a 10.0.0.2 -s 4420 -i 4 00:18:15.556 10:58:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:18:15.556 10:58:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:18:15.556 10:58:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:15.556 10:58:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:18:15.556 
10:58:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:18:17.466 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:17.466 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:17.466 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:18:17.466 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:18:17.466 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:17.466 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:18:17.466 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:17.466 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:17.466 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:17.466 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:17.466 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:18:17.466 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:17.466 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:17.466 [ 0]:0x1 00:18:17.466 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:17.466 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:17.726 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=daf256c422d64150835b59da84a0173f 00:18:17.726 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ daf256c422d64150835b59da84a0173f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:17.726 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:18:17.726 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:18:17.726 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:17.726 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:17.726 [ 0]:0x1 00:18:17.726 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:17.726 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:17.986 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=daf256c422d64150835b59da84a0173f 00:18:17.986 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ daf256c422d64150835b59da84a0173f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:17.986 10:58:37 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:18:17.986 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:17.986 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:17.986 [ 1]:0x2 00:18:17.986 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:17.986 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:17.986 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d590d6d98b2c4caeac18347938318c50 00:18:17.986 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d590d6d98b2c4caeac18347938318c50 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:17.986 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:18:17.986 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:17.986 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:17.986 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:18.246 10:58:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:18:18.506 10:58:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:18:18.506 10:58:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I f55b2f44-3b98-4bab-a3d2-77ccfb844c9e -a 10.0.0.2 -s 4420 -i 4 00:18:18.506 10:58:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:18:18.506 10:58:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:18:18.506 10:58:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:18.506 10:58:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:18:18.506 10:58:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:18:18.506 10:58:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:18:21.045 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:21.045 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:21.045 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:18:21.045 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:18:21.045 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:21.045 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # 
return 0 00:18:21.045 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:21.045 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:21.045 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:21.045 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:21.045 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:18:21.045 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:18:21.045 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:18:21.045 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:18:21.045 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:21.045 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:18:21.045 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:21.045 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:18:21.045 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:21.045 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:21.045 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:21.045 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:21.045 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:21.045 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:21.045 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:18:21.045 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:21.045 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:21.045 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:21.045 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:18:21.045 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:21.045 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:21.045 [ 0]:0x2 00:18:21.045 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:21.045 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:21.045 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=d590d6d98b2c4caeac18347938318c50 00:18:21.045 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d590d6d98b2c4caeac18347938318c50 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:21.045 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:21.045 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:18:21.045 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:21.045 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:21.045 [ 0]:0x1 00:18:21.045 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:21.045 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:21.045 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=daf256c422d64150835b59da84a0173f 00:18:21.045 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ daf256c422d64150835b59da84a0173f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:21.045 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:18:21.045 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:21.045 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:21.045 [ 1]:0x2 00:18:21.045 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:21.045 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:21.045 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d590d6d98b2c4caeac18347938318c50 00:18:21.045 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d590d6d98b2c4caeac18347938318c50 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:21.045 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:21.305 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:18:21.305 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:18:21.305 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:18:21.305 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:18:21.305 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:21.305 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:18:21.305 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:21.305 10:58:41 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:18:21.305 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:21.305 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:21.305 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:21.305 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:21.305 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:21.305 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:21.305 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:18:21.305 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:21.305 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:21.305 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:21.305 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:18:21.305 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:21.305 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:21.305 [ 0]:0x2 00:18:21.305 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:21.305 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:21.306 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d590d6d98b2c4caeac18347938318c50 00:18:21.306 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d590d6d98b2c4caeac18347938318c50 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:21.306 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:18:21.306 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:21.565 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:21.565 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:21.565 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:18:21.565 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I f55b2f44-3b98-4bab-a3d2-77ccfb844c9e -a 10.0.0.2 -s 4420 -i 4 00:18:21.824 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:18:21.824 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:18:21.824 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:21.824 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:18:21.824 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:18:21.824 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:18:23.733 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:23.733 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:23.733 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:18:23.993 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:18:23.993 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:23.993 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:18:23.993 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:23.993 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:23.993 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:23.993 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:23.993 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:18:23.993 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:23.993 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:23.993 [ 0]:0x1 00:18:23.993 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:23.993 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:23.993 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=daf256c422d64150835b59da84a0173f 00:18:23.993 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ daf256c422d64150835b59da84a0173f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:23.993 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:18:23.993 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:23.993 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:23.993 [ 1]:0x2 00:18:23.993 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:23.993 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:23.993 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d590d6d98b2c4caeac18347938318c50 00:18:23.993 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d590d6d98b2c4caeac18347938318c50 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:23.993 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:24.254 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:18:24.254 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:18:24.254 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:18:24.254 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:18:24.254 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:24.254 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:18:24.254 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:24.254 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:18:24.254 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:24.254 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:24.254 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:24.254 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:24.254 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:24.254 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:24.254 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:18:24.254 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:24.254 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:24.254 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:24.254 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:18:24.254 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:24.254 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:24.254 [ 0]:0x2 00:18:24.254 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:24.254 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:24.254 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d590d6d98b2c4caeac18347938318c50 00:18:24.254 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d590d6d98b2c4caeac18347938318c50 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:24.254 10:58:44 
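
The probe repeated throughout this test is worth spelling out: a namespace counts as visible only when it appears in nvme list-ns output and id-ns reports a non-zero NGUID for it. A minimal standalone sketch of that helper, reconstructed from this xtrace (the real body lives in target/ns_masking.sh; /dev/nvme0 is the controller name observed in this run):

    ns_is_visible() {
        local nsid=$1
        # masked namespaces drop out of the active namespace list...
        nvme list-ns /dev/nvme0 | grep -q "$nsid" || return 1
        # ...and identify with an all-zero NGUID
        local nguid
        nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
        [[ $nguid != "00000000000000000000000000000000" ]]
    }
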
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:24.254 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:18:24.254 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:24.254 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:24.254 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:24.254 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:24.254 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:24.254 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:24.254 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:24.254 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:24.254 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:18:24.254 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:24.515 [2024-10-09 10:58:44.364712] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:18:24.515 request: 00:18:24.515 { 00:18:24.515 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:24.515 "nsid": 2, 00:18:24.515 "host": "nqn.2016-06.io.spdk:host1", 00:18:24.515 "method": "nvmf_ns_remove_host", 00:18:24.515 "req_id": 1 00:18:24.515 } 00:18:24.515 Got JSON-RPC error response 00:18:24.515 response: 00:18:24.515 { 00:18:24.515 "code": -32602, 00:18:24.515 "message": "Invalid parameters" 00:18:24.515 } 00:18:24.515 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:18:24.515 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:24.515 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:24.515 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:24.515 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:18:24.515 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:18:24.515 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:18:24.515 10:58:44 
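
The -32602 response above is the expected outcome: nqn.2016-06.io.spdk:host1 was never granted namespace 2, so removing it has to fail, and the NOT wrapper converts that failure into a passing check. Reduced to its core, the wrapper traced here behaves like this sketch (the real helper in common/autotest_common.sh also special-cases es > 128, i.e. death by signal):

    NOT() {
        local es=0
        "$@" || es=$?     # run the wrapped command, keep its exit status
        (( es != 0 ))     # succeed only if the command failed
    }
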
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:18:24.515 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:24.515 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:18:24.515 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:24.515 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:18:24.515 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:24.515 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:24.515 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:24.515 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:24.515 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:24.515 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:24.515 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:18:24.515 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:24.515 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:24.515 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:24.515 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:18:24.515 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:24.515 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:24.515 [ 0]:0x2 00:18:24.515 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:24.515 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:24.775 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d590d6d98b2c4caeac18347938318c50 00:18:24.775 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d590d6d98b2c4caeac18347938318c50 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:24.775 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:18:24.775 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:24.775 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:24.775 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1825062 00:18:24.775 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:18:24.775 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:18:24.775 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1825062 /var/tmp/host.sock 00:18:24.775 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 1825062 ']' 00:18:24.775 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:18:24.775 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:24.775 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:18:24.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:18:24.775 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:24.775 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:24.775 [2024-10-09 10:58:44.628306] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:18:24.775 [2024-10-09 10:58:44.628358] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1825062 ] 00:18:24.775 [2024-10-09 10:58:44.759287] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:18:25.035 [2024-10-09 10:58:44.807683] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:25.036 [2024-10-09 10:58:44.826141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:25.606 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:25.606 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:18:25.606 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:25.607 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:18:25.866 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid e7c85df0-b1be-4d4f-b3a6-4495f0bd3f76 00:18:25.866 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@785 -- # tr -d - 00:18:25.866 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g E7C85DF0B1BE4D4FB3A64495F0BD3F76 -i 00:18:26.135 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 7a9d13f6-ea4f-482c-a194-4783b4069cbf 00:18:26.135 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@785 -- # tr -d - 00:18:26.135 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 7A9D13F6EA4F482CA1944783B4069CBF -i 00:18:26.135 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:26.394 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:18:26.654 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:18:26.654 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:18:26.913 nvme0n1 00:18:26.913 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:18:26.913 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:18:27.172 nvme1n2 00:18:27.172 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:18:27.172 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:18:27.172 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:18:27.172 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:18:27.172 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:18:27.432 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:18:27.432 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:18:27.432 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:18:27.432 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:18:27.692 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ e7c85df0-b1be-4d4f-b3a6-4495f0bd3f76 == \e\7\c\8\5\d\f\0\-\b\1\b\e\-\4\d\4\f\-\b\3\a\6\-\4\4\9\5\f\0\b\d\3\f\7\6 ]] 00:18:27.692 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:18:27.692 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:18:27.692 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:18:27.692 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 7a9d13f6-ea4f-482c-a194-4783b4069cbf == \7\a\9\d\1\3\f\6\-\e\a\4\f\-\4\8\2\c\-\a\1\9\4\-\4\7\8\3\b\4\0\6\9\c\b\f ]] 00:18:27.692 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 1825062 00:18:27.692 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 1825062 ']' 00:18:27.692 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 1825062 00:18:27.692 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:18:27.693 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:27.693 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1825062 00:18:27.693 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:27.693 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:27.693 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1825062' 00:18:27.693 killing process with pid 1825062 00:18:27.693 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 1825062 00:18:27.693 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 1825062 00:18:27.952 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:28.212 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:18:28.212 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:18:28.212 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@514 -- # nvmfcleanup 00:18:28.212 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:18:28.212 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:28.212 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:18:28.212 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:28.212 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:28.212 rmmod nvme_tcp 00:18:28.212 rmmod nvme_fabrics 00:18:28.212 rmmod nvme_keyring 00:18:28.212 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:28.212 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:18:28.212 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:18:28.212 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@515 -- # '[' -n 1822577 ']' 00:18:28.212 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # killprocess 1822577 00:18:28.212 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- 
# '[' -z 1822577 ']' 00:18:28.212 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 1822577 00:18:28.212 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:18:28.212 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:28.212 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1822577 00:18:28.212 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:28.212 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:28.212 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1822577' 00:18:28.212 killing process with pid 1822577 00:18:28.212 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 1822577 00:18:28.212 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 1822577 00:18:28.471 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:18:28.471 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:18:28.471 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:18:28.471 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:18:28.471 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # iptables-save 00:18:28.471 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:18:28.471 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # iptables-restore 00:18:28.471 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:28.471 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:28.472 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:28.472 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:28.472 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:31.010 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:31.010 00:18:31.010 real 0m24.981s 00:18:31.010 user 0m24.992s 00:18:31.010 sys 0m7.746s 00:18:31.010 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:31.010 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:31.010 ************************************ 00:18:31.010 END TEST nvmf_ns_masking 00:18:31.010 ************************************ 00:18:31.010 10:58:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:18:31.010 10:58:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:18:31.010 10:58:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 
3 -le 1 ']' 00:18:31.010 10:58:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:31.010 10:58:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:31.010 ************************************ 00:18:31.010 START TEST nvmf_nvme_cli 00:18:31.010 ************************************ 00:18:31.010 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:18:31.010 * Looking for test storage... 00:18:31.010 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:31.010 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:31.010 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lcov --version 00:18:31.010 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:31.010 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:31.010 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:31.010 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:31.010 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:31.010 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:18:31.010 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:18:31.010 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:18:31.010 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:18:31.010 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:18:31.010 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:18:31.010 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:18:31.010 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:31.010 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:18:31.010 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:18:31.010 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:31.010 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:31.010 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:18:31.010 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:18:31.010 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:31.010 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:18:31.010 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:18:31.010 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:18:31.010 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:18:31.010 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:31.010 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:18:31.010 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:18:31.011 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:31.011 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:31.011 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:18:31.011 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:31.011 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:31.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:31.011 --rc genhtml_branch_coverage=1 00:18:31.011 --rc genhtml_function_coverage=1 00:18:31.011 --rc genhtml_legend=1 00:18:31.011 --rc geninfo_all_blocks=1 00:18:31.011 --rc geninfo_unexecuted_blocks=1 00:18:31.011 00:18:31.011 ' 00:18:31.011 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:31.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:31.011 --rc genhtml_branch_coverage=1 00:18:31.011 --rc genhtml_function_coverage=1 00:18:31.011 --rc genhtml_legend=1 00:18:31.011 --rc geninfo_all_blocks=1 00:18:31.011 --rc geninfo_unexecuted_blocks=1 00:18:31.011 00:18:31.011 ' 00:18:31.011 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:31.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:31.011 --rc genhtml_branch_coverage=1 00:18:31.011 --rc genhtml_function_coverage=1 00:18:31.011 --rc genhtml_legend=1 00:18:31.011 --rc geninfo_all_blocks=1 00:18:31.011 --rc geninfo_unexecuted_blocks=1 00:18:31.011 00:18:31.011 ' 00:18:31.011 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:31.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:31.011 --rc genhtml_branch_coverage=1 00:18:31.011 --rc genhtml_function_coverage=1 00:18:31.011 --rc genhtml_legend=1 00:18:31.011 --rc geninfo_all_blocks=1 00:18:31.011 --rc geninfo_unexecuted_blocks=1 00:18:31.011 00:18:31.011 ' 00:18:31.011 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:31.011 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
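
The lt/cmp_versions calls being traced here decide whether the installed lcov is new enough for the branch and function coverage flags. Stripped of the xtrace noise, the comparison amounts to a numeric, component-wise check on dotted versions, roughly this sketch rather than a verbatim copy of scripts/common.sh:

    lt() {
        # true when dotted version $1 is strictly older than $2
        # (numeric per component, so 1.15 < 2 despite "1.15" > "2" lexically)
        local IFS=.
        local -a a=($1) b=($2)
        local i
        for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1          # equal versions are not less-than
    }
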
00:18:31.011 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:31.011 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:31.011 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:31.011 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:31.011 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:31.011 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:31.011 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:31.011 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:31.011 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:31.011 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:31.011 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:31.011 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:31.011 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:31.011 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:31.011 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:31.011 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:31.011 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:31.011 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:18:31.011 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:31.011 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:31.011 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:31.011 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.011 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.011 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.011 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:18:31.011 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.011 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:18:31.011 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:31.011 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:31.011 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:31.011 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:31.011 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:31.011 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:31.011 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:31.011 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:31.011 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:31.011 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:31.011 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:31.011 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:31.011 10:58:50 
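
Two details in this stretch of setup deserve a note. The '[: : integer expression expected' complaint is the traced '[' '' -eq 1 ']' test at nvmf/common.sh line 33 tripping over an empty variable; it is logged and ignored. More usefully, the host identity used by every later discover/connect is derived here; standalone it looks like this (a sketch, the UUID is machine-specific):

    # nvme-cli mints nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTNQN=$(nvme gen-hostnqn)
    # the bare UUID after the last colon doubles as the host ID
    NVME_HOSTID=${NVME_HOSTNQN##*:}
    echo "hostnqn=$NVME_HOSTNQN hostid=$NVME_HOSTID"
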
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:18:31.011 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:18:31.011 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:18:31.011 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:31.011 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # prepare_net_devs 00:18:31.011 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@436 -- # local -g is_hw=no 00:18:31.011 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # remove_spdk_ns 00:18:31.011 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:31.011 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:31.011 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:31.011 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:18:31.011 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:18:31.011 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:18:31.011 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:39.142 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:39.142 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:18:39.142 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:39.142 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:39.142 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:39.142 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:39.142 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:39.142 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:18:39.142 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:39.142 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:18:39.142 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:18:39.142 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:18:39.142 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:18:39.142 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:18:39.142 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:18:39.142 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:39.142 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:39.142 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:39.142 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:39.142 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:39.142 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:39.142 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:39.142 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:39.142 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:39.142 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:39.142 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:39.142 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:39.142 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:39.142 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:39.142 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:39.142 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:39.142 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:39.142 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:39.142 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:39.142 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:18:39.142 Found 0000:31:00.0 (0x8086 - 0x159b) 00:18:39.142 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:39.142 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:39.142 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:39.142 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:39.142 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:39.142 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:39.142 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:18:39.142 Found 0000:31:00.1 (0x8086 - 0x159b) 00:18:39.142 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:39.142 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:39.142 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:39.142 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:39.142 
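
gather_supported_nvmf_pci_devs is matching the machine's NICs against device-ID tables for Intel E810/X722 and assorted Mellanox parts; both ports here report 0x8086:0x159b, an Intel E810 bound to the ice driver. The same inventory can be taken straight from sysfs, a minimal sketch hard-coding only the ID seen in this run:

    for pci in /sys/bus/pci/devices/*; do
        vendor=$(<"$pci/vendor") device=$(<"$pci/device")
        if [[ $vendor == 0x8086 && $device == 0x159b ]]; then
            echo "Found ${pci##*/} ($vendor - $device)"
            ls "$pci/net"    # kernel interface name(s), e.g. cvl_0_0
        fi
    done
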
10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:39.142 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:39.142 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:39.142 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:39.142 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:18:39.142 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:39.142 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:18:39.142 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:39.142 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ up == up ]] 00:18:39.142 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:18:39.142 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:39.142 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:18:39.142 Found net devices under 0000:31:00.0: cvl_0_0 00:18:39.142 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:18:39.142 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:18:39.142 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:39.142 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:18:39.142 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:39.142 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ up == up ]] 00:18:39.142 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:18:39.142 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:39.142 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:18:39.142 Found net devices under 0000:31:00.1: cvl_0_1 00:18:39.142 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:18:39.142 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:18:39.142 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # is_hw=yes 00:18:39.142 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:18:39.142 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:18:39.142 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:18:39.142 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:39.142 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:39.143 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:39.143 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:39.143 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:39.143 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:39.143 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:39.143 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:39.143 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:39.143 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:39.143 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:39.143 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:39.143 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:39.143 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:39.143 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:39.143 10:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:39.143 10:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:39.143 10:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:39.143 10:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:39.143 10:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:39.143 10:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:39.143 10:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:39.143 10:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:39.143 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:39.143 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.572 ms 00:18:39.143 00:18:39.143 --- 10.0.0.2 ping statistics --- 00:18:39.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:39.143 rtt min/avg/max/mdev = 0.572/0.572/0.572/0.000 ms 00:18:39.143 10:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:39.143 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
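
nvmf_tcp_init puts both ends of the fabric on one host: the target port cvl_0_0 moves into namespace cvl_0_0_ns_spdk as 10.0.0.2 while cvl_0_1 stays in the root namespace as the 10.0.0.1 initiator, and the two pings prove the path. Condensed from the commands traced above:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port toward the initiator
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator
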
00:18:39.143 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.269 ms 00:18:39.143 00:18:39.143 --- 10.0.0.1 ping statistics --- 00:18:39.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:39.143 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:18:39.143 10:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:39.143 10:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # return 0 00:18:39.143 10:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:18:39.143 10:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:39.143 10:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:18:39.143 10:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:18:39.143 10:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:39.143 10:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:18:39.143 10:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:18:39.143 10:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:18:39.143 10:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:18:39.143 10:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:39.143 10:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:39.143 10:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # nvmfpid=1830163 00:18:39.143 10:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # waitforlisten 1830163 00:18:39.143 10:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:39.143 10:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # '[' -z 1830163 ']' 00:18:39.143 10:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:39.143 10:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:39.143 10:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:39.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:39.143 10:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:39.143 10:58:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:39.143 [2024-10-09 10:58:58.397536] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 
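
With the link verified, nvmfappstart launches the target application inside that namespace and blocks until its RPC socket answers. Distilled from the trace, with waitforlisten's poll loop simplified to a plain retry (rpc_get_methods is just a cheap RPC used here to probe readiness):

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        rpc_get_methods &>/dev/null; do
        sleep 0.5
    done

The four 'Reactor started' lines that follow are the -m 0xF core mask at work, one reactor per core 0 through 3.
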
00:18:39.143 [2024-10-09 10:58:58.397585] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:39.143 [2024-10-09 10:58:58.537257] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:18:39.143 [2024-10-09 10:58:58.568795] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:39.143 [2024-10-09 10:58:58.587747] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:39.143 [2024-10-09 10:58:58.587780] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:39.143 [2024-10-09 10:58:58.587788] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:39.143 [2024-10-09 10:58:58.587794] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:39.143 [2024-10-09 10:58:58.587800] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:39.143 [2024-10-09 10:58:58.589498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:39.143 [2024-10-09 10:58:58.589692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:39.143 [2024-10-09 10:58:58.589899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:39.143 [2024-10-09 10:58:58.589900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:39.402 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:39.402 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # return 0 00:18:39.402 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:18:39.402 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:39.402 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:39.402 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:39.402 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:39.402 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.402 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:39.402 [2024-10-09 10:58:59.303576] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:39.403 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.403 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:39.403 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.403 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:39.403 Malloc0 00:18:39.403 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.403 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
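
Everything the initiator will enumerate is now provisioned over RPC: a TCP transport, two 64 MB malloc bdevs, subsystem nqn.2016-06.io.spdk:cnode1 with serial SPDKISFASTANDAWESOME, both bdevs attached as namespaces, and listeners for the subsystem and for discovery on 10.0.0.2:4420. The sequence without the xtrace noise, rpc_cmd being the harness wrapper around scripts/rpc.py:

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd bdev_malloc_create 64 512 -b Malloc0    # 64 MB, 512 B blocks
    rpc_cmd bdev_malloc_create 64 512 -b Malloc1
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
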
target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:39.403 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.403 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:39.403 Malloc1 00:18:39.403 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.403 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:18:39.403 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.403 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:39.403 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.403 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:39.403 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.403 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:39.403 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.403 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:39.403 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.403 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:39.403 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.403 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:39.403 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.403 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:39.663 [2024-10-09 10:58:59.406352] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:39.663 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.663 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:39.663 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.663 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:39.663 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.663 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 4420 00:18:39.663 00:18:39.663 Discovery Log Number of Records 2, Generation counter 2 00:18:39.663 =====Discovery Log Entry 0====== 00:18:39.663 trtype: tcp 00:18:39.663 adrfam: 
ipv4
00:18:39.663 subtype: current discovery subsystem
00:18:39.663 treq: not required
00:18:39.663 portid: 0
00:18:39.663 trsvcid: 4420
00:18:39.663 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:18:39.663 traddr: 10.0.0.2
00:18:39.663 eflags: explicit discovery connections, duplicate discovery information
00:18:39.663 sectype: none
00:18:39.663 =====Discovery Log Entry 1======
00:18:39.663 trtype: tcp
00:18:39.663 adrfam: ipv4
00:18:39.663 subtype: nvme subsystem
00:18:39.663 treq: not required
00:18:39.663 portid: 0
00:18:39.663 trsvcid: 4420
00:18:39.663 subnqn: nqn.2016-06.io.spdk:cnode1
00:18:39.663 traddr: 10.0.0.2
00:18:39.663 eflags: none
00:18:39.663 sectype: none
00:18:39.663 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs))
00:18:39.663 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs
00:18:39.663 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _
00:18:39.663 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _
00:18:39.663 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list
00:18:39.663 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]]
00:18:39.663 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _
00:18:39.663 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]]
00:18:39.663 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _
00:18:39.663 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0
00:18:39.663 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:18:41.571 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2
00:18:41.571 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0
00:18:41.571 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:18:41.571 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]]
00:18:41.571 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2
00:18:41.571 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2
00:18:43.479 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:18:43.479 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:18:43.479 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:18:43.479 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2
00:18:43.479 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:18:43.479 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- #
return 0 00:18:43.479 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:18:43.479 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:18:43.479 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:18:43.479 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:18:43.479 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:18:43.479 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:18:43.479 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:18:43.479 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:18:43.479 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:18:43.479 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n1 00:18:43.479 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:18:43.479 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:18:43.479 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n2 00:18:43.479 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:18:43.479 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:18:43.479 /dev/nvme0n2 ]] 00:18:43.479 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:18:43.479 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:18:43.479 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:18:43.479 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:18:43.479 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:18:43.479 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:18:43.479 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:18:43.479 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:18:43.479 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:18:43.479 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:18:43.479 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n1 00:18:43.479 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:18:43.479 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:18:43.479 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n2 00:18:43.479 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:18:43.479 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:18:43.479 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:43.739 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:43.739 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:43.739 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:18:43.739 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:18:43.739 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:43.998 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:18:43.998 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:43.998 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:18:43.998 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:18:43.998 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:43.998 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.998 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:43.998 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.998 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:18:43.998 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:18:43.998 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@514 -- # nvmfcleanup 00:18:43.998 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:18:43.998 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:43.998 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:18:43.998 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:43.998 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:43.998 rmmod nvme_tcp 00:18:43.998 rmmod nvme_fabrics 00:18:43.998 rmmod nvme_keyring 00:18:43.998 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:43.998 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:18:43.998 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:18:43.998 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@515 -- # '[' -n 1830163 ']' 00:18:43.998 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # killprocess 1830163 00:18:43.998 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # '[' -z 1830163 ']' 00:18:43.998 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # kill -0 1830163 00:18:43.998 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # uname 00:18:43.998 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # '[' Linux = 
Linux ']'
00:18:43.998 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1830163
00:18:43.998 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:18:43.998 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:18:43.998 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1830163'
00:18:43.998 killing process with pid 1830163
00:18:43.998 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@969 -- # kill 1830163
00:18:43.998 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@974 -- # wait 1830163
00:18:44.259 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:18:44.259 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:18:44.259 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:18:44.259 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr
00:18:44.259 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@789 -- # iptables-save
00:18:44.259 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:18:44.259 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@789 -- # iptables-restore
00:18:44.259 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:18:44.259 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns
00:18:44.259 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:18:44.259 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:18:44.259 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:18:46.167 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:18:46.167
00:18:46.167 real 0m15.636s
00:18:46.167 user 0m24.005s
00:18:46.167 sys 0m6.419s
00:18:46.167 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1126 -- # xtrace_disable
00:18:46.167 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:18:46.167 ************************************
00:18:46.167 END TEST nvmf_nvme_cli
00:18:46.167 ************************************
00:18:46.167 10:59:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]]
00:18:46.167 10:59:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp
00:18:46.167 10:59:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:18:46.167 10:59:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable
00:18:46.167 10:59:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:18:46.428 ************************************
00:18:46.428 START TEST nvmf_vfio_user
00:18:46.428 ************************************
00:18:46.428 10:59:06
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:18:46.428 * Looking for test storage... 00:18:46.428 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:46.428 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:46.428 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # lcov --version 00:18:46.428 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:46.428 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:46.428 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:46.428 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:46.428 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:46.428 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:18:46.428 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:18:46.428 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:18:46.428 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:18:46.428 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:18:46.428 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:18:46.428 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:18:46.428 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:46.428 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:18:46.428 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:18:46.428 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:46.428 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:46.428 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:18:46.428 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:18:46.428 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:46.428 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:18:46.428 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:18:46.428 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:18:46.428 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:18:46.428 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:46.428 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:18:46.428 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:18:46.428 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:46.428 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:46.428 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:18:46.428 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:46.428 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:46.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:46.428 --rc genhtml_branch_coverage=1 00:18:46.428 --rc genhtml_function_coverage=1 00:18:46.428 --rc genhtml_legend=1 00:18:46.428 --rc geninfo_all_blocks=1 00:18:46.428 --rc geninfo_unexecuted_blocks=1 00:18:46.428 00:18:46.428 ' 00:18:46.428 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:46.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:46.428 --rc genhtml_branch_coverage=1 00:18:46.428 --rc genhtml_function_coverage=1 00:18:46.428 --rc genhtml_legend=1 00:18:46.428 --rc geninfo_all_blocks=1 00:18:46.428 --rc geninfo_unexecuted_blocks=1 00:18:46.428 00:18:46.428 ' 00:18:46.428 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:46.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:46.428 --rc genhtml_branch_coverage=1 00:18:46.428 --rc genhtml_function_coverage=1 00:18:46.428 --rc genhtml_legend=1 00:18:46.429 --rc geninfo_all_blocks=1 00:18:46.429 --rc geninfo_unexecuted_blocks=1 00:18:46.429 00:18:46.429 ' 00:18:46.429 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:46.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:46.429 --rc genhtml_branch_coverage=1 00:18:46.429 --rc genhtml_function_coverage=1 00:18:46.429 --rc genhtml_legend=1 00:18:46.429 --rc geninfo_all_blocks=1 00:18:46.429 --rc geninfo_unexecuted_blocks=1 00:18:46.429 00:18:46.429 ' 00:18:46.429 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:46.429 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:18:46.429 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:46.429 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:46.429 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:46.429 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:46.429 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:46.429 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:46.429 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:46.429 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:46.429 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:46.429 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:46.429 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:46.429 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:46.429 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:46.429 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:46.429 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:46.429 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:46.429 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:46.429 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:18:46.429 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:46.429 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:46.429 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:46.429 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.429 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.429 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.429 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:18:46.429 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.429 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:18:46.429 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:46.429 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:46.429 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:46.429 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:46.429 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:46.429 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:46.429 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:46.429 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:46.429 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:46.429 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:46.690 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:18:46.690 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
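The nvmf_vfio_user run that follows drives everything through rpc.py against a freshly started nvmf_tgt. Condensed from the xtrace records below, the per-device setup amounts to the following sketch; the paths, NQNs, serial, and sizes are the ones this log uses, and the target process itself is assumed to have been started separately (the script does this via its waitforlisten helper):

  # Sketch of setup_nvmf_vfio_user for device 1, per the RPCs traced below.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t VFIOUSER              # vfio-user transport, default options
  mkdir -p /var/run/vfio-user/domain/vfio-user1/1     # directory that will hold the controller socket
  $rpc bdev_malloc_create 64 512 -b Malloc1           # 64 MiB RAM-backed bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
  $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
  $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 \
      -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0

Device 2 repeats the same sequence with Malloc2, cnode2, and /var/run/vfio-user/domain/vfio-user2/2, as the trace below shows.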
00:18:46.690 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2
00:18:46.690 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:18:46.690 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER
00:18:46.690 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER
00:18:46.690 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user
00:18:46.690 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' ''
00:18:46.690 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=
00:18:46.690 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args=
00:18:46.690 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1831718
00:18:46.690 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1831718'
00:18:46.690 Process pid: 1831718
00:18:46.690 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT
00:18:46.690 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1831718
00:18:46.690 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]'
00:18:46.690 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 1831718 ']'
00:18:46.690 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:18:46.690 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100
00:18:46.690 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:18:46.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:18:46.690 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable
00:18:46.690 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x
00:18:46.690 [2024-10-09 10:59:06.494266] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization...
00:18:46.690 [2024-10-09 10:59:06.494335] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:18:46.690 [2024-10-09 10:59:06.628913] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation.
00:18:46.690 [2024-10-09 10:59:06.661514] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:18:46.690 [2024-10-09 10:59:06.684696] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:18:46.690 [2024-10-09 10:59:06.684737] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:46.690 [2024-10-09 10:59:06.684745] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:46.690 [2024-10-09 10:59:06.684753] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:46.690 [2024-10-09 10:59:06.684759] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:46.691 [2024-10-09 10:59:06.686506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:46.691 [2024-10-09 10:59:06.686717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:46.691 [2024-10-09 10:59:06.686717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:46.691 [2024-10-09 10:59:06.686571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:47.629 10:59:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:47.629 10:59:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:18:47.629 10:59:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:18:48.568 10:59:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:18:48.568 10:59:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:18:48.568 10:59:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:18:48.568 10:59:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:48.568 10:59:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:18:48.568 10:59:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:48.827 Malloc1 00:18:48.827 10:59:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:18:49.087 10:59:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:18:49.087 10:59:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:18:49.348 10:59:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:49.348 10:59:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:18:49.348 10:59:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:49.608 Malloc2 00:18:49.608 10:59:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:18:49.868 10:59:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:18:49.868 10:59:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:18:50.128 10:59:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:18:50.128 10:59:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:18:50.128 10:59:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:50.128 10:59:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:18:50.128 10:59:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:18:50.128 10:59:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:18:50.128 [2024-10-09 10:59:10.030181] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:18:50.128 [2024-10-09 10:59:10.030237] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1832451 ] 00:18:50.390 [2024-10-09 10:59:10.142880] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
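With both subsystems listening, the script exercises each endpoint with SPDK's own host tools; a vfio-user controller is addressed by a transport ID whose traddr is the listener directory rather than an IP address. The identify run traced here uses exactly that trid string. A sketch of the invocation (the identify command is taken verbatim from this log; the perf line is an illustrative assumption, not something this script runs):

  # Identify the first vfio-user controller, as this test does:
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
      -g -L nvme -L nvme_vfio -L vfio_pci
  # Hypothetical I/O smoke test against the same endpoint (not part of this run):
  # spdk_nvme_perf -q 4 -o 4096 -w randread -t 10 \
  #     -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'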
00:18:50.390 [2024-10-09 10:59:10.162161] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:18:50.390 [2024-10-09 10:59:10.170691] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:50.390 [2024-10-09 10:59:10.170711] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f6be85a0000 00:18:50.390 [2024-10-09 10:59:10.171685] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:50.390 [2024-10-09 10:59:10.172693] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:50.390 [2024-10-09 10:59:10.173686] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:50.390 [2024-10-09 10:59:10.174693] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:50.390 [2024-10-09 10:59:10.175694] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:50.390 [2024-10-09 10:59:10.176693] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:50.390 [2024-10-09 10:59:10.177696] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:50.390 [2024-10-09 10:59:10.178705] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:50.390 [2024-10-09 10:59:10.179715] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:50.390 [2024-10-09 10:59:10.179725] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f6be72a3000 00:18:50.390 [2024-10-09 10:59:10.181055] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:50.390 [2024-10-09 10:59:10.201638] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:18:50.390 [2024-10-09 10:59:10.201671] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:18:50.390 [2024-10-09 10:59:10.203780] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:18:50.390 [2024-10-09 10:59:10.203824] nvme_pcie_common.c: 134:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:18:50.390 [2024-10-09 10:59:10.203910] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:18:50.390 [2024-10-09 10:59:10.203928] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:18:50.390 [2024-10-09 10:59:10.203934] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 
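A note for reading the register trace around this point: the offsets in the nvme_vfio_ctrlr_get_reg/set_reg records are the standard NVMe controller registers, so the enable handshake in the records above and below can be checked directly against the logged values. A quick reference (offsets per the NVMe base specification; the values are the ones this log reports):

  # NVMe register offsets appearing in this trace:
  #   0x00  CAP   controller capabilities      -> 0x201e0100ff
  #   0x08  VS    version                      -> 0x10300, i.e. NVMe 1.3
  #   0x14  CC    controller configuration     (EN bit drives enable/disable)
  #   0x1c  CSTS  controller status            (RDY bit acknowledges CC.EN)
  #   0x24  AQA   admin queue attributes       -> 0xff00ff = 256-entry admin SQ/CQ
  #   0x28  ASQ   admin submission queue base  -> 0x2000003a0000
  #   0x30  ACQ   admin completion queue base  -> 0x20000039e000
  # The init sequence follows the spec: read CAP/VS, clear CC.EN and wait for
  # CSTS.RDY=0, program AQA/ASQ/ACQ, set CC.EN=1, then wait for CSTS.RDY=1.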
00:18:50.390 [2024-10-09 10:59:10.204774] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:18:50.390 [2024-10-09 10:59:10.204783] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:18:50.390 [2024-10-09 10:59:10.204790] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:18:50.390 [2024-10-09 10:59:10.209474] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:18:50.390 [2024-10-09 10:59:10.209484] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:18:50.390 [2024-10-09 10:59:10.209491] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:18:50.390 [2024-10-09 10:59:10.209784] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:18:50.390 [2024-10-09 10:59:10.209793] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:18:50.390 [2024-10-09 10:59:10.210790] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:18:50.390 [2024-10-09 10:59:10.210798] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:18:50.390 [2024-10-09 10:59:10.210803] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:18:50.390 [2024-10-09 10:59:10.210810] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:18:50.390 [2024-10-09 10:59:10.210915] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:18:50.390 [2024-10-09 10:59:10.210920] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:18:50.390 [2024-10-09 10:59:10.210926] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003a0000 00:18:50.390 [2024-10-09 10:59:10.211792] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x20000039e000 00:18:50.390 [2024-10-09 10:59:10.212792] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:18:50.390 [2024-10-09 10:59:10.213796] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:18:50.390 [2024-10-09 10:59:10.214792] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:50.390 [2024-10-09 10:59:10.214842] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for 
CSTS.RDY = 1 (timeout 15000 ms) 00:18:50.390 [2024-10-09 10:59:10.215803] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:18:50.390 [2024-10-09 10:59:10.215812] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:18:50.390 [2024-10-09 10:59:10.215816] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:18:50.390 [2024-10-09 10:59:10.215838] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:18:50.390 [2024-10-09 10:59:10.215851] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:18:50.390 [2024-10-09 10:59:10.215866] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002db000 len:4096 00:18:50.390 [2024-10-09 10:59:10.215871] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002db000 00:18:50.390 [2024-10-09 10:59:10.215875] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:50.390 [2024-10-09 10:59:10.215888] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002db000 PRP2 0x0 00:18:50.390 [2024-10-09 10:59:10.215921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:18:50.390 [2024-10-09 10:59:10.215931] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:18:50.390 [2024-10-09 10:59:10.215936] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:18:50.390 [2024-10-09 10:59:10.215940] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:18:50.390 [2024-10-09 10:59:10.215945] nvme_ctrlr.c:2095:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:18:50.390 [2024-10-09 10:59:10.215950] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:18:50.390 [2024-10-09 10:59:10.215955] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:18:50.390 [2024-10-09 10:59:10.215960] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:18:50.390 [2024-10-09 10:59:10.215968] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:18:50.390 [2024-10-09 10:59:10.215978] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:18:50.390 [2024-10-09 10:59:10.215988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:18:50.390 [2024-10-09 10:59:10.215999] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:50.390 [2024-10-09 10:59:10.216008] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:50.391 [2024-10-09 10:59:10.216016] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:50.391 [2024-10-09 10:59:10.216024] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:50.391 [2024-10-09 10:59:10.216032] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:18:50.391 [2024-10-09 10:59:10.216041] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:18:50.391 [2024-10-09 10:59:10.216050] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:18:50.391 [2024-10-09 10:59:10.216057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:18:50.391 [2024-10-09 10:59:10.216063] nvme_ctrlr.c:3034:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:18:50.391 [2024-10-09 10:59:10.216068] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:18:50.391 [2024-10-09 10:59:10.216075] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:18:50.391 [2024-10-09 10:59:10.216086] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:18:50.391 [2024-10-09 10:59:10.216095] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:50.391 [2024-10-09 10:59:10.216102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:18:50.391 [2024-10-09 10:59:10.216164] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:18:50.391 [2024-10-09 10:59:10.216172] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:18:50.391 [2024-10-09 10:59:10.216180] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002d9000 len:4096 00:18:50.391 [2024-10-09 10:59:10.216184] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002d9000 00:18:50.391 [2024-10-09 10:59:10.216188] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:50.391 [2024-10-09 10:59:10.216194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002d9000 PRP2 0x0 00:18:50.391 [2024-10-09 10:59:10.216204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) 
qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:18:50.391 [2024-10-09 10:59:10.216213] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:18:50.391 [2024-10-09 10:59:10.216221] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:18:50.391 [2024-10-09 10:59:10.216229] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:18:50.391 [2024-10-09 10:59:10.216236] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002db000 len:4096 00:18:50.391 [2024-10-09 10:59:10.216240] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002db000 00:18:50.391 [2024-10-09 10:59:10.216244] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:50.391 [2024-10-09 10:59:10.216250] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002db000 PRP2 0x0 00:18:50.391 [2024-10-09 10:59:10.216266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:18:50.391 [2024-10-09 10:59:10.216278] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:18:50.391 [2024-10-09 10:59:10.216288] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:18:50.391 [2024-10-09 10:59:10.216295] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002db000 len:4096 00:18:50.391 [2024-10-09 10:59:10.216299] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002db000 00:18:50.391 [2024-10-09 10:59:10.216303] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:50.391 [2024-10-09 10:59:10.216309] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002db000 PRP2 0x0 00:18:50.391 [2024-10-09 10:59:10.216321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:18:50.391 [2024-10-09 10:59:10.216329] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:18:50.391 [2024-10-09 10:59:10.216335] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:18:50.391 [2024-10-09 10:59:10.216343] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:18:50.391 [2024-10-09 10:59:10.216349] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:18:50.391 [2024-10-09 10:59:10.216354] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:18:50.391 [2024-10-09 10:59:10.216360] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:18:50.391 [2024-10-09 10:59:10.216365] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:18:50.391 [2024-10-09 10:59:10.216370] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:18:50.391 [2024-10-09 10:59:10.216375] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:18:50.391 [2024-10-09 10:59:10.216392] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:18:50.391 [2024-10-09 10:59:10.216400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:18:50.391 [2024-10-09 10:59:10.216412] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:18:50.391 [2024-10-09 10:59:10.216424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:18:50.391 [2024-10-09 10:59:10.216436] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:18:50.391 [2024-10-09 10:59:10.216445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:18:50.391 [2024-10-09 10:59:10.216457] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:50.391 [2024-10-09 10:59:10.216464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:18:50.391 [2024-10-09 10:59:10.216484] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002d6000 len:8192 00:18:50.391 [2024-10-09 10:59:10.216488] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002d6000 00:18:50.391 [2024-10-09 10:59:10.216492] nvme_pcie_common.c:1241:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002d7000 00:18:50.391 [2024-10-09 10:59:10.216497] nvme_pcie_common.c:1257:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002d7000 00:18:50.391 [2024-10-09 10:59:10.216501] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:18:50.391 [2024-10-09 10:59:10.216507] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002d6000 PRP2 0x2000002d7000 00:18:50.391 [2024-10-09 10:59:10.216515] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002dc000 len:512 00:18:50.391 [2024-10-09 10:59:10.216519] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002dc000 00:18:50.391 [2024-10-09 10:59:10.216522] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:50.391 [2024-10-09 10:59:10.216528] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002dc000 PRP2 0x0 00:18:50.391 [2024-10-09 10:59:10.216535] 
nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002db000 len:512 00:18:50.391 [2024-10-09 10:59:10.216540] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002db000 00:18:50.391 [2024-10-09 10:59:10.216543] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:50.391 [2024-10-09 10:59:10.216549] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002db000 PRP2 0x0 00:18:50.391 [2024-10-09 10:59:10.216557] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002d4000 len:4096 00:18:50.391 [2024-10-09 10:59:10.216561] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002d4000 00:18:50.391 [2024-10-09 10:59:10.216564] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:50.391 [2024-10-09 10:59:10.216570] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002d4000 PRP2 0x0 00:18:50.391 [2024-10-09 10:59:10.216578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:18:50.391 [2024-10-09 10:59:10.216589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:18:50.391 [2024-10-09 10:59:10.216600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:18:50.391 [2024-10-09 10:59:10.216607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:18:50.391 ===================================================== 00:18:50.391 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:50.391 ===================================================== 00:18:50.391 Controller Capabilities/Features 00:18:50.391 ================================ 00:18:50.391 Vendor ID: 4e58 00:18:50.391 Subsystem Vendor ID: 4e58 00:18:50.391 Serial Number: SPDK1 00:18:50.391 Model Number: SPDK bdev Controller 00:18:50.391 Firmware Version: 25.01 00:18:50.391 Recommended Arb Burst: 6 00:18:50.391 IEEE OUI Identifier: 8d 6b 50 00:18:50.391 Multi-path I/O 00:18:50.391 May have multiple subsystem ports: Yes 00:18:50.391 May have multiple controllers: Yes 00:18:50.391 Associated with SR-IOV VF: No 00:18:50.391 Max Data Transfer Size: 131072 00:18:50.391 Max Number of Namespaces: 32 00:18:50.391 Max Number of I/O Queues: 127 00:18:50.391 NVMe Specification Version (VS): 1.3 00:18:50.391 NVMe Specification Version (Identify): 1.3 00:18:50.391 Maximum Queue Entries: 256 00:18:50.391 Contiguous Queues Required: Yes 00:18:50.391 Arbitration Mechanisms Supported 00:18:50.391 Weighted Round Robin: Not Supported 00:18:50.391 Vendor Specific: Not Supported 00:18:50.391 Reset Timeout: 15000 ms 00:18:50.391 Doorbell Stride: 4 bytes 00:18:50.391 NVM Subsystem Reset: Not Supported 00:18:50.391 Command Sets Supported 00:18:50.391 NVM Command Set: Supported 00:18:50.391 Boot Partition: Not Supported 00:18:50.391 Memory Page Size Minimum: 4096 bytes 00:18:50.392 Memory Page Size Maximum: 4096 bytes 00:18:50.392 Persistent Memory Region: Not Supported 00:18:50.392 Optional Asynchronous Events Supported 00:18:50.392 Namespace Attribute Notices: 
Supported 00:18:50.392 Firmware Activation Notices: Not Supported 00:18:50.392 ANA Change Notices: Not Supported 00:18:50.392 PLE Aggregate Log Change Notices: Not Supported 00:18:50.392 LBA Status Info Alert Notices: Not Supported 00:18:50.392 EGE Aggregate Log Change Notices: Not Supported 00:18:50.392 Normal NVM Subsystem Shutdown event: Not Supported 00:18:50.392 Zone Descriptor Change Notices: Not Supported 00:18:50.392 Discovery Log Change Notices: Not Supported 00:18:50.392 Controller Attributes 00:18:50.392 128-bit Host Identifier: Supported 00:18:50.392 Non-Operational Permissive Mode: Not Supported 00:18:50.392 NVM Sets: Not Supported 00:18:50.392 Read Recovery Levels: Not Supported 00:18:50.392 Endurance Groups: Not Supported 00:18:50.392 Predictable Latency Mode: Not Supported 00:18:50.392 Traffic Based Keep Alive: Not Supported 00:18:50.392 Namespace Granularity: Not Supported 00:18:50.392 SQ Associations: Not Supported 00:18:50.392 UUID List: Not Supported 00:18:50.392 Multi-Domain Subsystem: Not Supported 00:18:50.392 Fixed Capacity Management: Not Supported 00:18:50.392 Variable Capacity Management: Not Supported 00:18:50.392 Delete Endurance Group: Not Supported 00:18:50.392 Delete NVM Set: Not Supported 00:18:50.392 Extended LBA Formats Supported: Not Supported 00:18:50.392 Flexible Data Placement Supported: Not Supported 00:18:50.392 00:18:50.392 Controller Memory Buffer Support 00:18:50.392 ================================ 00:18:50.392 Supported: No 00:18:50.392 00:18:50.392 Persistent Memory Region Support 00:18:50.392 ================================ 00:18:50.392 Supported: No 00:18:50.392 00:18:50.392 Admin Command Set Attributes 00:18:50.392 ============================ 00:18:50.392 Security Send/Receive: Not Supported 00:18:50.392 Format NVM: Not Supported 00:18:50.392 Firmware Activate/Download: Not Supported 00:18:50.392 Namespace Management: Not Supported 00:18:50.392 Device Self-Test: Not Supported 00:18:50.392 Directives: Not Supported 00:18:50.392 NVMe-MI: Not Supported 00:18:50.392 Virtualization Management: Not Supported 00:18:50.392 Doorbell Buffer Config: Not Supported 00:18:50.392 Get LBA Status Capability: Not Supported 00:18:50.392 Command & Feature Lockdown Capability: Not Supported 00:18:50.392 Abort Command Limit: 4 00:18:50.392 Async Event Request Limit: 4 00:18:50.392 Number of Firmware Slots: N/A 00:18:50.392 Firmware Slot 1 Read-Only: N/A 00:18:50.392 Firmware Activation Without Reset: N/A 00:18:50.392 Multiple Update Detection Support: N/A 00:18:50.392 Firmware Update Granularity: No Information Provided 00:18:50.392 Per-Namespace SMART Log: No 00:18:50.392 Asymmetric Namespace Access Log Page: Not Supported 00:18:50.392 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:18:50.392 Command Effects Log Page: Supported 00:18:50.392 Get Log Page Extended Data: Supported 00:18:50.392 Telemetry Log Pages: Not Supported 00:18:50.392 Persistent Event Log Pages: Not Supported 00:18:50.392 Supported Log Pages Log Page: May Support 00:18:50.392 Commands Supported & Effects Log Page: Not Supported 00:18:50.392 Feature Identifiers & Effects Log Page: May Support 00:18:50.392 NVMe-MI Commands & Effects Log Page: May Support 00:18:50.392 Data Area 4 for Telemetry Log: Not Supported 00:18:50.392 Error Log Page Entries Supported: 128 00:18:50.392 Keep Alive: Supported 00:18:50.392 Keep Alive Granularity: 10000 ms 00:18:50.392 00:18:50.392 NVM Command Set Attributes 00:18:50.392 ========================== 00:18:50.392 Submission Queue Entry Size 00:18:50.392 Max: 64
00:18:50.392 Min: 64 00:18:50.392 Completion Queue Entry Size 00:18:50.392 Max: 16 00:18:50.392 Min: 16 00:18:50.392 Number of Namespaces: 32 00:18:50.392 Compare Command: Supported 00:18:50.392 Write Uncorrectable Command: Not Supported 00:18:50.392 Dataset Management Command: Supported 00:18:50.392 Write Zeroes Command: Supported 00:18:50.392 Set Features Save Field: Not Supported 00:18:50.392 Reservations: Not Supported 00:18:50.392 Timestamp: Not Supported 00:18:50.392 Copy: Supported 00:18:50.392 Volatile Write Cache: Present 00:18:50.392 Atomic Write Unit (Normal): 1 00:18:50.392 Atomic Write Unit (PFail): 1 00:18:50.392 Atomic Compare & Write Unit: 1 00:18:50.392 Fused Compare & Write: Supported 00:18:50.392 Scatter-Gather List 00:18:50.392 SGL Command Set: Supported (Dword aligned) 00:18:50.392 SGL Keyed: Not Supported 00:18:50.392 SGL Bit Bucket Descriptor: Not Supported 00:18:50.392 SGL Metadata Pointer: Not Supported 00:18:50.392 Oversized SGL: Not Supported 00:18:50.392 SGL Metadata Address: Not Supported 00:18:50.392 SGL Offset: Not Supported 00:18:50.392 Transport SGL Data Block: Not Supported 00:18:50.392 Replay Protected Memory Block: Not Supported 00:18:50.392 00:18:50.392 Firmware Slot Information 00:18:50.392 ========================= 00:18:50.392 Active slot: 1 00:18:50.392 Slot 1 Firmware Revision: 25.01 00:18:50.392 00:18:50.392 00:18:50.392 Commands Supported and Effects 00:18:50.392 ============================== 00:18:50.392 Admin Commands 00:18:50.392 -------------- 00:18:50.392 Get Log Page (02h): Supported 00:18:50.392 Identify (06h): Supported 00:18:50.392 Abort (08h): Supported 00:18:50.392 Set Features (09h): Supported 00:18:50.392 Get Features (0Ah): Supported 00:18:50.392 Asynchronous Event Request (0Ch): Supported 00:18:50.392 Keep Alive (18h): Supported 00:18:50.392 I/O Commands 00:18:50.392 ------------ 00:18:50.392 Flush (00h): Supported LBA-Change 00:18:50.392 Write (01h): Supported LBA-Change 00:18:50.392 Read (02h): Supported 00:18:50.392 Compare (05h): Supported 00:18:50.392 Write Zeroes (08h): Supported LBA-Change 00:18:50.392 Dataset Management (09h): Supported LBA-Change 00:18:50.392 Copy (19h): Supported LBA-Change 00:18:50.392 00:18:50.392 Error Log 00:18:50.392 ========= 00:18:50.392 00:18:50.392 Arbitration 00:18:50.392 =========== 00:18:50.392 Arbitration Burst: 1 00:18:50.392 00:18:50.392 Power Management 00:18:50.392 ================ 00:18:50.392 Number of Power States: 1 00:18:50.392 Current Power State: Power State #0 00:18:50.392 Power State #0: 00:18:50.392 Max Power: 0.00 W 00:18:50.392 Non-Operational State: Operational 00:18:50.392 Entry Latency: Not Reported 00:18:50.392 Exit Latency: Not Reported 00:18:50.392 Relative Read Throughput: 0 00:18:50.392 Relative Read Latency: 0 00:18:50.392 Relative Write Throughput: 0 00:18:50.392 Relative Write Latency: 0 00:18:50.392 Idle Power: Not Reported 00:18:50.392 Active Power: Not Reported 00:18:50.392 Non-Operational Permissive Mode: Not Supported 00:18:50.392 00:18:50.392 Health Information 00:18:50.392 ================== 00:18:50.392 Critical Warnings: 00:18:50.392 Available Spare Space: OK 00:18:50.392 Temperature: OK 00:18:50.392 Device Reliability: OK 00:18:50.392 Read Only: No 00:18:50.392 Volatile Memory Backup: OK 00:18:50.392 Current Temperature: 0 Kelvin (-273 Celsius) 00:18:50.392 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:18:50.392 Available Spare: 0%
[2024-10-09 10:59:10.216708] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:18:50.392 [2024-10-09 10:59:10.216717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:18:50.392 [2024-10-09 10:59:10.216744] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:18:50.392 [2024-10-09 10:59:10.216755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.392 [2024-10-09 10:59:10.216762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.392 [2024-10-09 10:59:10.216768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.392 [2024-10-09 10:59:10.216774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:50.392 [2024-10-09 10:59:10.216803] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:18:50.392 [2024-10-09 10:59:10.216813] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:18:50.392 [2024-10-09 10:59:10.217804] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:50.392 [2024-10-09 10:59:10.217845] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:18:50.392 [2024-10-09 10:59:10.217852] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:18:50.392 [2024-10-09 10:59:10.218811] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:18:50.392 [2024-10-09 10:59:10.218823] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:18:50.392 [2024-10-09 10:59:10.218888] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:18:50.392 [2024-10-09 10:59:10.220835] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000
00:18:50.392 Available Spare Threshold: 0% 00:18:50.392 Life Percentage Used: 0% 00:18:50.392 Data Units Read: 0 00:18:50.392 Data Units Written: 0 00:18:50.392 Host Read Commands: 0 00:18:50.392 Host Write Commands: 0 00:18:50.393 Controller Busy Time: 0 minutes 00:18:50.393 Power Cycles: 0 00:18:50.393 Power On Hours: 0 hours 00:18:50.393 Unsafe Shutdowns: 0 00:18:50.393 Unrecoverable Media Errors: 0 00:18:50.393 Lifetime Error Log Entries: 0 00:18:50.393 Warning Temperature Time: 0 minutes 00:18:50.393 Critical Temperature Time: 0 minutes 00:18:50.393 00:18:50.393 Number of Queues 00:18:50.393 ================ 00:18:50.393 Number of I/O Submission Queues: 127 00:18:50.393 Number of I/O Completion Queues: 127 00:18:50.393 00:18:50.393 Active Namespaces 00:18:50.393 ================= 00:18:50.393 Namespace ID:1 00:18:50.393 Error Recovery Timeout: Unlimited 00:18:50.393 Command Set Identifier: NVM (00h) 00:18:50.393 Deallocate: Supported 00:18:50.393 Deallocated/Unwritten Error: Not
Supported 00:18:50.393 Deallocated Read Value: Unknown 00:18:50.393 Deallocate in Write Zeroes: Not Supported 00:18:50.393 Deallocated Guard Field: 0xFFFF 00:18:50.393 Flush: Supported 00:18:50.393 Reservation: Supported 00:18:50.393 Namespace Sharing Capabilities: Multiple Controllers 00:18:50.393 Size (in LBAs): 131072 (0GiB) 00:18:50.393 Capacity (in LBAs): 131072 (0GiB) 00:18:50.393 Utilization (in LBAs): 131072 (0GiB) 00:18:50.393 NGUID: 3CA027B983B44E22A936D68E85C3632E 00:18:50.393 UUID: 3ca027b9-83b4-4e22-a936-d68e85c3632e 00:18:50.393 Thin Provisioning: Not Supported 00:18:50.393 Per-NS Atomic Units: Yes 00:18:50.393 Atomic Boundary Size (Normal): 0 00:18:50.393 Atomic Boundary Size (PFail): 0 00:18:50.393 Atomic Boundary Offset: 0 00:18:50.393 Maximum Single Source Range Length: 65535 00:18:50.393 Maximum Copy Length: 65535 00:18:50.393 Maximum Source Range Count: 1 00:18:50.393 NGUID/EUI64 Never Reused: No 00:18:50.393 Namespace Write Protected: No 00:18:50.393 Number of LBA Formats: 1 00:18:50.393 Current LBA Format: LBA Format #00 00:18:50.393 LBA Format #00: Data Size: 512 Metadata Size: 0 00:18:50.393 00:18:50.393 10:59:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:18:50.653 [2024-10-09 10:59:10.516760] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:55.934 Initializing NVMe Controllers 00:18:55.934 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:55.934 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:18:55.934 Initialization complete. Launching workers. 00:18:55.934 ======================================================== 00:18:55.934 Latency(us) 00:18:55.934 Device Information : IOPS MiB/s Average min max 00:18:55.934 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 40008.45 156.28 3199.20 840.63 8042.30 00:18:55.934 ======================================================== 00:18:55.934 Total : 40008.45 156.28 3199.20 840.63 8042.30 00:18:55.934 00:18:55.934 [2024-10-09 10:59:15.528275] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:55.934 10:59:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:18:55.934 [2024-10-09 10:59:15.807754] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:01.313 Initializing NVMe Controllers 00:19:01.313 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:19:01.313 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:19:01.313 Initialization complete. Launching workers. 
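Both perf passes use the same spdk_nvme_perf invocation shape and differ only in the -w workload; the write-side numbers follow below. A minimal sketch of the pattern, run from the spdk tree root (the glosses for -s and -g are assumptions; the EAL parameter dump later in this log shows -g mapping to --single-file-segments):

  # -s: DPDK hugepage memory in MB (assumed); -g: single-file DPDK memory segments
  # -q/-o/-w/-t/-c: queue depth, I/O size in bytes, workload, run seconds, core mask
  ./build/bin/spdk_nvme_perf \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
      -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2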
00:19:01.313 ======================================================== 00:19:01.313 Latency(us) 00:19:01.313 Device Information : IOPS MiB/s Average min max 00:19:01.313 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16021.00 62.58 7998.50 5754.33 14555.70 00:19:01.313 ======================================================== 00:19:01.313 Total : 16021.00 62.58 7998.50 5754.33 14555.70 00:19:01.313 00:19:01.313 [2024-10-09 10:59:20.832862] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:01.313 10:59:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:19:01.313 [2024-10-09 10:59:21.111294] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:06.595 [2024-10-09 10:59:26.168605] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:06.595 Initializing NVMe Controllers 00:19:06.595 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:19:06.595 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:19:06.595 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:19:06.595 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:19:06.595 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:19:06.595 Initialization complete. Launching workers. 00:19:06.595 Starting thread on core 2 00:19:06.595 Starting thread on core 3 00:19:06.595 Starting thread on core 1 00:19:06.595 10:59:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:19:06.595 [2024-10-09 10:59:26.546727] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:09.891 [2024-10-09 10:59:29.597776] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:09.891 Initializing NVMe Controllers 00:19:09.891 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:19:09.891 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:19:09.891 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:19:09.891 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:19:09.891 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:19:09.891 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:19:09.891 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:19:09.891 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:19:09.891 Initialization complete. Launching workers. 
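The arbitration config line above ends with -n 100000, so each worker issues 100000 I/Os and the secs/100000 ios column in the summary below is simply 100000 divided by the reported IO/s. One row can be sanity-checked with a one-liner (values taken from the summary):

  # core 0 reports 8284.33 IO/s; 100000 I/Os at that rate take ~12.07 s
  awk 'BEGIN { printf "%.2f secs/100000 ios\n", 100000 / 8284.33 }'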
00:19:09.891 Starting thread on core 1 with urgent priority queue 00:19:09.891 Starting thread on core 2 with urgent priority queue 00:19:09.891 Starting thread on core 3 with urgent priority queue 00:19:09.891 Starting thread on core 0 with urgent priority queue 00:19:09.891 SPDK bdev Controller (SPDK1 ) core 0: 8284.33 IO/s 12.07 secs/100000 ios 00:19:09.891 SPDK bdev Controller (SPDK1 ) core 1: 14767.33 IO/s 6.77 secs/100000 ios 00:19:09.891 SPDK bdev Controller (SPDK1 ) core 2: 8050.33 IO/s 12.42 secs/100000 ios 00:19:09.891 SPDK bdev Controller (SPDK1 ) core 3: 14460.67 IO/s 6.92 secs/100000 ios 00:19:09.891 ======================================================== 00:19:09.891 00:19:09.891 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:19:10.152 [2024-10-09 10:59:29.971780] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:10.152 Initializing NVMe Controllers 00:19:10.152 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:19:10.152 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:19:10.152 Namespace ID: 1 size: 0GB 00:19:10.152 Initialization complete. 00:19:10.152 INFO: using host memory buffer for IO 00:19:10.152 Hello world! 00:19:10.152 [2024-10-09 10:59:30.003913] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:10.152 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:19:10.414 [2024-10-09 10:59:30.369746] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:11.799 Initializing NVMe Controllers 00:19:11.799 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:19:11.799 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:19:11.799 Initialization complete. Launching workers. 
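The overhead tool launched above measures per-I/O software overhead and prints the submit/complete statistics and latency histograms that follow. A sketch of the invocation from the spdk tree root (the glosses for -H and -d are assumptions; -o and -t behave as in the perf runs):

  # -H: print per-I/O latency histograms (assumed); -d: DPDK memory size in MB (assumed)
  ./test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'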
00:19:11.799 submit (in ns) avg, min, max = 8048.4, 3914.1, 4011084.2 00:19:11.799 complete (in ns) avg, min, max = 17902.3, 2374.7, 4010502.0 00:19:11.799 00:19:11.799 Submit histogram 00:19:11.799 ================ 00:19:11.799 Range in us Cumulative Count 00:19:11.799 3.902 - 3.929: 0.4693% ( 89) 00:19:11.799 3.929 - 3.956: 3.4750% ( 570) 00:19:11.799 3.956 - 3.983: 10.6148% ( 1354) 00:19:11.799 3.983 - 4.009: 22.5796% ( 2269) 00:19:11.799 4.009 - 4.036: 35.5305% ( 2456) 00:19:11.799 4.036 - 4.063: 48.4655% ( 2453) 00:19:11.799 4.063 - 4.090: 65.3343% ( 3199) 00:19:11.799 4.090 - 4.116: 80.6475% ( 2904) 00:19:11.799 4.116 - 4.143: 90.7825% ( 1922) 00:19:11.799 4.143 - 4.170: 96.3615% ( 1058) 00:19:11.799 4.170 - 4.196: 98.5393% ( 413) 00:19:11.799 4.196 - 4.223: 99.1985% ( 125) 00:19:11.799 4.223 - 4.250: 99.3619% ( 31) 00:19:11.799 4.250 - 4.277: 99.4200% ( 11) 00:19:11.799 4.277 - 4.303: 99.4252% ( 1) 00:19:11.799 4.303 - 4.330: 99.4358% ( 2) 00:19:11.799 4.330 - 4.357: 99.4516% ( 3) 00:19:11.799 4.410 - 4.437: 99.4621% ( 2) 00:19:11.799 4.678 - 4.704: 99.4674% ( 1) 00:19:11.799 4.784 - 4.811: 99.4727% ( 1) 00:19:11.799 4.838 - 4.865: 99.4832% ( 2) 00:19:11.799 4.865 - 4.891: 99.4885% ( 1) 00:19:11.799 4.972 - 4.998: 99.4991% ( 2) 00:19:11.799 5.052 - 5.079: 99.5149% ( 3) 00:19:11.799 5.185 - 5.212: 99.5201% ( 1) 00:19:11.799 5.266 - 5.292: 99.5307% ( 2) 00:19:11.799 5.319 - 5.346: 99.5360% ( 1) 00:19:11.799 5.346 - 5.373: 99.5412% ( 1) 00:19:11.799 5.586 - 5.613: 99.5465% ( 1) 00:19:11.799 5.640 - 5.667: 99.5518% ( 1) 00:19:11.799 5.693 - 5.720: 99.5571% ( 1) 00:19:11.799 5.720 - 5.747: 99.5623% ( 1) 00:19:11.799 5.747 - 5.773: 99.5676% ( 1) 00:19:11.799 5.800 - 5.827: 99.5729% ( 1) 00:19:11.799 6.041 - 6.067: 99.5781% ( 1) 00:19:11.799 6.067 - 6.094: 99.5887% ( 2) 00:19:11.799 6.201 - 6.228: 99.5940% ( 1) 00:19:11.799 6.255 - 6.281: 99.5992% ( 1) 00:19:11.799 6.468 - 6.495: 99.6045% ( 1) 00:19:11.799 6.549 - 6.575: 99.6098% ( 1) 00:19:11.799 6.602 - 6.629: 99.6151% ( 1) 00:19:11.799 6.629 - 6.656: 99.6203% ( 1) 00:19:11.799 6.843 - 6.896: 99.6256% ( 1) 00:19:11.799 7.217 - 7.270: 99.6309% ( 1) 00:19:11.799 7.324 - 7.377: 99.6414% ( 2) 00:19:11.799 7.484 - 7.538: 99.6467% ( 1) 00:19:11.799 7.538 - 7.591: 99.6520% ( 1) 00:19:11.799 7.591 - 7.645: 99.6572% ( 1) 00:19:11.799 7.645 - 7.698: 99.6625% ( 1) 00:19:11.799 7.698 - 7.751: 99.6678% ( 1) 00:19:11.799 7.751 - 7.805: 99.6836% ( 3) 00:19:11.799 7.805 - 7.858: 99.7100% ( 5) 00:19:11.799 7.858 - 7.912: 99.7152% ( 1) 00:19:11.799 7.912 - 7.965: 99.7205% ( 1) 00:19:11.799 8.019 - 8.072: 99.7363% ( 3) 00:19:11.799 8.072 - 8.126: 99.7522% ( 3) 00:19:11.799 8.126 - 8.179: 99.7627% ( 2) 00:19:11.799 8.233 - 8.286: 99.7838% ( 4) 00:19:11.799 8.339 - 8.393: 99.7891% ( 1) 00:19:11.799 8.393 - 8.446: 99.7996% ( 2) 00:19:11.799 8.446 - 8.500: 99.8102% ( 2) 00:19:11.799 8.553 - 8.607: 99.8154% ( 1) 00:19:11.799 8.660 - 8.714: 99.8260% ( 2) 00:19:11.799 8.714 - 8.767: 99.8365% ( 2) 00:19:11.799 8.981 - 9.034: 99.8418% ( 1) 00:19:11.799 9.034 - 9.088: 99.8471% ( 1) 00:19:11.799 9.141 - 9.195: 99.8576% ( 2) 00:19:11.799 9.195 - 9.248: 99.8629% ( 1) 00:19:11.799 9.409 - 9.462: 99.8682% ( 1) 00:19:11.799 9.569 - 9.622: 99.8734% ( 1) 00:19:11.799 9.622 - 9.676: 99.8787% ( 1) 00:19:11.799 10.157 - 10.210: 99.8840% ( 1) 00:19:11.799 12.028 - 12.082: 99.8893% ( 1) 00:19:11.799 14.006 - 14.113: 99.8945% ( 1) 00:19:11.799 15.503 - 15.610: 99.8998% ( 1) 00:19:11.799 3229.723 - 3243.408: 99.9051% ( 1) 00:19:11.799 3996.098 - 4023.468: 100.0000% ( 18) 
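Note that the summary line above is in nanoseconds while the histogram buckets are in microseconds; converting the max submit latency places it in the final bucket of the table above, and the complete histogram that follows uses the same bucketing:

  # 4011084.2 ns ~= 4011.1 us, which falls in the 3996.098 - 4023.468 us bucket
  awk 'BEGIN { printf "%.1f us\n", 4011084.2 / 1000 }'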
00:19:11.799 00:19:11.799 Complete histogram 00:19:11.799 ================== 00:19:11.799 Range in us Cumulative Count 00:19:11.799 [2024-10-09 10:59:31.389177] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 2.366 - 2.379: 0.0105% ( 2) 00:19:11.799 2.379 - 2.392: 0.0686% ( 13) 00:19:11.799 2.392 - 2.406: 0.9861% ( 174) 00:19:11.799 2.406 - 2.419: 1.0599% ( 14) 00:19:11.799 2.419 - 2.432: 1.1126% ( 10) 00:19:11.799 2.432 - 2.446: 15.9407% ( 2812) 00:19:11.799 2.446 - 2.459: 52.8475% ( 6999) 00:19:11.799 2.459 - 2.472: 64.0529% ( 2125) 00:19:11.799 2.472 - 2.486: 74.6045% ( 2001) 00:19:11.799 2.486 - 2.499: 79.9040% ( 1005) 00:19:11.799 2.499 - 2.513: 82.0238% ( 402) 00:19:11.799 2.513 - 2.526: 86.8171% ( 909) 00:19:11.799 2.526 - 2.539: 92.3012% ( 1040) 00:19:11.799 2.539 - 2.553: 95.7551% ( 655) 00:19:11.799 2.553 - 2.566: 97.6956% ( 368) 00:19:11.799 2.566 - 2.579: 98.8821% ( 225) 00:19:11.799 2.579 - 2.593: 99.2038% ( 61) 00:19:11.799 2.593 - 2.606: 99.3092% ( 20) 00:19:11.799 2.606 - 2.619: 99.3145% ( 1) 00:19:11.799 2.619 - 2.633: 99.3198% ( 1) 00:19:11.799 2.713 - 2.726: 99.3250% ( 1) 00:19:11.799 2.913 - 2.927: 99.3356% ( 2) 00:19:11.799 2.967 - 2.980: 99.3409% ( 1) 00:19:11.799 2.980 - 2.994: 99.3514% ( 2) 00:19:11.799 3.007 - 3.020: 99.3567% ( 1) 00:19:11.799 3.074 - 3.087: 99.3619% ( 1) 00:19:11.799 3.141 - 3.154: 99.3672% ( 1) 00:19:11.799 3.181 - 3.194: 99.3725% ( 1) 00:19:11.799 5.346 - 5.373: 99.3778% ( 1) 00:19:11.799 5.399 - 5.426: 99.3830% ( 1) 00:19:11.799 5.506 - 5.533: 99.3883% ( 1) 00:19:11.799 5.747 - 5.773: 99.3989% ( 2) 00:19:11.799 5.773 - 5.800: 99.4041% ( 1) 00:19:11.799 5.800 - 5.827: 99.4094% ( 1) 00:19:11.799 5.827 - 5.854: 99.4147% ( 1) 00:19:11.799 5.907 - 5.934: 99.4252% ( 2) 00:19:11.799 5.934 - 5.961: 99.4305% ( 1) 00:19:11.799 5.961 - 5.987: 99.4358% ( 1) 00:19:11.799 5.987 - 6.014: 99.4410% ( 1) 00:19:11.799 6.041 - 6.067: 99.4463% ( 1) 00:19:11.799 6.067 - 6.094: 99.4516% ( 1) 00:19:11.799 6.148 - 6.174: 99.4569% ( 1) 00:19:11.799 6.255 - 6.281: 99.4621% ( 1) 00:19:11.799 6.308 - 6.335: 99.4674% ( 1) 00:19:11.799 6.362 - 6.388: 99.4727% ( 1) 00:19:11.799 6.388 - 6.415: 99.4832% ( 2) 00:19:11.799 6.522 - 6.549: 99.4885% ( 1) 00:19:11.799 6.629 - 6.656: 99.4991% ( 2) 00:19:11.799 6.656 - 6.682: 99.5043% ( 1) 00:19:11.799 6.789 - 6.816: 99.5096% ( 1) 00:19:11.799 6.896 - 6.950: 99.5201% ( 2) 00:19:11.799 6.950 - 7.003: 99.5307% ( 2) 00:19:11.799 7.003 - 7.056: 99.5360% ( 1) 00:19:11.799 7.056 - 7.110: 99.5465% ( 2) 00:19:11.799 7.110 - 7.163: 99.5571% ( 2) 00:19:11.799 7.163 - 7.217: 99.5781% ( 4) 00:19:11.799 7.270 - 7.324: 99.5834% ( 1) 00:19:11.799 7.324 - 7.377: 99.5887% ( 1) 00:19:11.799 7.538 - 7.591: 99.5940% ( 1) 00:19:11.799 8.072 - 8.126: 99.5992% ( 1) 00:19:11.799 8.126 - 8.179: 99.6045% ( 1) 00:19:11.799 11.387 - 11.440: 99.6098% ( 1) 00:19:11.799 42.553 - 42.766: 99.6151% ( 1) 00:19:11.799 3996.098 - 4023.468: 100.0000% ( 73) 00:19:11.799 00:19:11.799 10:59:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:19:11.799 10:59:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:19:11.799 10:59:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:19:11.799 10:59:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:19:11.799 10:59:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:19:11.799 [ 00:19:11.799 { 00:19:11.799 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:11.799 "subtype": "Discovery", 00:19:11.799 "listen_addresses": [], 00:19:11.799 "allow_any_host": true, 00:19:11.799 "hosts": [] 00:19:11.799 }, 00:19:11.799 { 00:19:11.799 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:19:11.799 "subtype": "NVMe", 00:19:11.799 "listen_addresses": [ 00:19:11.799 { 00:19:11.799 "trtype": "VFIOUSER", 00:19:11.799 "adrfam": "IPv4", 00:19:11.799 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:19:11.799 "trsvcid": "0" 00:19:11.799 } 00:19:11.799 ], 00:19:11.799 "allow_any_host": true, 00:19:11.799 "hosts": [], 00:19:11.799 "serial_number": "SPDK1", 00:19:11.799 "model_number": "SPDK bdev Controller", 00:19:11.799 "max_namespaces": 32, 00:19:11.799 "min_cntlid": 1, 00:19:11.799 "max_cntlid": 65519, 00:19:11.799 "namespaces": [ 00:19:11.799 { 00:19:11.799 "nsid": 1, 00:19:11.799 "bdev_name": "Malloc1", 00:19:11.799 "name": "Malloc1", 00:19:11.800 "nguid": "3CA027B983B44E22A936D68E85C3632E", 00:19:11.800 "uuid": "3ca027b9-83b4-4e22-a936-d68e85c3632e" 00:19:11.800 } 00:19:11.800 ] 00:19:11.800 }, 00:19:11.800 { 00:19:11.800 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:19:11.800 "subtype": "NVMe", 00:19:11.800 "listen_addresses": [ 00:19:11.800 { 00:19:11.800 "trtype": "VFIOUSER", 00:19:11.800 "adrfam": "IPv4", 00:19:11.800 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:19:11.800 "trsvcid": "0" 00:19:11.800 } 00:19:11.800 ], 00:19:11.800 "allow_any_host": true, 00:19:11.800 "hosts": [], 00:19:11.800 "serial_number": "SPDK2", 00:19:11.800 "model_number": "SPDK bdev Controller", 00:19:11.800 "max_namespaces": 32, 00:19:11.800 "min_cntlid": 1, 00:19:11.800 "max_cntlid": 65519, 00:19:11.800 "namespaces": [ 00:19:11.800 { 00:19:11.800 "nsid": 1, 00:19:11.800 "bdev_name": "Malloc2", 00:19:11.800 "name": "Malloc2", 00:19:11.800 "nguid": "B5BBF2F91028402690D97DE99DF6611B", 00:19:11.800 "uuid": "b5bbf2f9-1028-4026-90d9-7de99df6611b" 00:19:11.800 } 00:19:11.800 ] 00:19:11.800 } 00:19:11.800 ] 00:19:11.800 10:59:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:19:11.800 10:59:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:19:11.800 10:59:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1836705 00:19:11.800 10:59:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:19:11.800 10:59:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:19:11.800 10:59:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:11.800 10:59:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:19:11.800 10:59:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:19:11.800 10:59:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:19:11.800 10:59:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:19:12.060 Malloc3 00:19:12.060 10:59:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:19:12.060 [2024-10-09 10:59:31.896769] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:12.060 [2024-10-09 10:59:31.979139] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:12.060 10:59:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:19:12.060 Asynchronous Event Request test 00:19:12.060 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:19:12.060 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:19:12.060 Registering asynchronous event callbacks... 00:19:12.060 Starting namespace attribute notice tests for all controllers... 00:19:12.060 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:19:12.060 aer_cb - Changed Namespace 00:19:12.060 Cleaning up... 00:19:12.321 [ 00:19:12.321 { 00:19:12.321 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:12.321 "subtype": "Discovery", 00:19:12.321 "listen_addresses": [], 00:19:12.321 "allow_any_host": true, 00:19:12.321 "hosts": [] 00:19:12.321 }, 00:19:12.321 { 00:19:12.321 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:19:12.321 "subtype": "NVMe", 00:19:12.321 "listen_addresses": [ 00:19:12.321 { 00:19:12.321 "trtype": "VFIOUSER", 00:19:12.321 "adrfam": "IPv4", 00:19:12.321 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:19:12.321 "trsvcid": "0" 00:19:12.321 } 00:19:12.321 ], 00:19:12.321 "allow_any_host": true, 00:19:12.321 "hosts": [], 00:19:12.321 "serial_number": "SPDK1", 00:19:12.321 "model_number": "SPDK bdev Controller", 00:19:12.321 "max_namespaces": 32, 00:19:12.321 "min_cntlid": 1, 00:19:12.321 "max_cntlid": 65519, 00:19:12.321 "namespaces": [ 00:19:12.321 { 00:19:12.321 "nsid": 1, 00:19:12.321 "bdev_name": "Malloc1", 00:19:12.321 "name": "Malloc1", 00:19:12.321 "nguid": "3CA027B983B44E22A936D68E85C3632E", 00:19:12.321 "uuid": "3ca027b9-83b4-4e22-a936-d68e85c3632e" 00:19:12.321 }, 00:19:12.321 { 00:19:12.321 "nsid": 2, 00:19:12.321 "bdev_name": "Malloc3", 00:19:12.321 "name": "Malloc3", 00:19:12.321 "nguid": "DA528331C51C42609C9082B3A55055C8", 00:19:12.321 "uuid": "da528331-c51c-4260-9c90-82b3a55055c8" 00:19:12.321 } 00:19:12.321 ] 00:19:12.321 }, 00:19:12.321 { 00:19:12.321 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:19:12.321 "subtype": "NVMe", 00:19:12.321 "listen_addresses": [ 00:19:12.321 { 00:19:12.321 "trtype": "VFIOUSER", 00:19:12.321 "adrfam": "IPv4", 00:19:12.321 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:19:12.321 "trsvcid": "0" 00:19:12.321 } 00:19:12.321 ], 00:19:12.321 "allow_any_host": true, 00:19:12.321 "hosts": [], 00:19:12.321 "serial_number": "SPDK2", 00:19:12.321 "model_number": "SPDK bdev 
Controller", 00:19:12.321 "max_namespaces": 32, 00:19:12.321 "min_cntlid": 1, 00:19:12.321 "max_cntlid": 65519, 00:19:12.321 "namespaces": [ 00:19:12.321 { 00:19:12.321 "nsid": 1, 00:19:12.321 "bdev_name": "Malloc2", 00:19:12.321 "name": "Malloc2", 00:19:12.321 "nguid": "B5BBF2F91028402690D97DE99DF6611B", 00:19:12.321 "uuid": "b5bbf2f9-1028-4026-90d9-7de99df6611b" 00:19:12.321 } 00:19:12.321 ] 00:19:12.321 } 00:19:12.321 ] 00:19:12.321 10:59:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1836705 00:19:12.321 10:59:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:12.321 10:59:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:19:12.321 10:59:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:19:12.321 10:59:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:19:12.321 [2024-10-09 10:59:32.204009] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:19:12.321 [2024-10-09 10:59:32.204052] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1836718 ] 00:19:12.321 [2024-10-09 10:59:32.316799] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:19:12.584 [2024-10-09 10:59:32.336088] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:19:12.584 [2024-10-09 10:59:32.344650] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:19:12.584 [2024-10-09 10:59:32.344673] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f67f3f65000 00:19:12.584 [2024-10-09 10:59:32.345665] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:12.584 [2024-10-09 10:59:32.346653] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:12.584 [2024-10-09 10:59:32.347656] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:12.584 [2024-10-09 10:59:32.348662] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:19:12.584 [2024-10-09 10:59:32.349662] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:19:12.584 [2024-10-09 10:59:32.350664] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:12.584 [2024-10-09 10:59:32.351672] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:19:12.584 [2024-10-09 10:59:32.352674] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:12.584 [2024-10-09 10:59:32.353675] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:19:12.584 [2024-10-09 10:59:32.353685] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f67f2c68000 00:19:12.584 [2024-10-09 10:59:32.355014] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:19:12.584 [2024-10-09 10:59:32.371219] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:19:12.584 [2024-10-09 10:59:32.371244] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:19:12.584 [2024-10-09 10:59:32.376312] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:19:12.584 [2024-10-09 10:59:32.376355] nvme_pcie_common.c: 134:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:19:12.584 [2024-10-09 10:59:32.376437] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:19:12.584 [2024-10-09 10:59:32.376455] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:19:12.584 [2024-10-09 10:59:32.376461] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 
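The register read just below (offset 0x8, value 0x10300) is the NVMe VS (version) register; its major/minor/tertiary fields decode to the 1.3 spec version reported again in the identify dump further down:

  # VS = 0x10300 -> MJR=1, MNR=3, TER=0, i.e. NVMe 1.3
  printf '%d.%d.%d\n' $(( 0x10300 >> 16 )) $(( (0x10300 >> 8) & 0xff )) $(( 0x10300 & 0xff ))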
00:19:12.584 [2024-10-09 10:59:32.377320] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:19:12.584 [2024-10-09 10:59:32.377333] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:19:12.584 [2024-10-09 10:59:32.377340] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:19:12.584 [2024-10-09 10:59:32.378324] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:19:12.584 [2024-10-09 10:59:32.378334] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:19:12.584 [2024-10-09 10:59:32.378341] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:19:12.584 [2024-10-09 10:59:32.379332] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:19:12.584 [2024-10-09 10:59:32.379342] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:19:12.584 [2024-10-09 10:59:32.380338] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:19:12.584 [2024-10-09 10:59:32.380347] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:19:12.584 [2024-10-09 10:59:32.380352] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:19:12.584 [2024-10-09 10:59:32.380359] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:19:12.584 [2024-10-09 10:59:32.380469] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:19:12.584 [2024-10-09 10:59:32.380474] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:19:12.584 [2024-10-09 10:59:32.380479] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003a0000 00:19:12.584 [2024-10-09 10:59:32.381344] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x20000039e000 00:19:12.584 [2024-10-09 10:59:32.382345] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:19:12.584 [2024-10-09 10:59:32.383345] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:19:12.584 [2024-10-09 10:59:32.384349] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:12.584 [2024-10-09 10:59:32.384393] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for 
CSTS.RDY = 1 (timeout 15000 ms) 00:19:12.584 [2024-10-09 10:59:32.385357] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:19:12.584 [2024-10-09 10:59:32.385365] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:19:12.584 [2024-10-09 10:59:32.385370] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:19:12.584 [2024-10-09 10:59:32.385391] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:19:12.584 [2024-10-09 10:59:32.385399] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:19:12.584 [2024-10-09 10:59:32.385413] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002db000 len:4096 00:19:12.584 [2024-10-09 10:59:32.385418] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002db000 00:19:12.584 [2024-10-09 10:59:32.385422] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:12.584 [2024-10-09 10:59:32.385434] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002db000 PRP2 0x0 00:19:12.584 [2024-10-09 10:59:32.393477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:19:12.584 [2024-10-09 10:59:32.393490] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:19:12.584 [2024-10-09 10:59:32.393495] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:19:12.584 [2024-10-09 10:59:32.393500] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:19:12.584 [2024-10-09 10:59:32.393504] nvme_ctrlr.c:2095:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:19:12.584 [2024-10-09 10:59:32.393510] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:19:12.584 [2024-10-09 10:59:32.393515] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:19:12.584 [2024-10-09 10:59:32.393519] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:19:12.584 [2024-10-09 10:59:32.393527] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:19:12.584 [2024-10-09 10:59:32.393538] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:19:12.584 [2024-10-09 10:59:32.401475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:19:12.584 [2024-10-09 10:59:32.401488] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:19:12.585 [2024-10-09 10:59:32.401497] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:19:12.585 [2024-10-09 10:59:32.401505] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:19:12.585 [2024-10-09 10:59:32.401513] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:19:12.585 [2024-10-09 10:59:32.401518] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:19:12.585 [2024-10-09 10:59:32.401528] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:19:12.585 [2024-10-09 10:59:32.401537] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:19:12.585 [2024-10-09 10:59:32.409471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:19:12.585 [2024-10-09 10:59:32.409479] nvme_ctrlr.c:3034:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:19:12.585 [2024-10-09 10:59:32.409484] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:19:12.585 [2024-10-09 10:59:32.409491] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:19:12.585 [2024-10-09 10:59:32.409501] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:19:12.585 [2024-10-09 10:59:32.409510] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:19:12.585 [2024-10-09 10:59:32.417473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:19:12.585 [2024-10-09 10:59:32.417538] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:19:12.585 [2024-10-09 10:59:32.417546] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:19:12.585 [2024-10-09 10:59:32.417554] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002d9000 len:4096 00:19:12.585 [2024-10-09 10:59:32.417559] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002d9000 00:19:12.585 [2024-10-09 10:59:32.417562] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:12.585 [2024-10-09 10:59:32.417569] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002d9000 PRP2 0x0 00:19:12.585 [2024-10-09 10:59:32.425473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) 
qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:19:12.585 [2024-10-09 10:59:32.425484] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:19:12.585 [2024-10-09 10:59:32.425496] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:19:12.585 [2024-10-09 10:59:32.425504] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:19:12.585 [2024-10-09 10:59:32.425511] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002db000 len:4096 00:19:12.585 [2024-10-09 10:59:32.425515] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002db000 00:19:12.585 [2024-10-09 10:59:32.425519] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:12.585 [2024-10-09 10:59:32.425525] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002db000 PRP2 0x0 00:19:12.585 [2024-10-09 10:59:32.433474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:19:12.585 [2024-10-09 10:59:32.433489] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:19:12.585 [2024-10-09 10:59:32.433497] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:19:12.585 [2024-10-09 10:59:32.433504] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002db000 len:4096 00:19:12.585 [2024-10-09 10:59:32.433508] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002db000 00:19:12.585 [2024-10-09 10:59:32.433512] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:12.585 [2024-10-09 10:59:32.433518] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002db000 PRP2 0x0 00:19:12.585 [2024-10-09 10:59:32.441474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:19:12.585 [2024-10-09 10:59:32.441486] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:19:12.585 [2024-10-09 10:59:32.441495] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:19:12.585 [2024-10-09 10:59:32.441503] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:19:12.585 [2024-10-09 10:59:32.441509] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:19:12.585 [2024-10-09 10:59:32.441514] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:19:12.585 [2024-10-09 10:59:32.441520] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:19:12.585 [2024-10-09 10:59:32.441525] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:19:12.585 [2024-10-09 10:59:32.441529] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:19:12.585 [2024-10-09 10:59:32.441534] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:19:12.585 [2024-10-09 10:59:32.441551] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:19:12.585 [2024-10-09 10:59:32.449471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:19:12.585 [2024-10-09 10:59:32.449486] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:19:12.585 [2024-10-09 10:59:32.457472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:19:12.585 [2024-10-09 10:59:32.457486] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:19:12.585 [2024-10-09 10:59:32.465477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:19:12.585 [2024-10-09 10:59:32.465490] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:19:12.585 [2024-10-09 10:59:32.473472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:19:12.585 [2024-10-09 10:59:32.473491] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002d6000 len:8192 00:19:12.585 [2024-10-09 10:59:32.473496] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002d6000 00:19:12.585 [2024-10-09 10:59:32.473500] nvme_pcie_common.c:1241:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002d7000 00:19:12.585 [2024-10-09 10:59:32.473503] nvme_pcie_common.c:1257:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002d7000 00:19:12.585 [2024-10-09 10:59:32.473507] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:19:12.585 [2024-10-09 10:59:32.473513] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002d6000 PRP2 0x2000002d7000 00:19:12.585 [2024-10-09 10:59:32.473520] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002dc000 len:512 00:19:12.585 [2024-10-09 10:59:32.473525] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002dc000 00:19:12.585 [2024-10-09 10:59:32.473528] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:12.585 [2024-10-09 10:59:32.473534] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002dc000 PRP2 0x0 00:19:12.585 [2024-10-09 10:59:32.473541] 
nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002db000 len:512 00:19:12.585 [2024-10-09 10:59:32.473548] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002db000 00:19:12.585 [2024-10-09 10:59:32.473551] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:12.585 [2024-10-09 10:59:32.473557] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002db000 PRP2 0x0 00:19:12.585 [2024-10-09 10:59:32.473565] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002d4000 len:4096 00:19:12.585 [2024-10-09 10:59:32.473569] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002d4000 00:19:12.585 [2024-10-09 10:59:32.473573] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:12.585 [2024-10-09 10:59:32.473579] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002d4000 PRP2 0x0 00:19:12.585 [2024-10-09 10:59:32.481474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:19:12.585 [2024-10-09 10:59:32.481489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:19:12.585 [2024-10-09 10:59:32.481500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:19:12.585 [2024-10-09 10:59:32.481507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:19:12.585 ===================================================== 00:19:12.585 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:12.585 ===================================================== 00:19:12.585 Controller Capabilities/Features 00:19:12.585 ================================ 00:19:12.585 Vendor ID: 4e58 00:19:12.585 Subsystem Vendor ID: 4e58 00:19:12.585 Serial Number: SPDK2 00:19:12.585 Model Number: SPDK bdev Controller 00:19:12.585 Firmware Version: 25.01 00:19:12.585 Recommended Arb Burst: 6 00:19:12.585 IEEE OUI Identifier: 8d 6b 50 00:19:12.585 Multi-path I/O 00:19:12.585 May have multiple subsystem ports: Yes 00:19:12.585 May have multiple controllers: Yes 00:19:12.585 Associated with SR-IOV VF: No 00:19:12.585 Max Data Transfer Size: 131072 00:19:12.585 Max Number of Namespaces: 32 00:19:12.585 Max Number of I/O Queues: 127 00:19:12.585 NVMe Specification Version (VS): 1.3 00:19:12.585 NVMe Specification Version (Identify): 1.3 00:19:12.585 Maximum Queue Entries: 256 00:19:12.585 Contiguous Queues Required: Yes 00:19:12.585 Arbitration Mechanisms Supported 00:19:12.585 Weighted Round Robin: Not Supported 00:19:12.585 Vendor Specific: Not Supported 00:19:12.585 Reset Timeout: 15000 ms 00:19:12.585 Doorbell Stride: 4 bytes 00:19:12.585 NVM Subsystem Reset: Not Supported 00:19:12.585 Command Sets Supported 00:19:12.586 NVM Command Set: Supported 00:19:12.586 Boot Partition: Not Supported 00:19:12.586 Memory Page Size Minimum: 4096 bytes 00:19:12.586 Memory Page Size Maximum: 4096 bytes 00:19:12.586 Persistent Memory Region: Not Supported 00:19:12.586 Optional Asynchronous Events Supported 00:19:12.586 Namespace Attribute Notices: 
Supported 00:19:12.586 Firmware Activation Notices: Not Supported 00:19:12.586 ANA Change Notices: Not Supported 00:19:12.586 PLE Aggregate Log Change Notices: Not Supported 00:19:12.586 LBA Status Info Alert Notices: Not Supported 00:19:12.586 EGE Aggregate Log Change Notices: Not Supported 00:19:12.586 Normal NVM Subsystem Shutdown event: Not Supported 00:19:12.586 Zone Descriptor Change Notices: Not Supported 00:19:12.586 Discovery Log Change Notices: Not Supported 00:19:12.586 Controller Attributes 00:19:12.586 128-bit Host Identifier: Supported 00:19:12.586 Non-Operational Permissive Mode: Not Supported 00:19:12.586 NVM Sets: Not Supported 00:19:12.586 Read Recovery Levels: Not Supported 00:19:12.586 Endurance Groups: Not Supported 00:19:12.586 Predictable Latency Mode: Not Supported 00:19:12.586 Traffic Based Keep Alive: Not Supported 00:19:12.586 Namespace Granularity: Not Supported 00:19:12.586 SQ Associations: Not Supported 00:19:12.586 UUID List: Not Supported 00:19:12.586 Multi-Domain Subsystem: Not Supported 00:19:12.586 Fixed Capacity Management: Not Supported 00:19:12.586 Variable Capacity Management: Not Supported 00:19:12.586 Delete Endurance Group: Not Supported 00:19:12.586 Delete NVM Set: Not Supported 00:19:12.586 Extended LBA Formats Supported: Not Supported 00:19:12.586 Flexible Data Placement Supported: Not Supported 00:19:12.586 00:19:12.586 Controller Memory Buffer Support 00:19:12.586 ================================ 00:19:12.586 Supported: No 00:19:12.586 00:19:12.586 Persistent Memory Region Support 00:19:12.586 ================================ 00:19:12.586 Supported: No 00:19:12.586 00:19:12.586 Admin Command Set Attributes 00:19:12.586 ============================ 00:19:12.586 Security Send/Receive: Not Supported 00:19:12.586 Format NVM: Not Supported 00:19:12.586 Firmware Activate/Download: Not Supported 00:19:12.586 Namespace Management: Not Supported 00:19:12.586 Device Self-Test: Not Supported 00:19:12.586 Directives: Not Supported 00:19:12.586 NVMe-MI: Not Supported 00:19:12.586 Virtualization Management: Not Supported 00:19:12.586 Doorbell Buffer Config: Not Supported 00:19:12.586 Get LBA Status Capability: Not Supported 00:19:12.586 Command & Feature Lockdown Capability: Not Supported 00:19:12.586 Abort Command Limit: 4 00:19:12.586 Async Event Request Limit: 4 00:19:12.586 Number of Firmware Slots: N/A 00:19:12.586 Firmware Slot 1 Read-Only: N/A 00:19:12.586 Firmware Activation Without Reset: N/A 00:19:12.586 Multiple Update Detection Support: N/A 00:19:12.586 Firmware Update Granularity: No Information Provided 00:19:12.586 Per-Namespace SMART Log: No 00:19:12.586 Asymmetric Namespace Access Log Page: Not Supported 00:19:12.586 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:19:12.586 Command Effects Log Page: Supported 00:19:12.586 Get Log Page Extended Data: Supported 00:19:12.586 Telemetry Log Pages: Not Supported 00:19:12.586 Persistent Event Log Pages: Not Supported 00:19:12.586 Supported Log Pages Log Page: May Support 00:19:12.586 Commands Supported & Effects Log Page: Not Supported 00:19:12.586 Feature Identifiers & Effects Log Page: May Support 00:19:12.586 NVMe-MI Commands & Effects Log Page: May Support 00:19:12.586 Data Area 4 for Telemetry Log: Not Supported 00:19:12.586 Error Log Page Entries Supported: 128 00:19:12.586 Keep Alive: Supported 00:19:12.586 Keep Alive Granularity: 10000 ms 00:19:12.586 00:19:12.586 NVM Command Set Attributes 00:19:12.586 ========================== 00:19:12.586 Submission Queue Entry Size 00:19:12.586 Max: 64
00:19:12.586 Min: 64 00:19:12.586 Completion Queue Entry Size 00:19:12.586 Max: 16 00:19:12.586 Min: 16 00:19:12.586 Number of Namespaces: 32 00:19:12.586 Compare Command: Supported 00:19:12.586 Write Uncorrectable Command: Not Supported 00:19:12.586 Dataset Management Command: Supported 00:19:12.586 Write Zeroes Command: Supported 00:19:12.586 Set Features Save Field: Not Supported 00:19:12.586 Reservations: Not Supported 00:19:12.586 Timestamp: Not Supported 00:19:12.586 Copy: Supported 00:19:12.586 Volatile Write Cache: Present 00:19:12.586 Atomic Write Unit (Normal): 1 00:19:12.586 Atomic Write Unit (PFail): 1 00:19:12.586 Atomic Compare & Write Unit: 1 00:19:12.586 Fused Compare & Write: Supported 00:19:12.586 Scatter-Gather List 00:19:12.586 SGL Command Set: Supported (Dword aligned) 00:19:12.586 SGL Keyed: Not Supported 00:19:12.586 SGL Bit Bucket Descriptor: Not Supported 00:19:12.586 SGL Metadata Pointer: Not Supported 00:19:12.586 Oversized SGL: Not Supported 00:19:12.586 SGL Metadata Address: Not Supported 00:19:12.586 SGL Offset: Not Supported 00:19:12.586 Transport SGL Data Block: Not Supported 00:19:12.586 Replay Protected Memory Block: Not Supported 00:19:12.586 00:19:12.586 Firmware Slot Information 00:19:12.586 ========================= 00:19:12.586 Active slot: 1 00:19:12.586 Slot 1 Firmware Revision: 25.01 00:19:12.586 00:19:12.586 00:19:12.586 Commands Supported and Effects 00:19:12.586 ============================== 00:19:12.586 Admin Commands 00:19:12.586 -------------- 00:19:12.586 Get Log Page (02h): Supported 00:19:12.586 Identify (06h): Supported 00:19:12.586 Abort (08h): Supported 00:19:12.586 Set Features (09h): Supported 00:19:12.586 Get Features (0Ah): Supported 00:19:12.586 Asynchronous Event Request (0Ch): Supported 00:19:12.586 Keep Alive (18h): Supported 00:19:12.586 I/O Commands 00:19:12.586 ------------ 00:19:12.586 Flush (00h): Supported LBA-Change 00:19:12.586 Write (01h): Supported LBA-Change 00:19:12.586 Read (02h): Supported 00:19:12.586 Compare (05h): Supported 00:19:12.586 Write Zeroes (08h): Supported LBA-Change 00:19:12.586 Dataset Management (09h): Supported LBA-Change 00:19:12.586 Copy (19h): Supported LBA-Change 00:19:12.586 00:19:12.586 Error Log 00:19:12.586 ========= 00:19:12.586 00:19:12.586 Arbitration 00:19:12.586 =========== 00:19:12.586 Arbitration Burst: 1 00:19:12.586 00:19:12.586 Power Management 00:19:12.586 ================ 00:19:12.586 Number of Power States: 1 00:19:12.586 Current Power State: Power State #0 00:19:12.586 Power State #0: 00:19:12.586 Max Power: 0.00 W 00:19:12.586 Non-Operational State: Operational 00:19:12.586 Entry Latency: Not Reported 00:19:12.586 Exit Latency: Not Reported 00:19:12.586 Relative Read Throughput: 0 00:19:12.586 Relative Read Latency: 0 00:19:12.586 Relative Write Throughput: 0 00:19:12.586 Relative Write Latency: 0 00:19:12.586 Idle Power: Not Reported 00:19:12.586 Active Power: Not Reported 00:19:12.586 Non-Operational Permissive Mode: Not Supported 00:19:12.586 00:19:12.586 Health Information 00:19:12.586 ================== 00:19:12.586 Critical Warnings: 00:19:12.586 Available Spare Space: OK 00:19:12.586 Temperature: OK 00:19:12.586 Device Reliability: OK 00:19:12.586 Read Only: No 00:19:12.586 Volatile Memory Backup: OK 00:19:12.586 Current Temperature: 0 Kelvin (-273 Celsius) 00:19:12.586 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:19:12.586 Available Spare: 0% 00:19:12.586 Available Sp[2024-10-09 10:59:32.481610] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET 
FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:19:12.586 [2024-10-09 10:59:32.489474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:19:12.586 [2024-10-09 10:59:32.489506] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:19:12.586 [2024-10-09 10:59:32.489515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.586 [2024-10-09 10:59:32.489522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.586 [2024-10-09 10:59:32.489529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.586 [2024-10-09 10:59:32.489535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.586 [2024-10-09 10:59:32.489586] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:19:12.586 [2024-10-09 10:59:32.489597] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:19:12.586 [2024-10-09 10:59:32.490588] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:12.586 [2024-10-09 10:59:32.490639] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:19:12.586 [2024-10-09 10:59:32.490646] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:19:12.586 [2024-10-09 10:59:32.491593] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:19:12.586 [2024-10-09 10:59:32.491606] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:19:12.586 [2024-10-09 10:59:32.491660] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:19:12.586 [2024-10-09 10:59:32.493038] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:19:12.586 are Threshold: 0% 00:19:12.586 Life Percentage Used: 0% 00:19:12.586 Data Units Read: 0 00:19:12.586 Data Units Written: 0 00:19:12.586 Host Read Commands: 0 00:19:12.587 Host Write Commands: 0 00:19:12.587 Controller Busy Time: 0 minutes 00:19:12.587 Power Cycles: 0 00:19:12.587 Power On Hours: 0 hours 00:19:12.587 Unsafe Shutdowns: 0 00:19:12.587 Unrecoverable Media Errors: 0 00:19:12.587 Lifetime Error Log Entries: 0 00:19:12.587 Warning Temperature Time: 0 minutes 00:19:12.587 Critical Temperature Time: 0 minutes 00:19:12.587 00:19:12.587 Number of Queues 00:19:12.587 ================ 00:19:12.587 Number of I/O Submission Queues: 127 00:19:12.587 Number of I/O Completion Queues: 127 00:19:12.587 00:19:12.587 Active Namespaces 00:19:12.587 ================= 00:19:12.587 Namespace ID:1 00:19:12.587 Error Recovery Timeout: Unlimited 00:19:12.587 Command Set Identifier: NVM (00h) 00:19:12.587 Deallocate: Supported 00:19:12.587 Deallocated/Unwritten Error: Not 
Supported 00:19:12.587 Deallocated Read Value: Unknown 00:19:12.587 Deallocate in Write Zeroes: Not Supported 00:19:12.587 Deallocated Guard Field: 0xFFFF 00:19:12.587 Flush: Supported 00:19:12.587 Reservation: Supported 00:19:12.587 Namespace Sharing Capabilities: Multiple Controllers 00:19:12.587 Size (in LBAs): 131072 (0GiB) 00:19:12.587 Capacity (in LBAs): 131072 (0GiB) 00:19:12.587 Utilization (in LBAs): 131072 (0GiB) 00:19:12.587 NGUID: B5BBF2F91028402690D97DE99DF6611B 00:19:12.587 UUID: b5bbf2f9-1028-4026-90d9-7de99df6611b 00:19:12.587 Thin Provisioning: Not Supported 00:19:12.587 Per-NS Atomic Units: Yes 00:19:12.587 Atomic Boundary Size (Normal): 0 00:19:12.587 Atomic Boundary Size (PFail): 0 00:19:12.587 Atomic Boundary Offset: 0 00:19:12.587 Maximum Single Source Range Length: 65535 00:19:12.587 Maximum Copy Length: 65535 00:19:12.587 Maximum Source Range Count: 1 00:19:12.587 NGUID/EUI64 Never Reused: No 00:19:12.587 Namespace Write Protected: No 00:19:12.587 Number of LBA Formats: 1 00:19:12.587 Current LBA Format: LBA Format #00 00:19:12.587 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:12.587 00:19:12.587 10:59:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:19:12.848 [2024-10-09 10:59:32.775815] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:18.132 Initializing NVMe Controllers 00:19:18.132 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:18.132 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:19:18.132 Initialization complete. Launching workers. 00:19:18.132 ======================================================== 00:19:18.132 Latency(us) 00:19:18.132 Device Information : IOPS MiB/s Average min max 00:19:18.132 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 40055.18 156.47 3195.87 840.13 6956.43 00:19:18.132 ======================================================== 00:19:18.132 Total : 40055.18 156.47 3195.87 840.13 6956.43 00:19:18.132 00:19:18.132 [2024-10-09 10:59:37.865653] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:18.132 10:59:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:19:18.392 [2024-10-09 10:59:38.144894] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:23.674 Initializing NVMe Controllers 00:19:23.674 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:23.674 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:19:23.674 Initialization complete. Launching workers. 
00:19:23.674 ======================================================== 00:19:23.674 Latency(us) 00:19:23.674 Device Information : IOPS MiB/s Average min max 00:19:23.674 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 32191.20 125.75 3978.25 1109.16 10570.03 00:19:23.674 ======================================================== 00:19:23.674 Total : 32191.20 125.75 3978.25 1109.16 10570.03 00:19:23.674 00:19:23.674 [2024-10-09 10:59:43.153814] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:23.674 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:19:23.674 [2024-10-09 10:59:43.446613] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:28.990 [2024-10-09 10:59:48.569542] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:28.990 Initializing NVMe Controllers 00:19:28.990 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:28.990 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:28.990 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:19:28.990 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:19:28.990 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:19:28.990 Initialization complete. Launching workers. 00:19:28.990 Starting thread on core 2 00:19:28.990 Starting thread on core 3 00:19:28.990 Starting thread on core 1 00:19:28.990 10:59:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:19:28.990 [2024-10-09 10:59:48.935762] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:32.281 [2024-10-09 10:59:52.162595] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:32.281 Initializing NVMe Controllers 00:19:32.281 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:19:32.281 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:19:32.281 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:19:32.281 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:19:32.281 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:19:32.281 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:19:32.281 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:19:32.281 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:19:32.281 Initialization complete. Launching workers. 
00:19:32.281 Starting thread on core 1 with urgent priority queue 00:19:32.281 Starting thread on core 2 with urgent priority queue 00:19:32.281 Starting thread on core 3 with urgent priority queue 00:19:32.281 Starting thread on core 0 with urgent priority queue 00:19:32.281 SPDK bdev Controller (SPDK2 ) core 0: 2577.00 IO/s 38.80 secs/100000 ios 00:19:32.281 SPDK bdev Controller (SPDK2 ) core 1: 3190.67 IO/s 31.34 secs/100000 ios 00:19:32.281 SPDK bdev Controller (SPDK2 ) core 2: 2377.67 IO/s 42.06 secs/100000 ios 00:19:32.281 SPDK bdev Controller (SPDK2 ) core 3: 2457.67 IO/s 40.69 secs/100000 ios 00:19:32.281 ======================================================== 00:19:32.281 00:19:32.281 10:59:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:19:32.541 [2024-10-09 10:59:52.536796] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:32.801 Initializing NVMe Controllers 00:19:32.801 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:19:32.801 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:19:32.801 Namespace ID: 1 size: 0GB 00:19:32.801 Initialization complete. 00:19:32.801 INFO: using host memory buffer for IO 00:19:32.801 Hello world! 00:19:32.801 [2024-10-09 10:59:52.545830] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:32.801 10:59:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:19:33.061 [2024-10-09 10:59:52.905711] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:34.001 Initializing NVMe Controllers 00:19:34.001 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:19:34.001 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:19:34.001 Initialization complete. Launching workers. 
00:19:34.001 submit (in ns) avg, min, max = 8822.0, 3915.8, 4011432.5 00:19:34.001 complete (in ns) avg, min, max = 17077.7, 2378.0, 7009988.3 00:19:34.001 00:19:34.001 Submit histogram 00:19:34.001 ================ 00:19:34.001 Range in us Cumulative Count 00:19:34.001 3.902 - 3.929: 0.5626% ( 107) 00:19:34.001 3.929 - 3.956: 3.7014% ( 597) 00:19:34.001 3.956 - 3.983: 10.8149% ( 1353) 00:19:34.001 3.983 - 4.009: 21.5195% ( 2036) 00:19:34.001 4.009 - 4.036: 34.1798% ( 2408) 00:19:34.001 4.036 - 4.063: 47.0715% ( 2452) 00:19:34.001 4.063 - 4.090: 62.6341% ( 2960) 00:19:34.001 4.090 - 4.116: 78.5962% ( 3036) 00:19:34.001 4.116 - 4.143: 90.1893% ( 2205) 00:19:34.001 4.143 - 4.170: 96.2303% ( 1149) 00:19:34.001 4.170 - 4.196: 98.6698% ( 464) 00:19:34.001 4.196 - 4.223: 99.2482% ( 110) 00:19:34.001 4.223 - 4.250: 99.3691% ( 23) 00:19:34.001 4.250 - 4.277: 99.3796% ( 2) 00:19:34.001 4.277 - 4.303: 99.3849% ( 1) 00:19:34.001 4.303 - 4.330: 99.3954% ( 2) 00:19:34.001 4.357 - 4.384: 99.4006% ( 1) 00:19:34.001 4.384 - 4.410: 99.4059% ( 1) 00:19:34.001 4.597 - 4.624: 99.4111% ( 1) 00:19:34.001 4.624 - 4.651: 99.4164% ( 1) 00:19:34.001 4.678 - 4.704: 99.4217% ( 1) 00:19:34.001 4.784 - 4.811: 99.4269% ( 1) 00:19:34.001 4.865 - 4.891: 99.4322% ( 1) 00:19:34.001 4.891 - 4.918: 99.4374% ( 1) 00:19:34.001 4.972 - 4.998: 99.4479% ( 2) 00:19:34.001 4.998 - 5.025: 99.4532% ( 1) 00:19:34.001 5.132 - 5.159: 99.4585% ( 1) 00:19:34.001 5.399 - 5.426: 99.4637% ( 1) 00:19:34.001 5.426 - 5.453: 99.4690% ( 1) 00:19:34.001 5.453 - 5.479: 99.4848% ( 3) 00:19:34.001 6.041 - 6.067: 99.4953% ( 2) 00:19:34.001 6.067 - 6.094: 99.5216% ( 5) 00:19:34.001 6.094 - 6.121: 99.5268% ( 1) 00:19:34.001 6.148 - 6.174: 99.5321% ( 1) 00:19:34.001 6.201 - 6.228: 99.5373% ( 1) 00:19:34.001 6.335 - 6.362: 99.5426% ( 1) 00:19:34.001 6.495 - 6.522: 99.5478% ( 1) 00:19:34.001 6.549 - 6.575: 99.5531% ( 1) 00:19:34.001 6.896 - 6.950: 99.5584% ( 1) 00:19:34.001 7.217 - 7.270: 99.5636% ( 1) 00:19:34.001 7.324 - 7.377: 99.5741% ( 2) 00:19:34.001 7.431 - 7.484: 99.5846% ( 2) 00:19:34.001 7.591 - 7.645: 99.5899% ( 1) 00:19:34.001 7.645 - 7.698: 99.6004% ( 2) 00:19:34.001 7.751 - 7.805: 99.6057% ( 1) 00:19:34.001 7.805 - 7.858: 99.6109% ( 1) 00:19:34.001 7.912 - 7.965: 99.6162% ( 1) 00:19:34.001 7.965 - 8.019: 99.6320% ( 3) 00:19:34.001 8.019 - 8.072: 99.6425% ( 2) 00:19:34.001 8.126 - 8.179: 99.6530% ( 2) 00:19:34.001 8.179 - 8.233: 99.6583% ( 1) 00:19:34.001 8.286 - 8.339: 99.6793% ( 4) 00:19:34.001 8.393 - 8.446: 99.6845% ( 1) 00:19:34.001 8.446 - 8.500: 99.7003% ( 3) 00:19:34.001 8.500 - 8.553: 99.7108% ( 2) 00:19:34.001 8.553 - 8.607: 99.7161% ( 1) 00:19:34.001 8.607 - 8.660: 99.7266% ( 2) 00:19:34.001 8.660 - 8.714: 99.7319% ( 1) 00:19:34.001 8.714 - 8.767: 99.7371% ( 1) 00:19:34.001 8.821 - 8.874: 99.7424% ( 1) 00:19:34.001 8.874 - 8.927: 99.7634% ( 4) 00:19:34.001 8.927 - 8.981: 99.7739% ( 2) 00:19:34.001 8.981 - 9.034: 99.8002% ( 5) 00:19:34.001 9.034 - 9.088: 99.8055% ( 1) 00:19:34.001 9.195 - 9.248: 99.8107% ( 1) 00:19:34.001 9.248 - 9.302: 99.8160% ( 1) 00:19:34.001 9.355 - 9.409: 99.8212% ( 1) 00:19:34.001 9.462 - 9.516: 99.8265% ( 1) 00:19:34.001 9.516 - 9.569: 99.8370% ( 2) 00:19:34.001 9.622 - 9.676: 99.8423% ( 1) 00:19:34.001 9.943 - 9.997: 99.8528% ( 2) 00:19:34.001 10.050 - 10.104: 99.8633% ( 2) 00:19:34.001 15.503 - 15.610: 99.8686% ( 1) 00:19:34.001 15.824 - 15.931: 99.8738% ( 1) 00:19:34.001 17.213 - 17.320: 99.8791% ( 1) 00:19:34.001 2039.105 - 2052.790: 99.8843% ( 1) 00:19:34.001 3996.098 - 4023.468: 100.0000% ( 22) 
00:19:34.001 00:19:34.001 Complete histogram 00:19:34.001 ================== 00:19:34.001 Range in us Cumulative Count 00:19:34.261 2.366 - [2024-10-09 10:59:54.003155] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:34.261 2.379: 0.0053% ( 1) 00:19:34.261 2.379 - 2.392: 0.0105% ( 1) 00:19:34.261 2.392 - 2.406: 0.4732% ( 88) 00:19:34.261 2.406 - 2.419: 1.2250% ( 143) 00:19:34.261 2.419 - 2.432: 1.3249% ( 19) 00:19:34.261 2.432 - 2.446: 1.4248% ( 19) 00:19:34.261 2.446 - 2.459: 6.1777% ( 904) 00:19:34.261 2.459 - 2.472: 50.7413% ( 8476) 00:19:34.261 2.472 - 2.486: 61.5510% ( 2056) 00:19:34.261 2.486 - 2.499: 73.5962% ( 2291) 00:19:34.261 2.499 - 2.513: 79.2008% ( 1066) 00:19:34.261 2.513 - 2.526: 81.6772% ( 471) 00:19:34.261 2.526 - 2.539: 86.0726% ( 836) 00:19:34.261 2.539 - 2.553: 91.8665% ( 1102) 00:19:34.261 2.553 - 2.566: 95.7518% ( 739) 00:19:34.261 2.566 - 2.579: 97.7182% ( 374) 00:19:34.261 2.579 - 2.593: 98.8013% ( 206) 00:19:34.261 2.593 - 2.606: 99.1851% ( 73) 00:19:34.261 2.606 - 2.619: 99.2639% ( 15) 00:19:34.261 2.619 - 2.633: 99.2902% ( 5) 00:19:34.261 2.633 - 2.646: 99.2955% ( 1) 00:19:34.261 2.646 - 2.660: 99.3007% ( 1) 00:19:34.261 2.700 - 2.713: 99.3060% ( 1) 00:19:34.261 2.927 - 2.940: 99.3165% ( 2) 00:19:34.261 3.060 - 3.074: 99.3270% ( 2) 00:19:34.261 3.087 - 3.101: 99.3323% ( 1) 00:19:34.261 3.101 - 3.114: 99.3375% ( 1) 00:19:34.261 3.114 - 3.127: 99.3533% ( 3) 00:19:34.261 4.704 - 4.731: 99.3638% ( 2) 00:19:34.261 4.945 - 4.972: 99.3691% ( 1) 00:19:34.261 4.972 - 4.998: 99.3743% ( 1) 00:19:34.261 5.426 - 5.453: 99.3796% ( 1) 00:19:34.261 5.560 - 5.586: 99.3849% ( 1) 00:19:34.261 5.667 - 5.693: 99.3954% ( 2) 00:19:34.261 5.827 - 5.854: 99.4006% ( 1) 00:19:34.261 5.907 - 5.934: 99.4059% ( 1) 00:19:34.261 5.934 - 5.961: 99.4111% ( 1) 00:19:34.261 5.961 - 5.987: 99.4164% ( 1) 00:19:34.261 6.014 - 6.041: 99.4217% ( 1) 00:19:34.261 6.067 - 6.094: 99.4322% ( 2) 00:19:34.261 6.121 - 6.148: 99.4374% ( 1) 00:19:34.261 6.522 - 6.549: 99.4427% ( 1) 00:19:34.261 6.602 - 6.629: 99.4479% ( 1) 00:19:34.261 6.629 - 6.656: 99.4532% ( 1) 00:19:34.261 6.656 - 6.682: 99.4637% ( 2) 00:19:34.261 6.709 - 6.736: 99.4690% ( 1) 00:19:34.261 6.736 - 6.762: 99.4795% ( 2) 00:19:34.261 6.896 - 6.950: 99.4900% ( 2) 00:19:34.261 7.003 - 7.056: 99.4953% ( 1) 00:19:34.261 7.056 - 7.110: 99.5058% ( 2) 00:19:34.261 7.110 - 7.163: 99.5110% ( 1) 00:19:34.261 7.163 - 7.217: 99.5163% ( 1) 00:19:34.261 7.217 - 7.270: 99.5216% ( 1) 00:19:34.261 7.270 - 7.324: 99.5321% ( 2) 00:19:34.261 7.324 - 7.377: 99.5373% ( 1) 00:19:34.261 7.377 - 7.431: 99.5426% ( 1) 00:19:34.261 7.431 - 7.484: 99.5478% ( 1) 00:19:34.261 7.645 - 7.698: 99.5584% ( 2) 00:19:34.261 7.698 - 7.751: 99.5636% ( 1) 00:19:34.261 7.751 - 7.805: 99.5689% ( 1) 00:19:34.261 7.805 - 7.858: 99.5741% ( 1) 00:19:34.261 7.912 - 7.965: 99.5846% ( 2) 00:19:34.261 8.019 - 8.072: 99.5899% ( 1) 00:19:34.261 8.072 - 8.126: 99.5952% ( 1) 00:19:34.261 8.126 - 8.179: 99.6004% ( 1) 00:19:34.261 8.179 - 8.233: 99.6057% ( 1) 00:19:34.261 8.553 - 8.607: 99.6109% ( 1) 00:19:34.261 9.034 - 9.088: 99.6215% ( 2) 00:19:34.261 13.258 - 13.311: 99.6267% ( 1) 00:19:34.261 13.365 - 13.418: 99.6320% ( 1) 00:19:34.261 14.113 - 14.220: 99.6372% ( 1) 00:19:34.261 14.754 - 14.861: 99.6425% ( 1) 00:19:34.261 3749.763 - 3777.133: 99.6477% ( 1) 00:19:34.261 3996.098 - 4023.468: 99.9790% ( 63) 00:19:34.261 4023.468 - 4050.839: 99.9842% ( 1) 00:19:34.261 4078.209 - 4105.580: 99.9895% ( 1) 00:19:34.261 5994.146 - 6021.517: 
99.9947% ( 1) 00:19:34.261 7006.856 - 7061.597: 100.0000% ( 1) 00:19:34.261 00:19:34.261 10:59:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:19:34.261 10:59:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:19:34.261 10:59:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:19:34.261 10:59:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:19:34.261 10:59:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:19:34.261 [ 00:19:34.261 { 00:19:34.261 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:34.261 "subtype": "Discovery", 00:19:34.261 "listen_addresses": [], 00:19:34.261 "allow_any_host": true, 00:19:34.261 "hosts": [] 00:19:34.261 }, 00:19:34.261 { 00:19:34.261 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:19:34.261 "subtype": "NVMe", 00:19:34.261 "listen_addresses": [ 00:19:34.261 { 00:19:34.261 "trtype": "VFIOUSER", 00:19:34.261 "adrfam": "IPv4", 00:19:34.261 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:19:34.261 "trsvcid": "0" 00:19:34.261 } 00:19:34.261 ], 00:19:34.261 "allow_any_host": true, 00:19:34.261 "hosts": [], 00:19:34.261 "serial_number": "SPDK1", 00:19:34.261 "model_number": "SPDK bdev Controller", 00:19:34.261 "max_namespaces": 32, 00:19:34.261 "min_cntlid": 1, 00:19:34.261 "max_cntlid": 65519, 00:19:34.261 "namespaces": [ 00:19:34.261 { 00:19:34.261 "nsid": 1, 00:19:34.261 "bdev_name": "Malloc1", 00:19:34.261 "name": "Malloc1", 00:19:34.261 "nguid": "3CA027B983B44E22A936D68E85C3632E", 00:19:34.261 "uuid": "3ca027b9-83b4-4e22-a936-d68e85c3632e" 00:19:34.261 }, 00:19:34.261 { 00:19:34.261 "nsid": 2, 00:19:34.261 "bdev_name": "Malloc3", 00:19:34.261 "name": "Malloc3", 00:19:34.261 "nguid": "DA528331C51C42609C9082B3A55055C8", 00:19:34.261 "uuid": "da528331-c51c-4260-9c90-82b3a55055c8" 00:19:34.261 } 00:19:34.261 ] 00:19:34.261 }, 00:19:34.261 { 00:19:34.261 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:19:34.261 "subtype": "NVMe", 00:19:34.261 "listen_addresses": [ 00:19:34.261 { 00:19:34.261 "trtype": "VFIOUSER", 00:19:34.261 "adrfam": "IPv4", 00:19:34.261 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:19:34.261 "trsvcid": "0" 00:19:34.261 } 00:19:34.261 ], 00:19:34.261 "allow_any_host": true, 00:19:34.261 "hosts": [], 00:19:34.261 "serial_number": "SPDK2", 00:19:34.261 "model_number": "SPDK bdev Controller", 00:19:34.261 "max_namespaces": 32, 00:19:34.261 "min_cntlid": 1, 00:19:34.261 "max_cntlid": 65519, 00:19:34.261 "namespaces": [ 00:19:34.261 { 00:19:34.261 "nsid": 1, 00:19:34.261 "bdev_name": "Malloc2", 00:19:34.261 "name": "Malloc2", 00:19:34.261 "nguid": "B5BBF2F91028402690D97DE99DF6611B", 00:19:34.261 "uuid": "b5bbf2f9-1028-4026-90d9-7de99df6611b" 00:19:34.261 } 00:19:34.261 ] 00:19:34.261 } 00:19:34.261 ] 00:19:34.261 10:59:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:19:34.261 10:59:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g 
-t /tmp/aer_touch_file 00:19:34.261 10:59:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1841066 00:19:34.261 10:59:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:19:34.261 10:59:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:19:34.261 10:59:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:34.261 10:59:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:34.261 10:59:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:19:34.261 10:59:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:19:34.261 10:59:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:19:34.521 Malloc4 00:19:34.521 10:59:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:19:34.521 [2024-10-09 10:59:54.497833] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:34.779 [2024-10-09 10:59:54.598248] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:34.779 10:59:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:19:34.779 Asynchronous Event Request test 00:19:34.779 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:19:34.779 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:19:34.779 Registering asynchronous event callbacks... 00:19:34.779 Starting namespace attribute notice tests for all controllers... 00:19:34.779 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:19:34.779 aer_cb - Changed Namespace 00:19:34.779 Cleaning up... 
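The namespace notice exercised above comes from hot-plugging a namespace while the aer example is attached. Condensed from the trace, with rpc.py standing in for the full scripts/rpc.py path shown in this run, the target-side sequence is just two RPCs:

    # create a fresh 64 MiB, 512-byte-block malloc bdev and hot-add it to the live
    # subsystem as nsid 2; the target then posts a Namespace Attribute Changed AEN
    rpc.py bdev_malloc_create 64 512 --name Malloc4
    rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2
    # host side: aer_cb fires for log page 0x04 (Changed Namespace List),
    # matching the "aer_cb - Changed Namespace" line above

The nvmf_get_subsystems dump that follows confirms the result: Malloc4 now appears as nsid 2 under nqn.2019-07.io.spdk:cnode2.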
00:19:35.039 [ 00:19:35.039 { 00:19:35.039 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:35.039 "subtype": "Discovery", 00:19:35.039 "listen_addresses": [], 00:19:35.039 "allow_any_host": true, 00:19:35.039 "hosts": [] 00:19:35.039 }, 00:19:35.039 { 00:19:35.039 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:19:35.039 "subtype": "NVMe", 00:19:35.039 "listen_addresses": [ 00:19:35.039 { 00:19:35.039 "trtype": "VFIOUSER", 00:19:35.039 "adrfam": "IPv4", 00:19:35.039 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:19:35.039 "trsvcid": "0" 00:19:35.039 } 00:19:35.039 ], 00:19:35.039 "allow_any_host": true, 00:19:35.039 "hosts": [], 00:19:35.039 "serial_number": "SPDK1", 00:19:35.039 "model_number": "SPDK bdev Controller", 00:19:35.039 "max_namespaces": 32, 00:19:35.039 "min_cntlid": 1, 00:19:35.039 "max_cntlid": 65519, 00:19:35.039 "namespaces": [ 00:19:35.039 { 00:19:35.039 "nsid": 1, 00:19:35.039 "bdev_name": "Malloc1", 00:19:35.039 "name": "Malloc1", 00:19:35.039 "nguid": "3CA027B983B44E22A936D68E85C3632E", 00:19:35.039 "uuid": "3ca027b9-83b4-4e22-a936-d68e85c3632e" 00:19:35.039 }, 00:19:35.039 { 00:19:35.039 "nsid": 2, 00:19:35.039 "bdev_name": "Malloc3", 00:19:35.039 "name": "Malloc3", 00:19:35.039 "nguid": "DA528331C51C42609C9082B3A55055C8", 00:19:35.039 "uuid": "da528331-c51c-4260-9c90-82b3a55055c8" 00:19:35.039 } 00:19:35.039 ] 00:19:35.039 }, 00:19:35.039 { 00:19:35.039 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:19:35.039 "subtype": "NVMe", 00:19:35.039 "listen_addresses": [ 00:19:35.039 { 00:19:35.039 "trtype": "VFIOUSER", 00:19:35.039 "adrfam": "IPv4", 00:19:35.039 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:19:35.039 "trsvcid": "0" 00:19:35.039 } 00:19:35.039 ], 00:19:35.039 "allow_any_host": true, 00:19:35.039 "hosts": [], 00:19:35.039 "serial_number": "SPDK2", 00:19:35.039 "model_number": "SPDK bdev Controller", 00:19:35.039 "max_namespaces": 32, 00:19:35.039 "min_cntlid": 1, 00:19:35.039 "max_cntlid": 65519, 00:19:35.039 "namespaces": [ 00:19:35.039 { 00:19:35.039 "nsid": 1, 00:19:35.039 "bdev_name": "Malloc2", 00:19:35.039 "name": "Malloc2", 00:19:35.039 "nguid": "B5BBF2F91028402690D97DE99DF6611B", 00:19:35.039 "uuid": "b5bbf2f9-1028-4026-90d9-7de99df6611b" 00:19:35.039 }, 00:19:35.039 { 00:19:35.039 "nsid": 2, 00:19:35.039 "bdev_name": "Malloc4", 00:19:35.039 "name": "Malloc4", 00:19:35.039 "nguid": "C41BF7725882481CA8E8FB5C5E39E434", 00:19:35.039 "uuid": "c41bf772-5882-481c-a8e8-fb5c5e39e434" 00:19:35.039 } 00:19:35.039 ] 00:19:35.039 } 00:19:35.039 ] 00:19:35.039 10:59:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1841066 00:19:35.039 10:59:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:19:35.039 10:59:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1831718 00:19:35.039 10:59:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 1831718 ']' 00:19:35.039 10:59:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 1831718 00:19:35.039 10:59:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:19:35.039 10:59:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:35.039 10:59:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1831718 00:19:35.039 10:59:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:35.039 10:59:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:35.039 10:59:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1831718' 00:19:35.039 killing process with pid 1831718 00:19:35.039 10:59:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 1831718 00:19:35.039 10:59:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 1831718 00:19:35.039 10:59:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:19:35.039 10:59:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:19:35.039 10:59:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:19:35.039 10:59:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:19:35.039 10:59:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:19:35.039 10:59:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1841151 00:19:35.039 10:59:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1841151' 00:19:35.039 Process pid: 1841151 00:19:35.039 10:59:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:19:35.039 10:59:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:19:35.039 10:59:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1841151 00:19:35.039 10:59:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 1841151 ']' 00:19:35.039 10:59:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:35.039 10:59:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:35.039 10:59:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:35.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:35.039 10:59:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:35.039 10:59:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:19:35.298 [2024-10-09 10:59:55.086117] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:19:35.298 [2024-10-09 10:59:55.087051] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 
00:19:35.298 [2024-10-09 10:59:55.087092] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:35.298 [2024-10-09 10:59:55.217906] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:19:35.298 [2024-10-09 10:59:55.249959] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:35.298 [2024-10-09 10:59:55.267880] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:35.298 [2024-10-09 10:59:55.267911] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:35.298 [2024-10-09 10:59:55.267919] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:35.298 [2024-10-09 10:59:55.267929] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:35.298 [2024-10-09 10:59:55.267935] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:35.298 [2024-10-09 10:59:55.269419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:35.298 [2024-10-09 10:59:55.269548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:35.298 [2024-10-09 10:59:55.269855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:35.298 [2024-10-09 10:59:55.269856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:35.558 [2024-10-09 10:59:55.318239] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:19:35.558 [2024-10-09 10:59:55.318297] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:19:35.558 [2024-10-09 10:59:55.319166] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:19:35.558 [2024-10-09 10:59:55.319444] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:19:35.558 [2024-10-09 10:59:55.319644] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
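The trace that follows rebuilds both vfio-user controllers on top of this interrupt-mode target. As a standalone sketch with paths shortened (rpc.py is scripts/rpc.py in the SPDK tree, and nvmf_tgt is assumed to be up already with --interrupt-mode), the per-controller RPC sequence is:

    # VFIOUSER transport, created once; -M -I are the extra transport_args this test passes
    rpc.py nvmf_create_transport -t VFIOUSER -M -I
    # the listener "address" is a directory holding the vfio-user socket, not an IP:port
    mkdir -p /var/run/vfio-user/domain/vfio-user1/1
    # 64 MiB malloc bdev with 512-byte blocks to back the namespace
    rpc.py bdev_malloc_create 64 512 -b Malloc1
    # subsystem with serial SPDK1; -a allows any host NQN to connect
    rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
    rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0

A host then attaches with -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1', the same pattern the perf, arbitration, and aer runs earlier in this log use.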
00:19:36.125 10:59:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:36.125 10:59:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:19:36.125 10:59:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:19:37.065 10:59:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:19:37.065 10:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:19:37.326 10:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:19:37.326 10:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:37.326 10:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:19:37.326 10:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:19:37.326 Malloc1 00:19:37.326 10:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:19:37.587 10:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:19:37.848 10:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:19:37.848 10:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:37.848 10:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:19:38.108 10:59:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:19:38.108 Malloc2 00:19:38.109 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:19:38.369 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:19:38.629 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:19:38.629 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:19:38.629 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1841151 00:19:38.629 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@950 -- # '[' -z 1841151 ']' 00:19:38.629 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 1841151 00:19:38.629 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:19:38.629 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:38.629 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1841151 00:19:38.890 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:38.890 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:38.890 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1841151' 00:19:38.890 killing process with pid 1841151 00:19:38.890 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 1841151 00:19:38.890 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 1841151 00:19:38.890 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:19:38.890 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:19:38.890 00:19:38.890 real 0m52.582s 00:19:38.890 user 3m21.378s 00:19:38.890 sys 0m2.746s 00:19:38.890 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:38.890 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:19:38.890 ************************************ 00:19:38.890 END TEST nvmf_vfio_user 00:19:38.890 ************************************ 00:19:38.890 10:59:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:19:38.890 10:59:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:38.890 10:59:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:38.890 10:59:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:38.890 ************************************ 00:19:38.890 START TEST nvmf_vfio_user_nvme_compliance 00:19:38.890 ************************************ 00:19:38.890 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:19:39.152 * Looking for test storage... 
00:19:39.152 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:19:39.152 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:39.152 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # lcov --version 00:19:39.152 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:39.152 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:39.152 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:39.152 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:39.152 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:39.152 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:19:39.152 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:19:39.152 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:19:39.152 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:19:39.152 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:19:39.152 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:19:39.152 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:19:39.152 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:39.152 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:19:39.152 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:19:39.152 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:39.152 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:39.152 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:19:39.152 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:19:39.152 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:39.152 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:19:39.152 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:19:39.152 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:19:39.152 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:19:39.152 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:39.152 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:19:39.153 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:19:39.153 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:39.153 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:39.153 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:19:39.153 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:39.153 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:39.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:39.153 --rc genhtml_branch_coverage=1 00:19:39.153 --rc genhtml_function_coverage=1 00:19:39.153 --rc genhtml_legend=1 00:19:39.153 --rc geninfo_all_blocks=1 00:19:39.153 --rc geninfo_unexecuted_blocks=1 00:19:39.153 00:19:39.153 ' 00:19:39.153 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:39.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:39.153 --rc genhtml_branch_coverage=1 00:19:39.153 --rc genhtml_function_coverage=1 00:19:39.153 --rc genhtml_legend=1 00:19:39.153 --rc geninfo_all_blocks=1 00:19:39.153 --rc geninfo_unexecuted_blocks=1 00:19:39.153 00:19:39.153 ' 00:19:39.153 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:39.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:39.153 --rc genhtml_branch_coverage=1 00:19:39.153 --rc genhtml_function_coverage=1 00:19:39.153 --rc genhtml_legend=1 00:19:39.153 --rc geninfo_all_blocks=1 00:19:39.153 --rc geninfo_unexecuted_blocks=1 00:19:39.153 00:19:39.153 ' 00:19:39.153 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:39.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:39.153 --rc genhtml_branch_coverage=1 00:19:39.153 --rc genhtml_function_coverage=1 00:19:39.153 --rc genhtml_legend=1 00:19:39.153 --rc geninfo_all_blocks=1 00:19:39.153 --rc 
geninfo_unexecuted_blocks=1 00:19:39.153 00:19:39.153 ' 00:19:39.153 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:39.153 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:19:39.153 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:39.153 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:39.153 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:39.153 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:39.153 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:39.153 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:39.153 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:39.153 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:39.153 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:39.153 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:39.153 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:39.153 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:39.153 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:39.153 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:39.153 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:39.153 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:39.153 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:39.153 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:19:39.153 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:39.153 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:39.153 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:39.153 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.153 [paths/export.sh@3-6 re-echo the same PATH with the /opt tool prefixes rotated, then export it; identical expansions omitted] 00:19:39.153 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:19:39.153 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:39.153 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:39.153 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:39.153 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:39.153 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- #
NVMF_APP+=("${NO_HUGE[@]}") 00:19:39.153 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:39.153 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:39.153 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:39.153 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:39.153 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:39.153 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:39.153 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:39.153 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:19:39.153 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:19:39.153 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:19:39.153 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=1842159 00:19:39.153 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1842159' 00:19:39.153 Process pid: 1842159 00:19:39.153 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:19:39.153 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:19:39.153 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1842159 00:19:39.153 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # '[' -z 1842159 ']' 00:19:39.153 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:39.153 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:39.153 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:39.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:39.153 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:39.153 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:39.414 [2024-10-09 10:59:59.153361] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 
00:19:39.414 [2024-10-09 10:59:59.153438] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:39.414 [2024-10-09 10:59:59.288607] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:19:39.414 [2024-10-09 10:59:59.320593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:39.414 [2024-10-09 10:59:59.343327] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:39.414 [2024-10-09 10:59:59.343367] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:39.414 [2024-10-09 10:59:59.343376] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:39.414 [2024-10-09 10:59:59.343382] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:39.414 [2024-10-09 10:59:59.343389] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:39.414 [2024-10-09 10:59:59.344913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:39.414 [2024-10-09 10:59:59.345035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:39.414 [2024-10-09 10:59:59.345037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:39.984 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:39.984 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # return 0 00:19:39.984 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:19:41.368 11:00:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:19:41.368 11:00:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:19:41.368 11:00:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:19:41.368 11:00:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.368 11:00:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:41.368 11:00:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.368 11:00:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:19:41.368 11:00:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:19:41.368 11:00:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.368 11:00:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:41.368 malloc0 00:19:41.368 11:00:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.368 11:00:00 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:19:41.368 11:00:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.368 11:00:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:41.368 11:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.368 11:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:19:41.368 11:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.368 11:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:41.368 11:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.368 11:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:19:41.368 11:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.368 11:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:41.368 11:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.368 11:00:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:19:41.368 00:19:41.368 00:19:41.368 CUnit - A unit testing framework for C - Version 2.1-3 00:19:41.368 http://cunit.sourceforge.net/ 00:19:41.368 00:19:41.368 00:19:41.368 Suite: nvme_compliance 00:19:41.368 Test: admin_identify_ctrlr_verify_dptr ...[2024-10-09 11:00:01.297773] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:41.368 [2024-10-09 11:00:01.299118] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:19:41.368 [2024-10-09 11:00:01.299130] vfio_user.c:5507:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:19:41.368 [2024-10-09 11:00:01.299134] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:19:41.368 [2024-10-09 11:00:01.300782] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:41.368 passed 00:19:41.628 Test: admin_identify_ctrlr_verify_fused ...[2024-10-09 11:00:01.395134] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:41.628 [2024-10-09 11:00:01.398148] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:41.628 passed 00:19:41.628 Test: admin_identify_ns ...[2024-10-09 11:00:01.495686] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:41.628 [2024-10-09 11:00:01.555475] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:19:41.628 [2024-10-09 11:00:01.563479] 
ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:19:41.628 [2024-10-09 11:00:01.584582] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:41.628 passed 00:19:41.889 Test: admin_get_features_mandatory_features ...[2024-10-09 11:00:01.678074] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:41.889 [2024-10-09 11:00:01.681081] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:41.889 passed 00:19:41.889 Test: admin_get_features_optional_features ...[2024-10-09 11:00:01.775385] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:41.889 [2024-10-09 11:00:01.778401] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:41.889 passed 00:19:41.889 Test: admin_set_features_number_of_queues ...[2024-10-09 11:00:01.871346] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:42.149 [2024-10-09 11:00:01.977572] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:42.149 passed 00:19:42.149 Test: admin_get_log_page_mandatory_logs ...[2024-10-09 11:00:02.069085] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:42.149 [2024-10-09 11:00:02.072101] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:42.149 passed 00:19:42.410 Test: admin_get_log_page_with_lpo ...[2024-10-09 11:00:02.163044] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:42.410 [2024-10-09 11:00:02.234481] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:19:42.410 [2024-10-09 11:00:02.247519] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:42.410 passed 00:19:42.410 Test: fabric_property_get ...[2024-10-09 11:00:02.339010] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:42.410 [2024-10-09 11:00:02.340263] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:19:42.410 [2024-10-09 11:00:02.342019] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:42.410 passed 00:19:42.671 Test: admin_delete_io_sq_use_admin_qid ...[2024-10-09 11:00:02.435338] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:42.671 [2024-10-09 11:00:02.436584] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:19:42.671 [2024-10-09 11:00:02.438347] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:42.671 passed 00:19:42.671 Test: admin_delete_io_sq_delete_sq_twice ...[2024-10-09 11:00:02.531270] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:42.671 [2024-10-09 11:00:02.613481] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:19:42.671 [2024-10-09 11:00:02.629469] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:19:42.671 [2024-10-09 11:00:02.635544] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:42.933 passed 00:19:42.933 Test: admin_delete_io_cq_use_admin_qid ...[2024-10-09 11:00:02.731426] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 
00:19:42.933 [2024-10-09 11:00:02.732663] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:19:42.933 [2024-10-09 11:00:02.734439] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:42.933 passed 00:19:42.933 Test: admin_delete_io_cq_delete_cq_first ...[2024-10-09 11:00:02.824690] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:42.933 [2024-10-09 11:00:02.904473] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:19:42.933 [2024-10-09 11:00:02.928474] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:19:42.933 [2024-10-09 11:00:02.934559] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:43.193 passed 00:19:43.193 Test: admin_create_io_cq_verify_iv_pc ...[2024-10-09 11:00:03.026026] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:43.193 [2024-10-09 11:00:03.027265] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:19:43.193 [2024-10-09 11:00:03.027285] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:19:43.193 [2024-10-09 11:00:03.029035] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:43.193 passed 00:19:43.193 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-10-09 11:00:03.122002] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:43.453 [2024-10-09 11:00:03.213475] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:19:43.453 [2024-10-09 11:00:03.221470] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:19:43.453 [2024-10-09 11:00:03.229474] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:19:43.453 [2024-10-09 11:00:03.237472] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:19:43.453 [2024-10-09 11:00:03.267546] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:43.453 passed 00:19:43.453 Test: admin_create_io_sq_verify_pc ...[2024-10-09 11:00:03.362447] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:43.453 [2024-10-09 11:00:03.389480] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:19:43.453 [2024-10-09 11:00:03.407543] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:43.453 passed 00:19:43.713 Test: admin_create_io_qp_max_qps ...[2024-10-09 11:00:03.497861] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:44.688 [2024-10-09 11:00:04.597475] nvme_ctrlr.c:5504:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:19:45.260 [2024-10-09 11:00:04.986974] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:45.260 passed 00:19:45.260 Test: admin_create_io_sq_shared_cq ...[2024-10-09 11:00:05.077681] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:45.260 [2024-10-09 11:00:05.209472] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:19:45.260 [2024-10-09 11:00:05.246529] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:19:45.521 passed 00:19:45.521 00:19:45.521 Run Summary: Type Total Ran Passed Failed Inactive 00:19:45.521 suites 1 1 n/a 0 0 00:19:45.521 tests 18 18 18 0 0 00:19:45.521 asserts 360 360 360 0 n/a 00:19:45.521 00:19:45.521 Elapsed time = 1.656 seconds 00:19:45.521 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 1842159 00:19:45.521 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # '[' -z 1842159 ']' 00:19:45.521 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # kill -0 1842159 00:19:45.521 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # uname 00:19:45.521 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:45.521 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1842159 00:19:45.521 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:45.521 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:45.521 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1842159' 00:19:45.521 killing process with pid 1842159 00:19:45.521 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@969 -- # kill 1842159 00:19:45.521 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@974 -- # wait 1842159 00:19:45.521 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:19:45.521 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:19:45.521 00:19:45.521 real 0m6.635s 00:19:45.521 user 0m18.601s 00:19:45.521 sys 0m0.546s 00:19:45.521 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:45.521 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:45.521 ************************************ 00:19:45.521 END TEST nvmf_vfio_user_nvme_compliance 00:19:45.521 ************************************ 00:19:45.782 11:00:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:19:45.782 11:00:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:45.782 11:00:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:45.782 11:00:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:45.782 ************************************ 00:19:45.782 START TEST nvmf_vfio_user_fuzz 00:19:45.782 ************************************ 00:19:45.782 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:19:45.782 * Looking for test storage... 
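The storage probe that follows re-runs the lcov version gate already traced at the top of the compliance test: scripts/common.sh splits each version string on '.', '-' and ':' and compares the fields numerically, so lt 1.15 2 succeeds and the coverage flags get exported. A condensed bash sketch of that comparison, assuming only the behavior visible in the trace (digit validation and the other comparison operators are omitted):

    # lt A B: succeed when version A sorts strictly before version B (condensed from the traced cmp_versions)
    lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"      # split fields on . - :
        IFS=.-: read -ra ver2 <<< "$2"
        local v d1 d2
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            d1=${ver1[v]:-0} d2=${ver2[v]:-0}   # missing fields compare as 0
            (( d1 > d2 )) && return 1
            (( d1 < d2 )) && return 0
        done
        return 1                            # equal versions are not less-than
    }
    lt 1.15 2 && echo older                 # prints "older", matching the traced return 0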
00:19:45.782 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:45.782 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:45.782 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # lcov --version 00:19:45.782 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:45.782 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:45.782 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:45.782 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:45.782 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:45.782 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:19:45.782 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:19:45.782 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:19:45.782 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:19:45.782 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:19:45.782 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:19:45.782 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:19:45.782 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:45.782 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:19:45.782 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:19:45.782 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:45.782 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:45.782 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:19:45.782 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:19:45.782 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:45.782 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:19:45.782 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:19:45.782 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:19:45.782 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:19:45.782 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:45.782 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:19:45.782 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:19:45.782 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:45.782 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:45.782 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:19:45.782 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:45.782 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:45.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:45.782 --rc genhtml_branch_coverage=1 00:19:45.782 --rc genhtml_function_coverage=1 00:19:45.782 --rc genhtml_legend=1 00:19:45.782 --rc geninfo_all_blocks=1 00:19:45.782 --rc geninfo_unexecuted_blocks=1 00:19:45.782 00:19:45.782 ' 00:19:45.782 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:45.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:45.782 --rc genhtml_branch_coverage=1 00:19:45.782 --rc genhtml_function_coverage=1 00:19:45.782 --rc genhtml_legend=1 00:19:45.782 --rc geninfo_all_blocks=1 00:19:45.782 --rc geninfo_unexecuted_blocks=1 00:19:45.782 00:19:45.782 ' 00:19:45.783 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:45.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:45.783 --rc genhtml_branch_coverage=1 00:19:45.783 --rc genhtml_function_coverage=1 00:19:45.783 --rc genhtml_legend=1 00:19:45.783 --rc geninfo_all_blocks=1 00:19:45.783 --rc geninfo_unexecuted_blocks=1 00:19:45.783 00:19:45.783 ' 00:19:45.783 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:45.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:45.783 --rc genhtml_branch_coverage=1 00:19:45.783 --rc genhtml_function_coverage=1 00:19:45.783 --rc genhtml_legend=1 00:19:45.783 --rc geninfo_all_blocks=1 00:19:45.783 --rc geninfo_unexecuted_blocks=1 00:19:45.783 00:19:45.783 ' 00:19:45.783 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:45.783 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:19:45.783 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:45.783 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:45.783 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:45.783 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:45.783 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:45.783 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:45.783 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:45.783 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:45.783 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:45.783 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:45.783 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:45.783 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:45.783 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:45.783 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:46.044 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:46.044 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:46.044 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:46.044 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:19:46.044 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:46.044 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:46.044 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:46.044 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:46.044 [paths/export.sh@3-6 re-echo the same PATH with the /opt tool prefixes rotated, then export it; identical expansions omitted] 00:19:46.045 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:19:46.045 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:46.045 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:46.045 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:46.045 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:46.045 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:46.045 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33
-- # '[' '' -eq 1 ']' 00:19:46.045 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:46.045 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:46.045 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:46.045 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:46.045 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:19:46.045 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:19:46.045 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:19:46.045 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:19:46.045 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:19:46.045 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:19:46.045 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:19:46.045 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1843566 00:19:46.045 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1843566' 00:19:46.045 Process pid: 1843566 00:19:46.045 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:19:46.045 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:19:46.045 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1843566 00:19:46.045 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # '[' -z 1843566 ']' 00:19:46.045 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:46.045 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:46.045 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:46.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
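Both nvmf_tgt launches in this section trip the same non-fatal bug at test/nvmf/common.sh line 33: the trace shows '[' '' -eq 1 ']', a numeric test against an empty expansion, so bash reports "[: : integer expression expected" and the branch simply falls through. A minimal reproduction plus one conventional guard; SOME_FLAG is a hypothetical stand-in, since the log does not show which variable is empty:

    SOME_FLAG=""                          # hypothetical stand-in for the empty variable at line 33
    if [ "$SOME_FLAG" -eq 1 ]; then       # bash: [: : integer expression expected (test command fails, branch skipped)
        echo "flag enabled"
    fi
    if [ "${SOME_FLAG:-0}" -eq 1 ]; then  # defaulting the expansion keeps the test numeric and quiet
        echo "flag enabled"
    fi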
00:19:46.045 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:46.045 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:46.986 11:00:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:46.986 11:00:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # return 0 00:19:46.986 11:00:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:19:47.927 11:00:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:19:47.927 11:00:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.927 11:00:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:47.927 11:00:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.927 11:00:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:19:47.927 11:00:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:19:47.927 11:00:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.927 11:00:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:47.927 malloc0 00:19:47.927 11:00:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.927 11:00:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:19:47.927 11:00:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.927 11:00:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:47.927 11:00:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.927 11:00:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:19:47.928 11:00:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.928 11:00:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:47.928 11:00:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.928 11:00:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:19:47.928 11:00:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.928 11:00:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:47.928 11:00:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.928 11:00:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
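The rpc_cmd calls traced above are the entire vfio-user bring-up for the fuzz target: register the VFIOUSER transport, create a 64 MiB malloc bdev with 512-byte blocks, create subsystem nqn.2021-09.io.spdk:cnode0, attach the namespace, and listen on the /var/run/vfio-user socket directory. Against an already-running nvmf_tgt, a roughly equivalent sequence with SPDK's scripts/rpc.py would look like this sketch (the RPC names and arguments are the ones shown in the trace; the RPC shell variable is just shorthand):

    # vfio-user target bring-up via rpc.py (sketch of the traced rpc_cmd sequence)
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t VFIOUSER        # register the vfio-user transport
    mkdir -p /var/run/vfio-user                   # socket directory the listener binds under
    $RPC bdev_malloc_create 64 512 -b malloc0     # 64 MiB RAM bdev, 512 B blocks
    $RPC nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
    $RPC nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    $RPC nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0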
00:19:47.928 11:00:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:20:20.071 Fuzzing completed. Shutting down the fuzz application 00:20:20.071 00:20:20.071 Dumping successful admin opcodes: 00:20:20.071 8, 9, 10, 24, 00:20:20.071 Dumping successful io opcodes: 00:20:20.071 0, 00:20:20.071 NS: 0x20000081ef00 I/O qp, Total commands completed: 1061712, total successful commands: 4193, random_seed: 2462357056 00:20:20.071 NS: 0x20000081ef00 admin qp, Total commands completed: 133486, total successful commands: 1082, random_seed: 1532399296 00:20:20.071 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:20:20.071 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.071 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:20.071 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.071 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1843566 00:20:20.071 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # '[' -z 1843566 ']' 00:20:20.071 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # kill -0 1843566 00:20:20.071 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # uname 00:20:20.071 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:20.071 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1843566 00:20:20.071 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:20.071 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:20.071 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1843566' 00:20:20.071 killing process with pid 1843566 00:20:20.071 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@969 -- # kill 1843566 00:20:20.071 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@974 -- # wait 1843566 00:20:20.071 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:20:20.071 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:20:20.071 00:20:20.071 real 0m33.750s 00:20:20.071 user 0m37.935s 00:20:20.071 sys 0m24.648s 00:20:20.071 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:20.071 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:20.071 
************************************ 00:20:20.071 END TEST nvmf_vfio_user_fuzz 00:20:20.071 ************************************ 00:20:20.072 11:00:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:20:20.072 11:00:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:20.072 11:00:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:20.072 11:00:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:20.072 ************************************ 00:20:20.072 START TEST nvmf_auth_target 00:20:20.072 ************************************ 00:20:20.072 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:20:20.072 * Looking for test storage... 00:20:20.072 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:20.072 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:20.072 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lcov --version 00:20:20.072 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:20.072 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:20.072 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:20.072 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:20.072 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:20.072 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:20:20.072 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:20:20.072 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:20:20.072 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:20:20.072 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:20:20.072 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:20:20.072 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:20:20.072 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:20.072 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:20:20.072 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:20:20.072 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:20.072 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:20.072 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:20:20.072 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:20:20.072 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:20.072 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:20:20.072 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:20:20.072 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:20:20.072 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:20:20.072 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:20.072 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:20:20.072 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:20:20.072 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:20.072 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:20.072 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:20:20.072 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:20.072 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:20.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:20.072 --rc genhtml_branch_coverage=1 00:20:20.072 --rc genhtml_function_coverage=1 00:20:20.072 --rc genhtml_legend=1 00:20:20.072 --rc geninfo_all_blocks=1 00:20:20.072 --rc geninfo_unexecuted_blocks=1 00:20:20.072 00:20:20.072 ' 00:20:20.072 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:20.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:20.072 --rc genhtml_branch_coverage=1 00:20:20.072 --rc genhtml_function_coverage=1 00:20:20.072 --rc genhtml_legend=1 00:20:20.072 --rc geninfo_all_blocks=1 00:20:20.072 --rc geninfo_unexecuted_blocks=1 00:20:20.072 00:20:20.072 ' 00:20:20.072 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:20.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:20.072 --rc genhtml_branch_coverage=1 00:20:20.072 --rc genhtml_function_coverage=1 00:20:20.072 --rc genhtml_legend=1 00:20:20.072 --rc geninfo_all_blocks=1 00:20:20.072 --rc geninfo_unexecuted_blocks=1 00:20:20.072 00:20:20.072 ' 00:20:20.072 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:20.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:20.072 --rc genhtml_branch_coverage=1 00:20:20.072 --rc genhtml_function_coverage=1 00:20:20.072 --rc genhtml_legend=1 00:20:20.072 --rc geninfo_all_blocks=1 00:20:20.072 --rc geninfo_unexecuted_blocks=1 00:20:20.072 00:20:20.072 ' 00:20:20.072 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:20.072 11:00:39 
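The lcov probe traced above is a plain dotted-version comparison: cmp_versions splits both strings into fields and compares them numerically, so "lt 1.15 2" holds and the pre-2.0 coverage flags get selected. A standalone sketch of the same idea (not the repo's exact helper, which also splits on '-' and ':'):

    version_lt() {
        local -a a b
        IFS=. read -ra a <<< "$1"
        IFS=. read -ra b <<< "$2"
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for ((i = 0; i < n; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # earliest differing field decides
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "lcov predates 2.0"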
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:20:20.072 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:20.072 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:20.072 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:20.072 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:20.072 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:20.072 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:20.072 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:20.072 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:20.072 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:20.072 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:20.072 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:20.072 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:20.072 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:20.072 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:20.072 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:20.072 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:20.072 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:20.072 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:20:20.072 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:20.072 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:20.072 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:20.072 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:20.072 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:20.072 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:20.072 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:20:20.072 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:20.072 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:20:20.072 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:20.072 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:20.072 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:20.072 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:20.072 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:20.072 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:20.072 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:20.072 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:20.072 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:20.072 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:20.073 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:20:20.073 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:20:20.073 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:20:20.073 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:20.073 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:20:20.073 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:20:20.073 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:20:20.073 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:20:20.073 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:20:20.073 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:20.073 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:20:20.073 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:20:20.073 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:20:20.073 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:20.073 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:20.073 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:20.073 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:20:20.073 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:20:20.073 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:20:20.073 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.282 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:28.282 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:20:28.282 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:28.282 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:28.282 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:28.282 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:28.282 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:28.282 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:20:28.282 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:28.282 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:20:28.282 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:20:28.282 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:20:28.282 
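Before any connection is attempted, nvmf/common.sh derives the host identity from nvme-cli, as captured above: NVME_HOSTNQN comes from nvme gen-hostnqn, and the host ID is simply its UUID suffix. A sketch of that derivation (requires nvme-cli):

    NVME_HOSTNQN=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:00539ede-...
    NVME_HOSTID=${NVME_HOSTNQN##*:}    # strip through the last ':' to keep the bare UUID
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")

The same pair is reused as the host NQN in every nvmf_subsystem_add_host and nvme connect call later in the trace.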
11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:20:28.282 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:20:28.282 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:20:28.282 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:28.282 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:28.282 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:28.282 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:28.282 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:28.282 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:28.282 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:28.282 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:28.282 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:28.282 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:28.282 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:28.282 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:28.282 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:28.282 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:28.282 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:28.282 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:28.282 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:28.282 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:28.282 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:28.282 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:20:28.282 Found 0000:31:00.0 (0x8086 - 0x159b) 00:20:28.282 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:28.282 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:28.282 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:28.282 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:28.282 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:28.282 11:00:46 
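gather_supported_nvmf_pci_devs, running above, matches PCI IDs against the e810/x722/mlx allow-lists (here 0x8086:0x159b, an Intel E810 port bound to the ice driver) and then resolves each function to its kernel netdev through sysfs. The same lookup by hand, with the device addresses taken from the output above:

    for pci in 0000:31:00.0 0000:31:00.1; do
        ls "/sys/bus/pci/devices/$pci/net/"   # prints the bound netdev, e.g. cvl_0_0
    done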
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:28.282 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:20:28.282 Found 0000:31:00.1 (0x8086 - 0x159b) 00:20:28.282 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:28.282 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:28.282 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:28.282 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:28.282 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:28.282 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:28.282 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:28.282 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:28.282 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:28.282 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:28.282 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:28.282 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:28.282 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:28.282 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:28.282 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:28.282 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:20:28.282 Found net devices under 0000:31:00.0: cvl_0_0 00:20:28.282 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:28.282 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:28.282 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:28.282 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:28.282 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:28.282 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:28.282 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:28.282 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:28.282 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:20:28.282 Found net devices under 0000:31:00.1: cvl_0_1 00:20:28.282 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # 
net_devs+=("${pci_net_devs[@]}") 00:20:28.282 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:20:28.282 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # is_hw=yes 00:20:28.282 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:20:28.282 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:20:28.282 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:20:28.282 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:28.282 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:28.282 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:28.282 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:28.282 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:28.282 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:28.282 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:28.282 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:28.282 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:28.282 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:28.282 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:28.282 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:28.282 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:28.282 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:28.282 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:28.282 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:28.282 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:28.282 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:28.282 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:28.282 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:28.282 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:28.282 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:28.282 11:00:47 
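The TCP "fabric" for this phy test is two ports of the same NIC: one is moved into a private network namespace to act as the target, the other stays in the root namespace as the initiator. Condensed from the commands above (needs root; cvl_0_0/cvl_0_1 are the two ports detected earlier):

    ip netns add cvl_0_0_ns_spdk                  # target gets its own namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # port 0 moves to the target side
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator keeps port 1 in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port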
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:28.282 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:28.282 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.656 ms 00:20:28.282 00:20:28.282 --- 10.0.0.2 ping statistics --- 00:20:28.282 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:28.282 rtt min/avg/max/mdev = 0.656/0.656/0.656/0.000 ms 00:20:28.282 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:28.282 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:28.282 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.230 ms 00:20:28.282 00:20:28.282 --- 10.0.0.1 ping statistics --- 00:20:28.282 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:28.282 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:20:28.282 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:28.282 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # return 0 00:20:28.282 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:20:28.282 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:28.282 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:20:28.282 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:20:28.282 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:28.282 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:20:28.282 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:20:28.282 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:20:28.282 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:20:28.282 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:28.282 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.282 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # nvmfpid=1854258 00:20:28.282 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # waitforlisten 1854258 00:20:28.282 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:20:28.282 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1854258 ']' 00:20:28.282 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:28.282 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:28.282 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
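With both directions ping-verified, the target application is launched inside the namespace so that 10.0.0.2:4420 is served from the isolated side while the host-side tooling stays in the root namespace. The launch as captured above, with the workspace path shortened (-L nvmf_auth enables the auth debug log; -e sets the tracepoint group mask):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth &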
00:20:28.282 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:28.282 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.282 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:28.282 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:20:28.282 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:20:28.283 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:28.283 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.283 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:28.283 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=1854513 00:20:28.283 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:20:28.283 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:20:28.283 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:20:28.283 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:20:28.283 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:28.283 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:20:28.283 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=null 00:20:28.283 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:20:28.283 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:28.283 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=a6a04354f709491333868fbb9bc1e1cbed02cd5ed814ba45 00:20:28.283 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:20:28.283 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.G6N 00:20:28.283 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key a6a04354f709491333868fbb9bc1e1cbed02cd5ed814ba45 0 00:20:28.283 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 a6a04354f709491333868fbb9bc1e1cbed02cd5ed814ba45 0 00:20:28.283 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:20:28.283 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:20:28.283 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=a6a04354f709491333868fbb9bc1e1cbed02cd5ed814ba45 00:20:28.283 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=0 00:20:28.283 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 
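gen_dhchap_key, traced above, is how every secret in this test is minted: pull half the requested length in bytes from /dev/urandom, hex-encode with xxd, and let the inlined python wrap that ASCII string in a DHHC-1 container, i.e. base64(secret || CRC-32(secret)) tagged with a hash identifier (00 = none, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512). A sketch of the formatting step, assuming the little-endian CRC layout the helper appears to use:

    key=$(xxd -p -c0 -l 24 /dev/urandom)   # 48 hex characters of secret material
    python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); print("DHHC-1:00:" + base64.b64encode(k + zlib.crc32(k).to_bytes(4, "little")).decode() + ":")' "$key"

Each resulting file is chmod'd 0600 and later registered by name with keyring_file_add_key, so the RPCs below never pass the secret inline.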
00:20:28.283 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.G6N 00:20:28.283 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.G6N 00:20:28.283 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.G6N 00:20:28.283 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:20:28.283 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:20:28.283 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:28.283 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:20:28.283 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha512 00:20:28.283 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=64 00:20:28.283 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:28.283 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=6213df089665d625643f518e640ad89998154ac4022a4fc826ace287c68d70da 00:20:28.283 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:20:28.283 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.QiW 00:20:28.283 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 6213df089665d625643f518e640ad89998154ac4022a4fc826ace287c68d70da 3 00:20:28.283 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 6213df089665d625643f518e640ad89998154ac4022a4fc826ace287c68d70da 3 00:20:28.283 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:20:28.283 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:20:28.283 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=6213df089665d625643f518e640ad89998154ac4022a4fc826ace287c68d70da 00:20:28.283 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=3 00:20:28.283 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:20:28.283 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.QiW 00:20:28.283 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.QiW 00:20:28.283 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.QiW 00:20:28.283 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:20:28.283 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:20:28.283 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:28.283 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:20:28.283 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha256 
00:20:28.283 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=32 00:20:28.283 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:28.283 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=f4b38b42f31979891715ba9cd05dea62 00:20:28.283 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:20:28.283 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.crj 00:20:28.283 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key f4b38b42f31979891715ba9cd05dea62 1 00:20:28.283 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 f4b38b42f31979891715ba9cd05dea62 1 00:20:28.283 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:20:28.283 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:20:28.283 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=f4b38b42f31979891715ba9cd05dea62 00:20:28.283 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=1 00:20:28.283 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:20:28.544 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.crj 00:20:28.544 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.crj 00:20:28.544 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.crj 00:20:28.545 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:20:28.545 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:20:28.545 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:28.545 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:20:28.545 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha384 00:20:28.545 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:20:28.545 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:28.545 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=5716a6c0d4ddb249540d1b36496209685a20278389b5e474 00:20:28.545 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:20:28.545 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.xsq 00:20:28.545 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 5716a6c0d4ddb249540d1b36496209685a20278389b5e474 2 00:20:28.545 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 5716a6c0d4ddb249540d1b36496209685a20278389b5e474 2 00:20:28.545 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:20:28.545 11:00:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:20:28.545 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=5716a6c0d4ddb249540d1b36496209685a20278389b5e474 00:20:28.545 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=2 00:20:28.545 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:20:28.545 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.xsq 00:20:28.545 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.xsq 00:20:28.545 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.xsq 00:20:28.545 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:20:28.545 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:20:28.545 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:28.545 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:20:28.545 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha384 00:20:28.545 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:20:28.545 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:28.545 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=cbb4a6e954ea949f28d4beb86cb5440aa5621d8be750ed2d 00:20:28.545 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:20:28.545 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.JV9 00:20:28.545 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key cbb4a6e954ea949f28d4beb86cb5440aa5621d8be750ed2d 2 00:20:28.545 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 cbb4a6e954ea949f28d4beb86cb5440aa5621d8be750ed2d 2 00:20:28.545 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:20:28.545 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:20:28.545 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=cbb4a6e954ea949f28d4beb86cb5440aa5621d8be750ed2d 00:20:28.545 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=2 00:20:28.545 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:20:28.545 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.JV9 00:20:28.545 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.JV9 00:20:28.545 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.JV9 00:20:28.545 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:20:28.545 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 
00:20:28.545 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:28.545 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:20:28.545 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha256 00:20:28.545 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=32 00:20:28.545 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:28.545 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=9e365b2d117e9287e188f05627558972 00:20:28.545 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:20:28.545 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.38e 00:20:28.545 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 9e365b2d117e9287e188f05627558972 1 00:20:28.545 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 9e365b2d117e9287e188f05627558972 1 00:20:28.545 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:20:28.545 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:20:28.545 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=9e365b2d117e9287e188f05627558972 00:20:28.545 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=1 00:20:28.545 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:20:28.545 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.38e 00:20:28.545 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.38e 00:20:28.545 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.38e 00:20:28.545 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:20:28.545 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:20:28.545 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:28.545 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:20:28.545 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha512 00:20:28.545 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=64 00:20:28.545 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:28.545 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=17ada1ed5f4bd3d7cc520abc4ff58dbcfbe73cd94c201e09527f16bc06e0beda 00:20:28.545 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:20:28.545 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.IUF 00:20:28.545 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # 
format_dhchap_key 17ada1ed5f4bd3d7cc520abc4ff58dbcfbe73cd94c201e09527f16bc06e0beda 3 00:20:28.545 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 17ada1ed5f4bd3d7cc520abc4ff58dbcfbe73cd94c201e09527f16bc06e0beda 3 00:20:28.545 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:20:28.545 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:20:28.545 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=17ada1ed5f4bd3d7cc520abc4ff58dbcfbe73cd94c201e09527f16bc06e0beda 00:20:28.545 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=3 00:20:28.545 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:20:28.807 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.IUF 00:20:28.807 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.IUF 00:20:28.807 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.IUF 00:20:28.807 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:20:28.807 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 1854258 00:20:28.807 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1854258 ']' 00:20:28.807 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:28.807 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:28.807 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:28.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:28.807 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:28.807 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.807 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:28.807 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:20:28.807 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 1854513 /var/tmp/host.sock 00:20:28.807 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1854513 ']' 00:20:28.807 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:20:28.807 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:28.807 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:20:28.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
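From here on there are two daemons and two RPC sockets: the target inside the namespace answers on the default /var/tmp/spdk.sock, while the second spdk_tgt instance, playing the host, listens on /var/tmp/host.sock (which is what the script's hostrpc wrapper selects). Choosing the peer is just rpc.py's -s flag; a sketch using two calls that both appear in this trace:

    ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_get_transports         # target side
    ./scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers   # host side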
00:20:28.807 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:28.807 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.067 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:29.067 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:20:29.067 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:20:29.067 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.067 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.067 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.067 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:20:29.067 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.G6N 00:20:29.067 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.067 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.067 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.067 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.G6N 00:20:29.067 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.G6N 00:20:29.328 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.QiW ]] 00:20:29.328 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.QiW 00:20:29.328 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.328 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.328 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.328 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.QiW 00:20:29.328 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.QiW 00:20:29.328 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:20:29.328 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.crj 00:20:29.328 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.328 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.328 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.328 11:00:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.crj 00:20:29.328 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.crj 00:20:29.587 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.xsq ]] 00:20:29.587 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.xsq 00:20:29.587 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.587 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.587 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.587 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.xsq 00:20:29.587 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.xsq 00:20:29.848 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:20:29.848 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.JV9 00:20:29.848 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.848 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.848 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.848 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.JV9 00:20:29.848 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.JV9 00:20:29.848 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.38e ]] 00:20:29.848 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.38e 00:20:29.848 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.848 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.848 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.848 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.38e 00:20:29.848 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.38e 00:20:30.109 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:20:30.109 11:00:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.IUF 00:20:30.109 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.109 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.109 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.109 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.IUF 00:20:30.109 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.IUF 00:20:30.109 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:20:30.109 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:30.109 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:30.109 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:30.109 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:30.109 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:30.369 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:20:30.369 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:30.369 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:30.369 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:30.369 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:30.369 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:30.369 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:30.369 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.370 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.370 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.370 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:30.370 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:30.370 
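That completes one authenticated attach: the host's allowed digests and DH groups are pinned with bdev_nvme_set_options, the host NQN is registered on the subsystem together with its DH-CHAP key pair, and the controller is attached with the same pair so that bidirectional authentication can succeed. Condensed from the RPCs above (key0/ckey0 are the keyring names registered earlier; NVME_HOSTNQN as derived before):

    rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups null
    rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        "$NVME_HOSTNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0
    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$NVME_HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0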
11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:30.631 00:20:30.631 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:30.631 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:30.631 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:30.892 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.892 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:30.892 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.892 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.892 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.892 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:30.892 { 00:20:30.892 "cntlid": 1, 00:20:30.892 "qid": 0, 00:20:30.892 "state": "enabled", 00:20:30.892 "thread": "nvmf_tgt_poll_group_000", 00:20:30.892 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:30.892 "listen_address": { 00:20:30.892 "trtype": "TCP", 00:20:30.892 "adrfam": "IPv4", 00:20:30.892 "traddr": "10.0.0.2", 00:20:30.892 "trsvcid": "4420" 00:20:30.892 }, 00:20:30.892 "peer_address": { 00:20:30.892 "trtype": "TCP", 00:20:30.892 "adrfam": "IPv4", 00:20:30.892 "traddr": "10.0.0.1", 00:20:30.892 "trsvcid": "46450" 00:20:30.892 }, 00:20:30.892 "auth": { 00:20:30.892 "state": "completed", 00:20:30.892 "digest": "sha256", 00:20:30.892 "dhgroup": "null" 00:20:30.892 } 00:20:30.892 } 00:20:30.892 ]' 00:20:30.892 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:30.892 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:30.892 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:30.892 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:30.892 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:30.892 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:30.892 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:30.892 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:31.153 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:YTZhMDQzNTRmNzA5NDkxMzMzODY4ZmJiOWJjMWUxY2JlZDAyY2Q1ZWQ4MTRiYTQ1ZIPZQQ==: --dhchap-ctrl-secret DHHC-1:03:NjIxM2RmMDg5NjY1ZDYyNTY0M2Y1MThlNjQwYWQ4OTk5ODE1NGFjNDAyMmE0ZmM4MjZhY2UyODdjNjhkNzBkYYSweH0=: 00:20:31.154 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YTZhMDQzNTRmNzA5NDkxMzMzODY4ZmJiOWJjMWUxY2JlZDAyY2Q1ZWQ4MTRiYTQ1ZIPZQQ==: --dhchap-ctrl-secret DHHC-1:03:NjIxM2RmMDg5NjY1ZDYyNTY0M2Y1MThlNjQwYWQ4OTk5ODE1NGFjNDAyMmE0ZmM4MjZhY2UyODdjNjhkNzBkYYSweH0=: 00:20:32.094 11:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:32.094 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:32.094 11:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:32.094 11:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.094 11:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.094 11:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.094 11:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:32.094 11:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:32.094 11:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:32.094 11:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:20:32.094 11:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:32.094 11:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:32.094 11:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:32.094 11:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:32.094 11:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:32.094 11:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:32.094 11:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.094 11:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.094 11:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.094 11:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:32.094 11:00:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:32.094 11:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:32.355 00:20:32.355 11:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:32.355 11:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:32.355 11:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:32.355 11:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.355 11:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:32.355 11:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.355 11:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.355 11:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.355 11:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:32.355 { 00:20:32.355 "cntlid": 3, 00:20:32.355 "qid": 0, 00:20:32.355 "state": "enabled", 00:20:32.355 "thread": "nvmf_tgt_poll_group_000", 00:20:32.355 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:32.355 "listen_address": { 00:20:32.355 "trtype": "TCP", 00:20:32.355 "adrfam": "IPv4", 00:20:32.355 "traddr": "10.0.0.2", 00:20:32.355 "trsvcid": "4420" 00:20:32.355 }, 00:20:32.355 "peer_address": { 00:20:32.355 "trtype": "TCP", 00:20:32.355 "adrfam": "IPv4", 00:20:32.355 "traddr": "10.0.0.1", 00:20:32.355 "trsvcid": "46468" 00:20:32.355 }, 00:20:32.355 "auth": { 00:20:32.355 "state": "completed", 00:20:32.355 "digest": "sha256", 00:20:32.355 "dhgroup": "null" 00:20:32.355 } 00:20:32.355 } 00:20:32.355 ]' 00:20:32.355 11:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:32.615 11:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:32.616 11:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:32.616 11:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:32.616 11:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:32.616 11:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:32.616 11:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:32.616 11:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:32.877 11:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjRiMzhiNDJmMzE5Nzk4OTE3MTViYTljZDA1ZGVhNjLzz9/s: --dhchap-ctrl-secret DHHC-1:02:NTcxNmE2YzBkNGRkYjI0OTU0MGQxYjM2NDk2MjA5Njg1YTIwMjc4Mzg5YjVlNDc0PfSEjQ==: 00:20:32.877 11:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ZjRiMzhiNDJmMzE5Nzk4OTE3MTViYTljZDA1ZGVhNjLzz9/s: --dhchap-ctrl-secret DHHC-1:02:NTcxNmE2YzBkNGRkYjI0OTU0MGQxYjM2NDk2MjA5Njg1YTIwMjc4Mzg5YjVlNDc0PfSEjQ==: 00:20:33.447 11:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:33.447 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:33.447 11:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:33.447 11:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.447 11:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.447 11:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.447 11:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:33.447 11:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:33.447 11:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:33.707 11:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:20:33.707 11:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:33.707 11:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:33.707 11:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:33.707 11:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:33.707 11:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:33.707 11:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:33.707 11:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.707 11:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.707 11:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.707 11:00:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:33.707 11:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:33.708 11:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:33.968 00:20:33.968 11:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:33.968 11:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:33.968 11:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:34.229 11:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.229 11:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:34.229 11:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.229 11:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.229 11:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.229 11:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:34.229 { 00:20:34.229 "cntlid": 5, 00:20:34.229 "qid": 0, 00:20:34.229 "state": "enabled", 00:20:34.229 "thread": "nvmf_tgt_poll_group_000", 00:20:34.229 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:34.229 "listen_address": { 00:20:34.229 "trtype": "TCP", 00:20:34.229 "adrfam": "IPv4", 00:20:34.229 "traddr": "10.0.0.2", 00:20:34.229 "trsvcid": "4420" 00:20:34.229 }, 00:20:34.229 "peer_address": { 00:20:34.229 "trtype": "TCP", 00:20:34.229 "adrfam": "IPv4", 00:20:34.229 "traddr": "10.0.0.1", 00:20:34.229 "trsvcid": "46500" 00:20:34.229 }, 00:20:34.229 "auth": { 00:20:34.229 "state": "completed", 00:20:34.229 "digest": "sha256", 00:20:34.229 "dhgroup": "null" 00:20:34.229 } 00:20:34.229 } 00:20:34.229 ]' 00:20:34.229 11:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:34.229 11:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:34.229 11:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:34.229 11:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:34.229 11:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:34.229 11:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:34.229 11:00:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:34.229 11:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:34.490 11:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2JiNGE2ZTk1NGVhOTQ5ZjI4ZDRiZWI4NmNiNTQ0MGFhNTYyMWQ4YmU3NTBlZDJkyzgxOQ==: --dhchap-ctrl-secret DHHC-1:01:OWUzNjViMmQxMTdlOTI4N2UxODhmMDU2Mjc1NTg5NzLR97E+: 00:20:34.490 11:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:Y2JiNGE2ZTk1NGVhOTQ5ZjI4ZDRiZWI4NmNiNTQ0MGFhNTYyMWQ4YmU3NTBlZDJkyzgxOQ==: --dhchap-ctrl-secret DHHC-1:01:OWUzNjViMmQxMTdlOTI4N2UxODhmMDU2Mjc1NTg5NzLR97E+: 00:20:35.431 11:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:35.431 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:35.431 11:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:35.431 11:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.431 11:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.431 11:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.431 11:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:35.431 11:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:35.431 11:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:35.431 11:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:20:35.431 11:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:35.431 11:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:35.431 11:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:35.431 11:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:35.431 11:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:35.431 11:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:20:35.431 11:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.431 11:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:35.432 11:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.432 11:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:35.432 11:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:35.432 11:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:35.692 00:20:35.692 11:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:35.692 11:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:35.692 11:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:35.953 11:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.953 11:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:35.953 11:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.953 11:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.953 11:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.953 11:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:35.953 { 00:20:35.953 "cntlid": 7, 00:20:35.953 "qid": 0, 00:20:35.953 "state": "enabled", 00:20:35.953 "thread": "nvmf_tgt_poll_group_000", 00:20:35.953 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:35.953 "listen_address": { 00:20:35.953 "trtype": "TCP", 00:20:35.953 "adrfam": "IPv4", 00:20:35.953 "traddr": "10.0.0.2", 00:20:35.953 "trsvcid": "4420" 00:20:35.953 }, 00:20:35.953 "peer_address": { 00:20:35.953 "trtype": "TCP", 00:20:35.953 "adrfam": "IPv4", 00:20:35.953 "traddr": "10.0.0.1", 00:20:35.953 "trsvcid": "46522" 00:20:35.953 }, 00:20:35.953 "auth": { 00:20:35.953 "state": "completed", 00:20:35.953 "digest": "sha256", 00:20:35.953 "dhgroup": "null" 00:20:35.953 } 00:20:35.953 } 00:20:35.953 ]' 00:20:35.953 11:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:35.953 11:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:35.953 11:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:35.953 11:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:35.953 11:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:35.953 11:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:35.953 11:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:35.953 11:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:36.214 11:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTdhZGExZWQ1ZjRiZDNkN2NjNTIwYWJjNGZmNThkYmNmYmU3M2NkOTRjMjAxZTA5NTI3ZjE2YmMwNmUwYmVkYYD9mts=: 00:20:36.214 11:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MTdhZGExZWQ1ZjRiZDNkN2NjNTIwYWJjNGZmNThkYmNmYmU3M2NkOTRjMjAxZTA5NTI3ZjE2YmMwNmUwYmVkYYD9mts=: 00:20:37.155 11:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:37.155 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:37.155 11:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:37.155 11:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.155 11:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.155 11:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.155 11:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:37.155 11:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:37.155 11:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:37.155 11:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:37.155 11:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:20:37.155 11:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:37.155 11:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:37.155 11:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:37.155 11:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:37.155 11:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:37.155 11:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:37.155 11:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.155 11:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.155 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.155 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:37.155 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:37.155 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:37.416 00:20:37.416 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:37.416 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:37.416 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:37.677 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.677 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:37.677 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.677 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.677 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.677 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:37.677 { 00:20:37.677 "cntlid": 9, 00:20:37.677 "qid": 0, 00:20:37.677 "state": "enabled", 00:20:37.677 "thread": "nvmf_tgt_poll_group_000", 00:20:37.677 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:37.677 "listen_address": { 00:20:37.677 "trtype": "TCP", 00:20:37.677 "adrfam": "IPv4", 00:20:37.677 "traddr": "10.0.0.2", 00:20:37.677 "trsvcid": "4420" 00:20:37.677 }, 00:20:37.677 "peer_address": { 00:20:37.677 "trtype": "TCP", 00:20:37.677 "adrfam": "IPv4", 00:20:37.677 "traddr": "10.0.0.1", 00:20:37.677 "trsvcid": "55472" 00:20:37.677 }, 00:20:37.677 "auth": { 00:20:37.677 "state": "completed", 00:20:37.677 "digest": "sha256", 00:20:37.677 "dhgroup": "ffdhe2048" 00:20:37.677 } 00:20:37.677 } 00:20:37.677 ]' 00:20:37.677 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:37.677 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:37.677 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:37.677 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:20:37.677 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:37.677 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:37.677 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:37.677 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:37.938 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTZhMDQzNTRmNzA5NDkxMzMzODY4ZmJiOWJjMWUxY2JlZDAyY2Q1ZWQ4MTRiYTQ1ZIPZQQ==: --dhchap-ctrl-secret DHHC-1:03:NjIxM2RmMDg5NjY1ZDYyNTY0M2Y1MThlNjQwYWQ4OTk5ODE1NGFjNDAyMmE0ZmM4MjZhY2UyODdjNjhkNzBkYYSweH0=: 00:20:37.938 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YTZhMDQzNTRmNzA5NDkxMzMzODY4ZmJiOWJjMWUxY2JlZDAyY2Q1ZWQ4MTRiYTQ1ZIPZQQ==: --dhchap-ctrl-secret DHHC-1:03:NjIxM2RmMDg5NjY1ZDYyNTY0M2Y1MThlNjQwYWQ4OTk5ODE1NGFjNDAyMmE0ZmM4MjZhY2UyODdjNjhkNzBkYYSweH0=: 00:20:38.878 11:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:38.878 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:38.878 11:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:38.878 11:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.878 11:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.878 11:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.878 11:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:38.878 11:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:38.878 11:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:38.878 11:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:20:38.878 11:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:38.878 11:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:38.878 11:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:38.878 11:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:38.878 11:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:38.878 11:00:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:38.878 11:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.878 11:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.878 11:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.878 11:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:38.878 11:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:38.878 11:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:39.138 00:20:39.138 11:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:39.138 11:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:39.138 11:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:39.138 11:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.138 11:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:39.138 11:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.138 11:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.398 11:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.398 11:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:39.398 { 00:20:39.398 "cntlid": 11, 00:20:39.398 "qid": 0, 00:20:39.398 "state": "enabled", 00:20:39.398 "thread": "nvmf_tgt_poll_group_000", 00:20:39.398 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:39.398 "listen_address": { 00:20:39.398 "trtype": "TCP", 00:20:39.398 "adrfam": "IPv4", 00:20:39.398 "traddr": "10.0.0.2", 00:20:39.398 "trsvcid": "4420" 00:20:39.398 }, 00:20:39.398 "peer_address": { 00:20:39.398 "trtype": "TCP", 00:20:39.398 "adrfam": "IPv4", 00:20:39.398 "traddr": "10.0.0.1", 00:20:39.398 "trsvcid": "55500" 00:20:39.398 }, 00:20:39.398 "auth": { 00:20:39.398 "state": "completed", 00:20:39.398 "digest": "sha256", 00:20:39.398 "dhgroup": "ffdhe2048" 00:20:39.398 } 00:20:39.398 } 00:20:39.398 ]' 00:20:39.398 11:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:39.398 11:00:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:39.398 11:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:39.398 11:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:39.398 11:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:39.398 11:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:39.398 11:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:39.399 11:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:39.659 11:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjRiMzhiNDJmMzE5Nzk4OTE3MTViYTljZDA1ZGVhNjLzz9/s: --dhchap-ctrl-secret DHHC-1:02:NTcxNmE2YzBkNGRkYjI0OTU0MGQxYjM2NDk2MjA5Njg1YTIwMjc4Mzg5YjVlNDc0PfSEjQ==: 00:20:39.659 11:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ZjRiMzhiNDJmMzE5Nzk4OTE3MTViYTljZDA1ZGVhNjLzz9/s: --dhchap-ctrl-secret DHHC-1:02:NTcxNmE2YzBkNGRkYjI0OTU0MGQxYjM2NDk2MjA5Njg1YTIwMjc4Mzg5YjVlNDc0PfSEjQ==: 00:20:40.229 11:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:40.489 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:40.489 11:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:40.489 11:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.489 11:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.489 11:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.489 11:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:40.489 11:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:40.490 11:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:40.490 11:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:20:40.490 11:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:40.490 11:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:40.490 11:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:40.490 11:01:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:40.490 11:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:40.490 11:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:40.490 11:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.490 11:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.490 11:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.490 11:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:40.490 11:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:40.490 11:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:40.750 00:20:40.750 11:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:40.750 11:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:40.750 11:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:41.011 11:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.011 11:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:41.011 11:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.011 11:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.011 11:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.011 11:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:41.011 { 00:20:41.011 "cntlid": 13, 00:20:41.011 "qid": 0, 00:20:41.011 "state": "enabled", 00:20:41.011 "thread": "nvmf_tgt_poll_group_000", 00:20:41.011 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:41.011 "listen_address": { 00:20:41.011 "trtype": "TCP", 00:20:41.011 "adrfam": "IPv4", 00:20:41.011 "traddr": "10.0.0.2", 00:20:41.011 "trsvcid": "4420" 00:20:41.011 }, 00:20:41.011 "peer_address": { 00:20:41.011 "trtype": "TCP", 00:20:41.011 "adrfam": "IPv4", 00:20:41.011 "traddr": "10.0.0.1", 00:20:41.011 "trsvcid": "55536" 00:20:41.011 }, 00:20:41.011 "auth": { 00:20:41.011 "state": "completed", 00:20:41.011 "digest": 
"sha256", 00:20:41.011 "dhgroup": "ffdhe2048" 00:20:41.011 } 00:20:41.011 } 00:20:41.011 ]' 00:20:41.011 11:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:41.011 11:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:41.011 11:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:41.011 11:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:41.011 11:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:41.273 11:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:41.273 11:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:41.273 11:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:41.273 11:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2JiNGE2ZTk1NGVhOTQ5ZjI4ZDRiZWI4NmNiNTQ0MGFhNTYyMWQ4YmU3NTBlZDJkyzgxOQ==: --dhchap-ctrl-secret DHHC-1:01:OWUzNjViMmQxMTdlOTI4N2UxODhmMDU2Mjc1NTg5NzLR97E+: 00:20:41.273 11:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:Y2JiNGE2ZTk1NGVhOTQ5ZjI4ZDRiZWI4NmNiNTQ0MGFhNTYyMWQ4YmU3NTBlZDJkyzgxOQ==: --dhchap-ctrl-secret DHHC-1:01:OWUzNjViMmQxMTdlOTI4N2UxODhmMDU2Mjc1NTg5NzLR97E+: 00:20:42.215 11:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:42.215 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:42.215 11:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:42.215 11:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.215 11:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.215 11:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.215 11:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:42.215 11:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:42.215 11:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:42.215 11:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:20:42.215 11:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:42.215 11:01:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:42.215 11:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:42.215 11:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:42.215 11:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:42.215 11:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:20:42.215 11:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.215 11:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.215 11:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.215 11:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:42.215 11:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:42.215 11:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:42.477 00:20:42.477 11:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:42.477 11:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:42.477 11:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:42.739 11:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.739 11:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:42.739 11:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.739 11:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.739 11:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.739 11:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:42.739 { 00:20:42.739 "cntlid": 15, 00:20:42.739 "qid": 0, 00:20:42.739 "state": "enabled", 00:20:42.739 "thread": "nvmf_tgt_poll_group_000", 00:20:42.739 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:42.739 "listen_address": { 00:20:42.739 "trtype": "TCP", 00:20:42.739 "adrfam": "IPv4", 00:20:42.739 "traddr": "10.0.0.2", 00:20:42.739 "trsvcid": "4420" 00:20:42.739 }, 00:20:42.739 "peer_address": { 00:20:42.739 "trtype": "TCP", 00:20:42.739 "adrfam": "IPv4", 00:20:42.739 "traddr": "10.0.0.1", 00:20:42.739 
"trsvcid": "55556" 00:20:42.739 }, 00:20:42.739 "auth": { 00:20:42.739 "state": "completed", 00:20:42.739 "digest": "sha256", 00:20:42.739 "dhgroup": "ffdhe2048" 00:20:42.739 } 00:20:42.739 } 00:20:42.739 ]' 00:20:42.739 11:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:42.739 11:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:42.739 11:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:42.739 11:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:42.739 11:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:42.739 11:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:42.739 11:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:42.739 11:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:43.001 11:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTdhZGExZWQ1ZjRiZDNkN2NjNTIwYWJjNGZmNThkYmNmYmU3M2NkOTRjMjAxZTA5NTI3ZjE2YmMwNmUwYmVkYYD9mts=: 00:20:43.001 11:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MTdhZGExZWQ1ZjRiZDNkN2NjNTIwYWJjNGZmNThkYmNmYmU3M2NkOTRjMjAxZTA5NTI3ZjE2YmMwNmUwYmVkYYD9mts=: 00:20:43.942 11:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:43.942 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:43.942 11:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:43.942 11:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.942 11:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.942 11:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.942 11:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:43.942 11:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:43.942 11:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:43.942 11:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:43.942 11:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:20:43.942 11:01:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:43.942 11:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:43.942 11:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:43.942 11:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:43.942 11:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:43.942 11:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:43.942 11:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.942 11:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.942 11:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.942 11:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:43.942 11:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:43.942 11:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:44.203 00:20:44.203 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:44.203 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:44.203 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:44.464 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.464 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:44.464 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.464 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.464 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.464 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:44.464 { 00:20:44.464 "cntlid": 17, 00:20:44.464 "qid": 0, 00:20:44.464 "state": "enabled", 00:20:44.464 "thread": "nvmf_tgt_poll_group_000", 00:20:44.464 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:44.464 "listen_address": { 00:20:44.464 "trtype": "TCP", 00:20:44.464 "adrfam": "IPv4", 
00:20:44.464 "traddr": "10.0.0.2", 00:20:44.464 "trsvcid": "4420" 00:20:44.464 }, 00:20:44.464 "peer_address": { 00:20:44.464 "trtype": "TCP", 00:20:44.464 "adrfam": "IPv4", 00:20:44.464 "traddr": "10.0.0.1", 00:20:44.464 "trsvcid": "55584" 00:20:44.464 }, 00:20:44.464 "auth": { 00:20:44.464 "state": "completed", 00:20:44.464 "digest": "sha256", 00:20:44.464 "dhgroup": "ffdhe3072" 00:20:44.464 } 00:20:44.464 } 00:20:44.464 ]' 00:20:44.464 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:44.464 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:44.464 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:44.464 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:44.464 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:44.464 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:44.464 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:44.464 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:44.725 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTZhMDQzNTRmNzA5NDkxMzMzODY4ZmJiOWJjMWUxY2JlZDAyY2Q1ZWQ4MTRiYTQ1ZIPZQQ==: --dhchap-ctrl-secret DHHC-1:03:NjIxM2RmMDg5NjY1ZDYyNTY0M2Y1MThlNjQwYWQ4OTk5ODE1NGFjNDAyMmE0ZmM4MjZhY2UyODdjNjhkNzBkYYSweH0=: 00:20:44.725 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YTZhMDQzNTRmNzA5NDkxMzMzODY4ZmJiOWJjMWUxY2JlZDAyY2Q1ZWQ4MTRiYTQ1ZIPZQQ==: --dhchap-ctrl-secret DHHC-1:03:NjIxM2RmMDg5NjY1ZDYyNTY0M2Y1MThlNjQwYWQ4OTk5ODE1NGFjNDAyMmE0ZmM4MjZhY2UyODdjNjhkNzBkYYSweH0=: 00:20:45.296 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:45.558 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:45.558 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:45.558 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.558 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.558 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.558 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:45.558 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:45.558 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:45.558 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:20:45.558 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:45.558 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:45.558 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:45.558 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:45.558 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:45.558 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:45.558 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.558 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.558 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.558 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:45.558 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:45.558 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:45.819 00:20:45.819 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:45.819 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:45.819 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:46.080 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.080 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:46.080 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.080 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.080 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.081 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:46.081 { 
00:20:46.081 "cntlid": 19, 00:20:46.081 "qid": 0, 00:20:46.081 "state": "enabled", 00:20:46.081 "thread": "nvmf_tgt_poll_group_000", 00:20:46.081 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:46.081 "listen_address": { 00:20:46.081 "trtype": "TCP", 00:20:46.081 "adrfam": "IPv4", 00:20:46.081 "traddr": "10.0.0.2", 00:20:46.081 "trsvcid": "4420" 00:20:46.081 }, 00:20:46.081 "peer_address": { 00:20:46.081 "trtype": "TCP", 00:20:46.081 "adrfam": "IPv4", 00:20:46.081 "traddr": "10.0.0.1", 00:20:46.081 "trsvcid": "55608" 00:20:46.081 }, 00:20:46.081 "auth": { 00:20:46.081 "state": "completed", 00:20:46.081 "digest": "sha256", 00:20:46.081 "dhgroup": "ffdhe3072" 00:20:46.081 } 00:20:46.081 } 00:20:46.081 ]' 00:20:46.081 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:46.081 11:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:46.081 11:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:46.081 11:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:46.081 11:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:46.341 11:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:46.341 11:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:46.341 11:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:46.341 11:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjRiMzhiNDJmMzE5Nzk4OTE3MTViYTljZDA1ZGVhNjLzz9/s: --dhchap-ctrl-secret DHHC-1:02:NTcxNmE2YzBkNGRkYjI0OTU0MGQxYjM2NDk2MjA5Njg1YTIwMjc4Mzg5YjVlNDc0PfSEjQ==: 00:20:46.341 11:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ZjRiMzhiNDJmMzE5Nzk4OTE3MTViYTljZDA1ZGVhNjLzz9/s: --dhchap-ctrl-secret DHHC-1:02:NTcxNmE2YzBkNGRkYjI0OTU0MGQxYjM2NDk2MjA5Njg1YTIwMjc4Mzg5YjVlNDc0PfSEjQ==: 00:20:47.282 11:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:47.282 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:47.282 11:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:47.282 11:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.282 11:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.282 11:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.282 11:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:47.282 11:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:47.282 11:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:47.282 11:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:20:47.282 11:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:47.282 11:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:47.282 11:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:47.282 11:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:47.282 11:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:47.282 11:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:47.282 11:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.282 11:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.282 11:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.282 11:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:47.282 11:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:47.282 11:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:47.542 00:20:47.542 11:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:47.542 11:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:47.542 11:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:47.802 11:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.802 11:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:47.803 11:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.803 11:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.803 11:01:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.803 11:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:47.803 { 00:20:47.803 "cntlid": 21, 00:20:47.803 "qid": 0, 00:20:47.803 "state": "enabled", 00:20:47.803 "thread": "nvmf_tgt_poll_group_000", 00:20:47.803 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:47.803 "listen_address": { 00:20:47.803 "trtype": "TCP", 00:20:47.803 "adrfam": "IPv4", 00:20:47.803 "traddr": "10.0.0.2", 00:20:47.803 "trsvcid": "4420" 00:20:47.803 }, 00:20:47.803 "peer_address": { 00:20:47.803 "trtype": "TCP", 00:20:47.803 "adrfam": "IPv4", 00:20:47.803 "traddr": "10.0.0.1", 00:20:47.803 "trsvcid": "39996" 00:20:47.803 }, 00:20:47.803 "auth": { 00:20:47.803 "state": "completed", 00:20:47.803 "digest": "sha256", 00:20:47.803 "dhgroup": "ffdhe3072" 00:20:47.803 } 00:20:47.803 } 00:20:47.803 ]' 00:20:47.803 11:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:47.803 11:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:47.803 11:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:47.803 11:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:47.803 11:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:48.063 11:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:48.063 11:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:48.063 11:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:48.063 11:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2JiNGE2ZTk1NGVhOTQ5ZjI4ZDRiZWI4NmNiNTQ0MGFhNTYyMWQ4YmU3NTBlZDJkyzgxOQ==: --dhchap-ctrl-secret DHHC-1:01:OWUzNjViMmQxMTdlOTI4N2UxODhmMDU2Mjc1NTg5NzLR97E+: 00:20:48.063 11:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:Y2JiNGE2ZTk1NGVhOTQ5ZjI4ZDRiZWI4NmNiNTQ0MGFhNTYyMWQ4YmU3NTBlZDJkyzgxOQ==: --dhchap-ctrl-secret DHHC-1:01:OWUzNjViMmQxMTdlOTI4N2UxODhmMDU2Mjc1NTg5NzLR97E+: 00:20:49.003 11:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:49.003 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:49.003 11:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:49.003 11:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.003 11:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.003 11:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:20:49.003 11:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:49.003 11:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:49.003 11:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:49.003 11:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:20:49.003 11:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:49.003 11:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:49.003 11:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:49.003 11:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:49.003 11:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:49.003 11:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:20:49.003 11:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.003 11:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.003 11:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.003 11:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:49.003 11:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:49.003 11:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:49.265 00:20:49.265 11:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:49.265 11:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:49.265 11:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:49.525 11:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.526 11:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:49.526 11:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.526 11:01:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.526 11:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.526 11:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:49.526 { 00:20:49.526 "cntlid": 23, 00:20:49.526 "qid": 0, 00:20:49.526 "state": "enabled", 00:20:49.526 "thread": "nvmf_tgt_poll_group_000", 00:20:49.526 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:49.526 "listen_address": { 00:20:49.526 "trtype": "TCP", 00:20:49.526 "adrfam": "IPv4", 00:20:49.526 "traddr": "10.0.0.2", 00:20:49.526 "trsvcid": "4420" 00:20:49.526 }, 00:20:49.526 "peer_address": { 00:20:49.526 "trtype": "TCP", 00:20:49.526 "adrfam": "IPv4", 00:20:49.526 "traddr": "10.0.0.1", 00:20:49.526 "trsvcid": "40030" 00:20:49.526 }, 00:20:49.526 "auth": { 00:20:49.526 "state": "completed", 00:20:49.526 "digest": "sha256", 00:20:49.526 "dhgroup": "ffdhe3072" 00:20:49.526 } 00:20:49.526 } 00:20:49.526 ]' 00:20:49.526 11:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:49.526 11:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:49.526 11:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:49.526 11:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:49.526 11:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:49.786 11:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:49.786 11:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:49.786 11:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:49.786 11:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTdhZGExZWQ1ZjRiZDNkN2NjNTIwYWJjNGZmNThkYmNmYmU3M2NkOTRjMjAxZTA5NTI3ZjE2YmMwNmUwYmVkYYD9mts=: 00:20:49.786 11:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MTdhZGExZWQ1ZjRiZDNkN2NjNTIwYWJjNGZmNThkYmNmYmU3M2NkOTRjMjAxZTA5NTI3ZjE2YmMwNmUwYmVkYYD9mts=: 00:20:50.727 11:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:50.727 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:50.727 11:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:50.727 11:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.727 11:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.727 11:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:20:50.727 11:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:50.727 11:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:50.727 11:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:50.727 11:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:50.727 11:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:20:50.727 11:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:50.727 11:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:50.727 11:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:50.727 11:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:50.728 11:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:50.728 11:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:50.728 11:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.728 11:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.728 11:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.728 11:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:50.728 11:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:50.728 11:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:50.988 00:20:50.988 11:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:50.988 11:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:50.988 11:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:51.260 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.260 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:51.260 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.260 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.260 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.260 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:51.260 { 00:20:51.260 "cntlid": 25, 00:20:51.260 "qid": 0, 00:20:51.260 "state": "enabled", 00:20:51.260 "thread": "nvmf_tgt_poll_group_000", 00:20:51.260 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:51.260 "listen_address": { 00:20:51.260 "trtype": "TCP", 00:20:51.260 "adrfam": "IPv4", 00:20:51.260 "traddr": "10.0.0.2", 00:20:51.260 "trsvcid": "4420" 00:20:51.260 }, 00:20:51.260 "peer_address": { 00:20:51.260 "trtype": "TCP", 00:20:51.260 "adrfam": "IPv4", 00:20:51.260 "traddr": "10.0.0.1", 00:20:51.260 "trsvcid": "40052" 00:20:51.260 }, 00:20:51.260 "auth": { 00:20:51.260 "state": "completed", 00:20:51.260 "digest": "sha256", 00:20:51.260 "dhgroup": "ffdhe4096" 00:20:51.260 } 00:20:51.260 } 00:20:51.260 ]' 00:20:51.260 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:51.260 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:51.260 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:51.260 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:51.260 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:51.260 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:51.260 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:51.260 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:51.520 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTZhMDQzNTRmNzA5NDkxMzMzODY4ZmJiOWJjMWUxY2JlZDAyY2Q1ZWQ4MTRiYTQ1ZIPZQQ==: --dhchap-ctrl-secret DHHC-1:03:NjIxM2RmMDg5NjY1ZDYyNTY0M2Y1MThlNjQwYWQ4OTk5ODE1NGFjNDAyMmE0ZmM4MjZhY2UyODdjNjhkNzBkYYSweH0=: 00:20:51.520 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YTZhMDQzNTRmNzA5NDkxMzMzODY4ZmJiOWJjMWUxY2JlZDAyY2Q1ZWQ4MTRiYTQ1ZIPZQQ==: --dhchap-ctrl-secret DHHC-1:03:NjIxM2RmMDg5NjY1ZDYyNTY0M2Y1MThlNjQwYWQ4OTk5ODE1NGFjNDAyMmE0ZmM4MjZhY2UyODdjNjhkNzBkYYSweH0=: 00:20:52.461 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:52.461 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:52.461 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:52.461 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.461 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.461 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.461 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:52.461 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:52.461 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:52.461 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:20:52.461 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:52.461 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:52.461 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:52.461 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:52.461 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:52.461 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:52.461 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.461 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.721 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.721 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:52.721 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:52.721 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:52.721 00:20:52.983 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:52.983 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:52.983 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:52.983 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.983 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:52.983 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.983 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.983 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.983 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:52.983 { 00:20:52.983 "cntlid": 27, 00:20:52.983 "qid": 0, 00:20:52.983 "state": "enabled", 00:20:52.983 "thread": "nvmf_tgt_poll_group_000", 00:20:52.983 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:52.983 "listen_address": { 00:20:52.983 "trtype": "TCP", 00:20:52.983 "adrfam": "IPv4", 00:20:52.983 "traddr": "10.0.0.2", 00:20:52.983 "trsvcid": "4420" 00:20:52.983 }, 00:20:52.983 "peer_address": { 00:20:52.983 "trtype": "TCP", 00:20:52.983 "adrfam": "IPv4", 00:20:52.983 "traddr": "10.0.0.1", 00:20:52.983 "trsvcid": "40094" 00:20:52.983 }, 00:20:52.983 "auth": { 00:20:52.983 "state": "completed", 00:20:52.983 "digest": "sha256", 00:20:52.983 "dhgroup": "ffdhe4096" 00:20:52.983 } 00:20:52.983 } 00:20:52.983 ]' 00:20:52.983 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:52.983 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:52.983 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:53.245 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:53.245 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:53.245 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:53.245 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:53.245 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:53.245 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjRiMzhiNDJmMzE5Nzk4OTE3MTViYTljZDA1ZGVhNjLzz9/s: --dhchap-ctrl-secret DHHC-1:02:NTcxNmE2YzBkNGRkYjI0OTU0MGQxYjM2NDk2MjA5Njg1YTIwMjc4Mzg5YjVlNDc0PfSEjQ==: 00:20:53.245 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ZjRiMzhiNDJmMzE5Nzk4OTE3MTViYTljZDA1ZGVhNjLzz9/s: --dhchap-ctrl-secret DHHC-1:02:NTcxNmE2YzBkNGRkYjI0OTU0MGQxYjM2NDk2MjA5Njg1YTIwMjc4Mzg5YjVlNDc0PfSEjQ==: 00:20:54.186 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:20:54.186 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:54.186 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:54.186 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.186 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.186 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.186 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:54.186 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:54.186 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:54.447 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:20:54.447 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:54.447 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:54.447 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:54.447 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:54.447 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:54.447 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:54.447 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.447 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.447 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.447 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:54.447 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:54.447 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:54.708 00:20:54.708 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
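At this point the harness has attached with key2 over sha256/ffdhe4096 and is about to verify the controller and qpair state. Condensed into one place, the per-key cycle it keeps repeating is sketched below as a reading aid; every command is assembled from the surrounding log output (the full rpc.py path is shortened to rpc.py, and <hostnqn> / <DHHC-1 secret> are placeholders for the long values shown verbatim elsewhere in this log), not a definitive copy of target/auth.sh:

  # host side: restrict negotiation to the digest/dhgroup pair under test
  rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
  # target side: register the host with its DH-HMAC-CHAP key (plus a controller key when ckeyN exists)
  rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 <hostnqn> --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # attach a controller through the host RPC socket, then verify the qpair actually authenticated
  rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q <hostnqn> -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
  rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'   # expect: completed
  # tear down, re-check the same key with the kernel initiator, then deregister the host
  rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -q <hostnqn> --dhchap-secret <DHHC-1 secret> ...
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 <hostnqn>

The log continues below with exactly this pattern for the remaining keys and dhgroups (key3 omits --dhchap-ctrlr-key because no ckey3 was generated, matching the ${ckeys[$3]:+...} expansion visible in the trace).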
00:20:54.708 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:54.708 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:54.708 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.708 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:54.708 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.708 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.708 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.708 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:54.708 { 00:20:54.708 "cntlid": 29, 00:20:54.708 "qid": 0, 00:20:54.708 "state": "enabled", 00:20:54.708 "thread": "nvmf_tgt_poll_group_000", 00:20:54.708 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:54.708 "listen_address": { 00:20:54.708 "trtype": "TCP", 00:20:54.708 "adrfam": "IPv4", 00:20:54.708 "traddr": "10.0.0.2", 00:20:54.708 "trsvcid": "4420" 00:20:54.708 }, 00:20:54.708 "peer_address": { 00:20:54.708 "trtype": "TCP", 00:20:54.708 "adrfam": "IPv4", 00:20:54.708 "traddr": "10.0.0.1", 00:20:54.708 "trsvcid": "40118" 00:20:54.708 }, 00:20:54.708 "auth": { 00:20:54.708 "state": "completed", 00:20:54.708 "digest": "sha256", 00:20:54.708 "dhgroup": "ffdhe4096" 00:20:54.708 } 00:20:54.708 } 00:20:54.708 ]' 00:20:54.708 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:54.969 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:54.969 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:54.969 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:54.969 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:54.969 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:54.969 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:54.969 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:55.230 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2JiNGE2ZTk1NGVhOTQ5ZjI4ZDRiZWI4NmNiNTQ0MGFhNTYyMWQ4YmU3NTBlZDJkyzgxOQ==: --dhchap-ctrl-secret DHHC-1:01:OWUzNjViMmQxMTdlOTI4N2UxODhmMDU2Mjc1NTg5NzLR97E+: 00:20:55.230 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:Y2JiNGE2ZTk1NGVhOTQ5ZjI4ZDRiZWI4NmNiNTQ0MGFhNTYyMWQ4YmU3NTBlZDJkyzgxOQ==: 
--dhchap-ctrl-secret DHHC-1:01:OWUzNjViMmQxMTdlOTI4N2UxODhmMDU2Mjc1NTg5NzLR97E+: 00:20:55.800 11:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:55.800 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:55.800 11:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:55.800 11:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.800 11:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.800 11:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.800 11:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:55.800 11:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:55.800 11:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:56.064 11:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:20:56.064 11:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:56.064 11:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:56.064 11:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:56.064 11:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:56.064 11:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:56.064 11:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:20:56.064 11:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.064 11:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.064 11:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.064 11:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:56.065 11:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:56.065 11:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:56.328 00:20:56.328 11:01:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:56.328 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:56.328 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:56.588 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.588 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:56.588 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.588 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.588 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.588 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:56.588 { 00:20:56.588 "cntlid": 31, 00:20:56.588 "qid": 0, 00:20:56.588 "state": "enabled", 00:20:56.588 "thread": "nvmf_tgt_poll_group_000", 00:20:56.588 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:56.588 "listen_address": { 00:20:56.588 "trtype": "TCP", 00:20:56.588 "adrfam": "IPv4", 00:20:56.588 "traddr": "10.0.0.2", 00:20:56.588 "trsvcid": "4420" 00:20:56.588 }, 00:20:56.588 "peer_address": { 00:20:56.588 "trtype": "TCP", 00:20:56.588 "adrfam": "IPv4", 00:20:56.588 "traddr": "10.0.0.1", 00:20:56.588 "trsvcid": "33672" 00:20:56.588 }, 00:20:56.588 "auth": { 00:20:56.588 "state": "completed", 00:20:56.588 "digest": "sha256", 00:20:56.588 "dhgroup": "ffdhe4096" 00:20:56.588 } 00:20:56.588 } 00:20:56.588 ]' 00:20:56.588 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:56.588 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:56.588 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:56.588 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:56.588 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:56.588 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:56.588 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:56.589 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:56.849 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTdhZGExZWQ1ZjRiZDNkN2NjNTIwYWJjNGZmNThkYmNmYmU3M2NkOTRjMjAxZTA5NTI3ZjE2YmMwNmUwYmVkYYD9mts=: 00:20:56.849 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret 
DHHC-1:03:MTdhZGExZWQ1ZjRiZDNkN2NjNTIwYWJjNGZmNThkYmNmYmU3M2NkOTRjMjAxZTA5NTI3ZjE2YmMwNmUwYmVkYYD9mts=: 00:20:57.789 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:57.789 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:57.789 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:57.789 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.789 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.789 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.789 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:57.789 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:57.789 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:57.790 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:57.790 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:20:57.790 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:57.790 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:57.790 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:57.790 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:57.790 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:57.790 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:57.790 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.790 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.790 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.790 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:57.790 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:57.790 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:58.050 00:20:58.310 11:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:58.310 11:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:58.310 11:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:58.310 11:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.310 11:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:58.310 11:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.310 11:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.310 11:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.310 11:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:58.310 { 00:20:58.310 "cntlid": 33, 00:20:58.310 "qid": 0, 00:20:58.310 "state": "enabled", 00:20:58.310 "thread": "nvmf_tgt_poll_group_000", 00:20:58.311 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:58.311 "listen_address": { 00:20:58.311 "trtype": "TCP", 00:20:58.311 "adrfam": "IPv4", 00:20:58.311 "traddr": "10.0.0.2", 00:20:58.311 "trsvcid": "4420" 00:20:58.311 }, 00:20:58.311 "peer_address": { 00:20:58.311 "trtype": "TCP", 00:20:58.311 "adrfam": "IPv4", 00:20:58.311 "traddr": "10.0.0.1", 00:20:58.311 "trsvcid": "33684" 00:20:58.311 }, 00:20:58.311 "auth": { 00:20:58.311 "state": "completed", 00:20:58.311 "digest": "sha256", 00:20:58.311 "dhgroup": "ffdhe6144" 00:20:58.311 } 00:20:58.311 } 00:20:58.311 ]' 00:20:58.311 11:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:58.311 11:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:58.311 11:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:58.571 11:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:58.571 11:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:58.571 11:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:58.571 11:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:58.571 11:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:58.571 11:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTZhMDQzNTRmNzA5NDkxMzMzODY4ZmJiOWJjMWUxY2JlZDAyY2Q1ZWQ4MTRiYTQ1ZIPZQQ==: --dhchap-ctrl-secret 
DHHC-1:03:NjIxM2RmMDg5NjY1ZDYyNTY0M2Y1MThlNjQwYWQ4OTk5ODE1NGFjNDAyMmE0ZmM4MjZhY2UyODdjNjhkNzBkYYSweH0=: 00:20:58.572 11:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YTZhMDQzNTRmNzA5NDkxMzMzODY4ZmJiOWJjMWUxY2JlZDAyY2Q1ZWQ4MTRiYTQ1ZIPZQQ==: --dhchap-ctrl-secret DHHC-1:03:NjIxM2RmMDg5NjY1ZDYyNTY0M2Y1MThlNjQwYWQ4OTk5ODE1NGFjNDAyMmE0ZmM4MjZhY2UyODdjNjhkNzBkYYSweH0=: 00:20:59.514 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:59.514 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:59.514 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:59.514 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.514 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.514 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.514 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:59.514 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:59.514 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:59.514 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:20:59.514 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:59.514 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:59.514 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:59.514 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:59.514 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:59.515 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:59.515 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.515 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.775 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.775 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:59.775 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:59.775 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:00.036 00:21:00.036 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:00.036 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:00.036 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:00.296 11:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.296 11:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:00.296 11:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.296 11:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.296 11:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.296 11:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:00.296 { 00:21:00.296 "cntlid": 35, 00:21:00.296 "qid": 0, 00:21:00.296 "state": "enabled", 00:21:00.296 "thread": "nvmf_tgt_poll_group_000", 00:21:00.296 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:00.296 "listen_address": { 00:21:00.296 "trtype": "TCP", 00:21:00.296 "adrfam": "IPv4", 00:21:00.296 "traddr": "10.0.0.2", 00:21:00.296 "trsvcid": "4420" 00:21:00.296 }, 00:21:00.296 "peer_address": { 00:21:00.296 "trtype": "TCP", 00:21:00.296 "adrfam": "IPv4", 00:21:00.296 "traddr": "10.0.0.1", 00:21:00.296 "trsvcid": "33704" 00:21:00.296 }, 00:21:00.296 "auth": { 00:21:00.296 "state": "completed", 00:21:00.296 "digest": "sha256", 00:21:00.296 "dhgroup": "ffdhe6144" 00:21:00.296 } 00:21:00.296 } 00:21:00.296 ]' 00:21:00.296 11:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:00.296 11:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:00.296 11:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:00.296 11:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:00.296 11:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:00.296 11:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:00.296 11:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:00.296 11:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:00.557 11:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjRiMzhiNDJmMzE5Nzk4OTE3MTViYTljZDA1ZGVhNjLzz9/s: --dhchap-ctrl-secret DHHC-1:02:NTcxNmE2YzBkNGRkYjI0OTU0MGQxYjM2NDk2MjA5Njg1YTIwMjc4Mzg5YjVlNDc0PfSEjQ==: 00:21:00.557 11:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ZjRiMzhiNDJmMzE5Nzk4OTE3MTViYTljZDA1ZGVhNjLzz9/s: --dhchap-ctrl-secret DHHC-1:02:NTcxNmE2YzBkNGRkYjI0OTU0MGQxYjM2NDk2MjA5Njg1YTIwMjc4Mzg5YjVlNDc0PfSEjQ==: 00:21:01.500 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:01.500 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:01.500 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:01.500 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.500 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.500 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.500 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:01.500 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:01.500 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:01.500 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:21:01.500 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:01.500 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:01.500 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:01.500 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:01.500 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:01.500 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:01.500 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.500 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.500 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.500 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:01.500 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:01.500 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:01.761 00:21:01.761 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:01.761 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:01.761 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:02.022 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.022 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:02.022 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.022 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.022 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.022 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:02.022 { 00:21:02.022 "cntlid": 37, 00:21:02.022 "qid": 0, 00:21:02.022 "state": "enabled", 00:21:02.022 "thread": "nvmf_tgt_poll_group_000", 00:21:02.022 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:02.022 "listen_address": { 00:21:02.022 "trtype": "TCP", 00:21:02.022 "adrfam": "IPv4", 00:21:02.022 "traddr": "10.0.0.2", 00:21:02.022 "trsvcid": "4420" 00:21:02.022 }, 00:21:02.022 "peer_address": { 00:21:02.022 "trtype": "TCP", 00:21:02.022 "adrfam": "IPv4", 00:21:02.022 "traddr": "10.0.0.1", 00:21:02.022 "trsvcid": "33742" 00:21:02.022 }, 00:21:02.022 "auth": { 00:21:02.022 "state": "completed", 00:21:02.022 "digest": "sha256", 00:21:02.022 "dhgroup": "ffdhe6144" 00:21:02.022 } 00:21:02.022 } 00:21:02.022 ]' 00:21:02.022 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:02.022 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:02.022 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:02.022 11:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:02.022 11:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:02.283 11:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:02.283 11:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:21:02.283 11:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:02.283 11:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2JiNGE2ZTk1NGVhOTQ5ZjI4ZDRiZWI4NmNiNTQ0MGFhNTYyMWQ4YmU3NTBlZDJkyzgxOQ==: --dhchap-ctrl-secret DHHC-1:01:OWUzNjViMmQxMTdlOTI4N2UxODhmMDU2Mjc1NTg5NzLR97E+: 00:21:02.283 11:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:Y2JiNGE2ZTk1NGVhOTQ5ZjI4ZDRiZWI4NmNiNTQ0MGFhNTYyMWQ4YmU3NTBlZDJkyzgxOQ==: --dhchap-ctrl-secret DHHC-1:01:OWUzNjViMmQxMTdlOTI4N2UxODhmMDU2Mjc1NTg5NzLR97E+: 00:21:03.225 11:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:03.225 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:03.225 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:03.225 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.225 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.225 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.225 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:03.225 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:03.225 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:03.225 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:21:03.225 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:03.225 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:03.225 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:03.225 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:03.225 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:03.225 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:21:03.225 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.225 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.225 11:01:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.225 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:03.225 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:03.225 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:03.802 00:21:03.802 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:03.802 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:03.802 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:03.802 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.802 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:03.802 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.802 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.802 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.802 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:03.802 { 00:21:03.802 "cntlid": 39, 00:21:03.802 "qid": 0, 00:21:03.802 "state": "enabled", 00:21:03.802 "thread": "nvmf_tgt_poll_group_000", 00:21:03.802 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:03.802 "listen_address": { 00:21:03.802 "trtype": "TCP", 00:21:03.802 "adrfam": "IPv4", 00:21:03.802 "traddr": "10.0.0.2", 00:21:03.802 "trsvcid": "4420" 00:21:03.802 }, 00:21:03.802 "peer_address": { 00:21:03.802 "trtype": "TCP", 00:21:03.802 "adrfam": "IPv4", 00:21:03.802 "traddr": "10.0.0.1", 00:21:03.802 "trsvcid": "33774" 00:21:03.802 }, 00:21:03.802 "auth": { 00:21:03.802 "state": "completed", 00:21:03.802 "digest": "sha256", 00:21:03.802 "dhgroup": "ffdhe6144" 00:21:03.802 } 00:21:03.802 } 00:21:03.802 ]' 00:21:03.802 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:03.802 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:03.802 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:04.108 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:04.108 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:04.108 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:21:04.108 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:04.108 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:04.108 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTdhZGExZWQ1ZjRiZDNkN2NjNTIwYWJjNGZmNThkYmNmYmU3M2NkOTRjMjAxZTA5NTI3ZjE2YmMwNmUwYmVkYYD9mts=: 00:21:04.108 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MTdhZGExZWQ1ZjRiZDNkN2NjNTIwYWJjNGZmNThkYmNmYmU3M2NkOTRjMjAxZTA5NTI3ZjE2YmMwNmUwYmVkYYD9mts=: 00:21:05.101 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:05.101 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:05.101 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:05.101 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.101 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.101 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.101 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:05.101 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:05.101 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:05.101 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:05.101 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:21:05.101 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:05.101 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:05.101 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:05.101 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:05.101 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:05.101 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:05.101 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
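The cycle that keeps repeating in this trace is easier to see extracted from the xtrace noise. Below is a minimal sketch of one `connect_authenticate` iteration, reconstructed from the commands above: `hostrpc` in the script is just `rpc.py -s /var/tmp/host.sock`, target-side calls go through `rpc_cmd` on the default RPC socket, and the NQNs are the ones from this run, not fixed constants. Note that in the trace, key index 3 carries no controller key, so the script only appends `--dhchap-ctrlr-key` when `ckeys[keyid]` is non-empty.

```bash
#!/usr/bin/env bash
# Sketch only: command names and flags are taken verbatim from the trace;
# socket paths and NQNs are this run's values, not fixed constants.
set -euo pipefail

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
HOST_SOCK=/var/tmp/host.sock
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
digest=sha256 dhgroup=ffdhe6144 keyid=0

# 1. Limit the host-side initiator to the one digest/dhgroup under test.
"$RPC" -s "$HOST_SOCK" bdev_nvme_set_options \
    --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# 2. Register the host NQN on the subsystem with this iteration's key pair.
"$RPC" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

# 3. Attach a controller from the host app, forcing a DH-HMAC-CHAP handshake.
"$RPC" -s "$HOST_SOCK" bdev_nvme_attach_controller -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 \
    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

# 4. Confirm the qpair negotiated exactly the expected parameters.
qpairs=$("$RPC" nvmf_subsystem_get_qpairs "$SUBNQN")
[[ $(jq -r '.[0].auth.digest'  <<<"$qpairs") == "$digest"  ]]
[[ $(jq -r '.[0].auth.dhgroup' <<<"$qpairs") == "$dhgroup" ]]
[[ $(jq -r '.[0].auth.state'   <<<"$qpairs") == completed  ]]

# 5. Tear down so the next combination starts clean.
"$RPC" -s "$HOST_SOCK" bdev_nvme_detach_controller nvme0
```

After this RPC-path check, the trace shows the script repeating the same handshake through the kernel initiator (`nvme connect ... --dhchap-secret ... --dhchap-ctrl-secret ...`), disconnecting, and removing the host from the subsystem before moving to the next key index.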
00:21:05.101 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.101 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.101 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:05.101 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:05.101 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:05.671 00:21:05.671 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:05.671 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:05.671 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:05.931 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.931 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:05.931 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.931 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.931 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.931 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:05.931 { 00:21:05.931 "cntlid": 41, 00:21:05.931 "qid": 0, 00:21:05.931 "state": "enabled", 00:21:05.931 "thread": "nvmf_tgt_poll_group_000", 00:21:05.931 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:05.931 "listen_address": { 00:21:05.931 "trtype": "TCP", 00:21:05.931 "adrfam": "IPv4", 00:21:05.931 "traddr": "10.0.0.2", 00:21:05.931 "trsvcid": "4420" 00:21:05.931 }, 00:21:05.931 "peer_address": { 00:21:05.931 "trtype": "TCP", 00:21:05.931 "adrfam": "IPv4", 00:21:05.931 "traddr": "10.0.0.1", 00:21:05.931 "trsvcid": "33820" 00:21:05.931 }, 00:21:05.931 "auth": { 00:21:05.931 "state": "completed", 00:21:05.931 "digest": "sha256", 00:21:05.931 "dhgroup": "ffdhe8192" 00:21:05.931 } 00:21:05.931 } 00:21:05.931 ]' 00:21:05.931 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:05.931 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:05.931 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:05.931 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:05.931 11:01:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:05.931 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:05.931 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:05.931 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:06.191 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTZhMDQzNTRmNzA5NDkxMzMzODY4ZmJiOWJjMWUxY2JlZDAyY2Q1ZWQ4MTRiYTQ1ZIPZQQ==: --dhchap-ctrl-secret DHHC-1:03:NjIxM2RmMDg5NjY1ZDYyNTY0M2Y1MThlNjQwYWQ4OTk5ODE1NGFjNDAyMmE0ZmM4MjZhY2UyODdjNjhkNzBkYYSweH0=: 00:21:06.191 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YTZhMDQzNTRmNzA5NDkxMzMzODY4ZmJiOWJjMWUxY2JlZDAyY2Q1ZWQ4MTRiYTQ1ZIPZQQ==: --dhchap-ctrl-secret DHHC-1:03:NjIxM2RmMDg5NjY1ZDYyNTY0M2Y1MThlNjQwYWQ4OTk5ODE1NGFjNDAyMmE0ZmM4MjZhY2UyODdjNjhkNzBkYYSweH0=: 00:21:06.763 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:07.025 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:07.025 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:07.025 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.025 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.025 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.025 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:07.025 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:07.025 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:07.025 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:21:07.025 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:07.025 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:07.025 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:07.025 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:07.025 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:07.025 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:07.025 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.025 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.025 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.025 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:07.025 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:07.025 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:07.597 00:21:07.597 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:07.597 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:07.597 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:07.857 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.857 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:07.857 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.857 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.857 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.857 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:07.857 { 00:21:07.857 "cntlid": 43, 00:21:07.857 "qid": 0, 00:21:07.857 "state": "enabled", 00:21:07.857 "thread": "nvmf_tgt_poll_group_000", 00:21:07.857 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:07.857 "listen_address": { 00:21:07.857 "trtype": "TCP", 00:21:07.857 "adrfam": "IPv4", 00:21:07.857 "traddr": "10.0.0.2", 00:21:07.857 "trsvcid": "4420" 00:21:07.857 }, 00:21:07.857 "peer_address": { 00:21:07.857 "trtype": "TCP", 00:21:07.857 "adrfam": "IPv4", 00:21:07.857 "traddr": "10.0.0.1", 00:21:07.857 "trsvcid": "49112" 00:21:07.857 }, 00:21:07.857 "auth": { 00:21:07.857 "state": "completed", 00:21:07.857 "digest": "sha256", 00:21:07.857 "dhgroup": "ffdhe8192" 00:21:07.857 } 00:21:07.857 } 00:21:07.857 ]' 00:21:07.857 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:07.857 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:21:07.857 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:07.857 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:07.857 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:07.857 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:07.857 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:07.857 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:08.118 11:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjRiMzhiNDJmMzE5Nzk4OTE3MTViYTljZDA1ZGVhNjLzz9/s: --dhchap-ctrl-secret DHHC-1:02:NTcxNmE2YzBkNGRkYjI0OTU0MGQxYjM2NDk2MjA5Njg1YTIwMjc4Mzg5YjVlNDc0PfSEjQ==: 00:21:08.118 11:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ZjRiMzhiNDJmMzE5Nzk4OTE3MTViYTljZDA1ZGVhNjLzz9/s: --dhchap-ctrl-secret DHHC-1:02:NTcxNmE2YzBkNGRkYjI0OTU0MGQxYjM2NDk2MjA5Njg1YTIwMjc4Mzg5YjVlNDc0PfSEjQ==: 00:21:09.061 11:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:09.061 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:09.061 11:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:09.061 11:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.061 11:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.061 11:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.061 11:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:09.061 11:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:09.061 11:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:09.061 11:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:21:09.061 11:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:09.061 11:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:09.061 11:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:09.061 11:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:09.061 11:01:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:09.061 11:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:09.061 11:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.061 11:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.061 11:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.061 11:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:09.061 11:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:09.061 11:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:09.632 00:21:09.632 11:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:09.632 11:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:09.632 11:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:09.893 11:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.893 11:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:09.893 11:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.893 11:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.893 11:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.893 11:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:09.893 { 00:21:09.893 "cntlid": 45, 00:21:09.893 "qid": 0, 00:21:09.893 "state": "enabled", 00:21:09.893 "thread": "nvmf_tgt_poll_group_000", 00:21:09.893 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:09.893 "listen_address": { 00:21:09.893 "trtype": "TCP", 00:21:09.893 "adrfam": "IPv4", 00:21:09.893 "traddr": "10.0.0.2", 00:21:09.893 "trsvcid": "4420" 00:21:09.893 }, 00:21:09.893 "peer_address": { 00:21:09.893 "trtype": "TCP", 00:21:09.893 "adrfam": "IPv4", 00:21:09.893 "traddr": "10.0.0.1", 00:21:09.893 "trsvcid": "49124" 00:21:09.893 }, 00:21:09.893 "auth": { 00:21:09.893 "state": "completed", 00:21:09.893 "digest": "sha256", 00:21:09.893 "dhgroup": "ffdhe8192" 00:21:09.893 } 00:21:09.893 } 00:21:09.893 ]' 00:21:09.893 
11:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:09.893 11:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:09.893 11:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:09.893 11:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:09.893 11:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:09.893 11:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:09.893 11:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:09.893 11:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:10.153 11:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2JiNGE2ZTk1NGVhOTQ5ZjI4ZDRiZWI4NmNiNTQ0MGFhNTYyMWQ4YmU3NTBlZDJkyzgxOQ==: --dhchap-ctrl-secret DHHC-1:01:OWUzNjViMmQxMTdlOTI4N2UxODhmMDU2Mjc1NTg5NzLR97E+: 00:21:10.153 11:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:Y2JiNGE2ZTk1NGVhOTQ5ZjI4ZDRiZWI4NmNiNTQ0MGFhNTYyMWQ4YmU3NTBlZDJkyzgxOQ==: --dhchap-ctrl-secret DHHC-1:01:OWUzNjViMmQxMTdlOTI4N2UxODhmMDU2Mjc1NTg5NzLR97E+: 00:21:11.093 11:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:11.093 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:11.093 11:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:11.093 11:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.093 11:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.093 11:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.093 11:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:11.093 11:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:11.093 11:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:11.093 11:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:21:11.093 11:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:11.093 11:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:11.093 11:01:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:11.093 11:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:11.094 11:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:11.094 11:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:21:11.094 11:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.094 11:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.094 11:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.094 11:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:11.094 11:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:11.094 11:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:11.664 00:21:11.664 11:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:11.664 11:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:11.664 11:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:11.924 11:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.924 11:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:11.924 11:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.924 11:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.924 11:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.924 11:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:11.924 { 00:21:11.924 "cntlid": 47, 00:21:11.924 "qid": 0, 00:21:11.924 "state": "enabled", 00:21:11.924 "thread": "nvmf_tgt_poll_group_000", 00:21:11.924 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:11.924 "listen_address": { 00:21:11.924 "trtype": "TCP", 00:21:11.924 "adrfam": "IPv4", 00:21:11.924 "traddr": "10.0.0.2", 00:21:11.924 "trsvcid": "4420" 00:21:11.924 }, 00:21:11.924 "peer_address": { 00:21:11.924 "trtype": "TCP", 00:21:11.924 "adrfam": "IPv4", 00:21:11.924 "traddr": "10.0.0.1", 00:21:11.924 "trsvcid": "49140" 00:21:11.924 }, 00:21:11.924 "auth": { 00:21:11.924 "state": "completed", 00:21:11.924 
"digest": "sha256", 00:21:11.924 "dhgroup": "ffdhe8192" 00:21:11.924 } 00:21:11.924 } 00:21:11.924 ]' 00:21:11.924 11:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:11.924 11:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:11.924 11:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:11.924 11:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:11.924 11:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:11.924 11:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:11.924 11:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:11.924 11:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:12.184 11:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTdhZGExZWQ1ZjRiZDNkN2NjNTIwYWJjNGZmNThkYmNmYmU3M2NkOTRjMjAxZTA5NTI3ZjE2YmMwNmUwYmVkYYD9mts=: 00:21:12.184 11:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MTdhZGExZWQ1ZjRiZDNkN2NjNTIwYWJjNGZmNThkYmNmYmU3M2NkOTRjMjAxZTA5NTI3ZjE2YmMwNmUwYmVkYYD9mts=: 00:21:12.754 11:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:12.754 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:12.754 11:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:12.754 11:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.754 11:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.754 11:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.754 11:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:21:12.754 11:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:12.754 11:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:12.754 11:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:12.754 11:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:13.014 11:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:21:13.014 11:01:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:13.014 11:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:13.014 11:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:13.014 11:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:13.014 11:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:13.014 11:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:13.014 11:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.014 11:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.014 11:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.014 11:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:13.014 11:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:13.014 11:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:13.275 00:21:13.275 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:13.275 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:13.275 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:13.275 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.275 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:13.275 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.275 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.275 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.275 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:13.275 { 00:21:13.275 "cntlid": 49, 00:21:13.275 "qid": 0, 00:21:13.275 "state": "enabled", 00:21:13.275 "thread": "nvmf_tgt_poll_group_000", 00:21:13.276 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:13.276 "listen_address": { 00:21:13.276 "trtype": "TCP", 00:21:13.276 "adrfam": "IPv4", 
00:21:13.276 "traddr": "10.0.0.2", 00:21:13.276 "trsvcid": "4420" 00:21:13.276 }, 00:21:13.276 "peer_address": { 00:21:13.276 "trtype": "TCP", 00:21:13.276 "adrfam": "IPv4", 00:21:13.276 "traddr": "10.0.0.1", 00:21:13.276 "trsvcid": "49176" 00:21:13.276 }, 00:21:13.276 "auth": { 00:21:13.276 "state": "completed", 00:21:13.276 "digest": "sha384", 00:21:13.276 "dhgroup": "null" 00:21:13.276 } 00:21:13.276 } 00:21:13.276 ]' 00:21:13.276 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:13.535 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:13.535 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:13.535 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:13.535 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:13.535 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:13.535 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:13.535 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:13.795 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTZhMDQzNTRmNzA5NDkxMzMzODY4ZmJiOWJjMWUxY2JlZDAyY2Q1ZWQ4MTRiYTQ1ZIPZQQ==: --dhchap-ctrl-secret DHHC-1:03:NjIxM2RmMDg5NjY1ZDYyNTY0M2Y1MThlNjQwYWQ4OTk5ODE1NGFjNDAyMmE0ZmM4MjZhY2UyODdjNjhkNzBkYYSweH0=: 00:21:13.795 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YTZhMDQzNTRmNzA5NDkxMzMzODY4ZmJiOWJjMWUxY2JlZDAyY2Q1ZWQ4MTRiYTQ1ZIPZQQ==: --dhchap-ctrl-secret DHHC-1:03:NjIxM2RmMDg5NjY1ZDYyNTY0M2Y1MThlNjQwYWQ4OTk5ODE1NGFjNDAyMmE0ZmM4MjZhY2UyODdjNjhkNzBkYYSweH0=: 00:21:14.366 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:14.366 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:14.366 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:14.366 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.366 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.366 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.366 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:14.366 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:14.366 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:14.626 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:21:14.626 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:14.626 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:14.626 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:14.626 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:14.626 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:14.626 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:14.626 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.626 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.626 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.626 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:14.626 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:14.626 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:14.886 00:21:14.886 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:14.886 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:14.886 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:15.147 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.147 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:15.147 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.147 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.147 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.147 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:15.147 { 00:21:15.147 "cntlid": 51, 00:21:15.147 "qid": 0, 00:21:15.147 "state": "enabled", 
00:21:15.147 "thread": "nvmf_tgt_poll_group_000", 00:21:15.147 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:15.147 "listen_address": { 00:21:15.147 "trtype": "TCP", 00:21:15.147 "adrfam": "IPv4", 00:21:15.147 "traddr": "10.0.0.2", 00:21:15.147 "trsvcid": "4420" 00:21:15.147 }, 00:21:15.147 "peer_address": { 00:21:15.147 "trtype": "TCP", 00:21:15.147 "adrfam": "IPv4", 00:21:15.147 "traddr": "10.0.0.1", 00:21:15.147 "trsvcid": "49192" 00:21:15.147 }, 00:21:15.147 "auth": { 00:21:15.147 "state": "completed", 00:21:15.147 "digest": "sha384", 00:21:15.147 "dhgroup": "null" 00:21:15.147 } 00:21:15.147 } 00:21:15.147 ]' 00:21:15.147 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:15.147 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:15.147 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:15.147 11:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:15.147 11:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:15.147 11:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:15.147 11:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:15.147 11:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:15.407 11:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjRiMzhiNDJmMzE5Nzk4OTE3MTViYTljZDA1ZGVhNjLzz9/s: --dhchap-ctrl-secret DHHC-1:02:NTcxNmE2YzBkNGRkYjI0OTU0MGQxYjM2NDk2MjA5Njg1YTIwMjc4Mzg5YjVlNDc0PfSEjQ==: 00:21:15.407 11:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ZjRiMzhiNDJmMzE5Nzk4OTE3MTViYTljZDA1ZGVhNjLzz9/s: --dhchap-ctrl-secret DHHC-1:02:NTcxNmE2YzBkNGRkYjI0OTU0MGQxYjM2NDk2MjA5Njg1YTIwMjc4Mzg5YjVlNDc0PfSEjQ==: 00:21:16.349 11:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:16.349 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:16.349 11:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:16.349 11:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.349 11:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.349 11:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.349 11:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:16.349 11:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:21:16.349 11:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:16.349 11:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:21:16.349 11:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:16.349 11:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:16.349 11:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:16.349 11:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:16.350 11:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:16.350 11:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:16.350 11:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.350 11:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.350 11:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.350 11:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:16.350 11:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:16.350 11:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:16.610 00:21:16.610 11:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:16.610 11:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:16.610 11:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:16.610 11:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:16.610 11:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:16.610 11:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.610 11:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.870 11:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.870 11:01:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:16.870 { 00:21:16.870 "cntlid": 53, 00:21:16.870 "qid": 0, 00:21:16.870 "state": "enabled", 00:21:16.870 "thread": "nvmf_tgt_poll_group_000", 00:21:16.870 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:16.870 "listen_address": { 00:21:16.870 "trtype": "TCP", 00:21:16.870 "adrfam": "IPv4", 00:21:16.870 "traddr": "10.0.0.2", 00:21:16.870 "trsvcid": "4420" 00:21:16.870 }, 00:21:16.870 "peer_address": { 00:21:16.870 "trtype": "TCP", 00:21:16.870 "adrfam": "IPv4", 00:21:16.870 "traddr": "10.0.0.1", 00:21:16.870 "trsvcid": "39982" 00:21:16.870 }, 00:21:16.870 "auth": { 00:21:16.870 "state": "completed", 00:21:16.870 "digest": "sha384", 00:21:16.870 "dhgroup": "null" 00:21:16.870 } 00:21:16.870 } 00:21:16.870 ]' 00:21:16.870 11:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:16.870 11:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:16.870 11:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:16.870 11:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:16.870 11:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:16.870 11:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:16.870 11:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:16.870 11:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:17.130 11:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2JiNGE2ZTk1NGVhOTQ5ZjI4ZDRiZWI4NmNiNTQ0MGFhNTYyMWQ4YmU3NTBlZDJkyzgxOQ==: --dhchap-ctrl-secret DHHC-1:01:OWUzNjViMmQxMTdlOTI4N2UxODhmMDU2Mjc1NTg5NzLR97E+: 00:21:17.130 11:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:Y2JiNGE2ZTk1NGVhOTQ5ZjI4ZDRiZWI4NmNiNTQ0MGFhNTYyMWQ4YmU3NTBlZDJkyzgxOQ==: --dhchap-ctrl-secret DHHC-1:01:OWUzNjViMmQxMTdlOTI4N2UxODhmMDU2Mjc1NTg5NzLR97E+: 00:21:17.700 11:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:17.960 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:17.960 11:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:17.960 11:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.960 11:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.960 11:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.960 11:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:21:17.960 11:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:17.960 11:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:17.960 11:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:21:17.960 11:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:17.960 11:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:17.960 11:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:17.960 11:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:17.960 11:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:17.960 11:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:21:17.960 11:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.960 11:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.960 11:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.960 11:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:17.960 11:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:17.960 11:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:18.221 00:21:18.221 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:18.221 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:18.221 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:18.481 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.481 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:18.481 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.481 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.481 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.481 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:18.481 { 00:21:18.481 "cntlid": 55, 00:21:18.481 "qid": 0, 00:21:18.481 "state": "enabled", 00:21:18.481 "thread": "nvmf_tgt_poll_group_000", 00:21:18.481 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:18.481 "listen_address": { 00:21:18.481 "trtype": "TCP", 00:21:18.481 "adrfam": "IPv4", 00:21:18.481 "traddr": "10.0.0.2", 00:21:18.481 "trsvcid": "4420" 00:21:18.481 }, 00:21:18.481 "peer_address": { 00:21:18.481 "trtype": "TCP", 00:21:18.481 "adrfam": "IPv4", 00:21:18.481 "traddr": "10.0.0.1", 00:21:18.481 "trsvcid": "39998" 00:21:18.481 }, 00:21:18.481 "auth": { 00:21:18.481 "state": "completed", 00:21:18.481 "digest": "sha384", 00:21:18.481 "dhgroup": "null" 00:21:18.481 } 00:21:18.481 } 00:21:18.481 ]' 00:21:18.481 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:18.481 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:18.481 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:18.481 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:18.481 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:18.481 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:18.481 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:18.481 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:18.740 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTdhZGExZWQ1ZjRiZDNkN2NjNTIwYWJjNGZmNThkYmNmYmU3M2NkOTRjMjAxZTA5NTI3ZjE2YmMwNmUwYmVkYYD9mts=: 00:21:18.740 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MTdhZGExZWQ1ZjRiZDNkN2NjNTIwYWJjNGZmNThkYmNmYmU3M2NkOTRjMjAxZTA5NTI3ZjE2YmMwNmUwYmVkYYD9mts=: 00:21:19.679 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:19.679 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:19.679 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:19.679 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.679 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.679 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.679 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:19.679 11:01:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:19.679 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:19.679 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:19.679 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:21:19.679 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:19.679 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:19.679 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:19.679 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:19.679 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:19.679 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:19.679 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.679 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.679 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.679 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:19.679 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:19.679 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:19.939 00:21:19.939 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:19.939 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:19.939 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:20.199 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:20.199 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:20.199 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:21:20.199 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.199 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.199 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:20.199 { 00:21:20.199 "cntlid": 57, 00:21:20.199 "qid": 0, 00:21:20.199 "state": "enabled", 00:21:20.199 "thread": "nvmf_tgt_poll_group_000", 00:21:20.199 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:20.199 "listen_address": { 00:21:20.199 "trtype": "TCP", 00:21:20.199 "adrfam": "IPv4", 00:21:20.199 "traddr": "10.0.0.2", 00:21:20.199 "trsvcid": "4420" 00:21:20.199 }, 00:21:20.199 "peer_address": { 00:21:20.199 "trtype": "TCP", 00:21:20.199 "adrfam": "IPv4", 00:21:20.199 "traddr": "10.0.0.1", 00:21:20.199 "trsvcid": "40028" 00:21:20.199 }, 00:21:20.199 "auth": { 00:21:20.199 "state": "completed", 00:21:20.199 "digest": "sha384", 00:21:20.199 "dhgroup": "ffdhe2048" 00:21:20.199 } 00:21:20.199 } 00:21:20.199 ]' 00:21:20.199 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:20.199 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:20.199 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:20.200 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:20.200 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:20.200 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:20.200 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:20.200 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:20.460 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTZhMDQzNTRmNzA5NDkxMzMzODY4ZmJiOWJjMWUxY2JlZDAyY2Q1ZWQ4MTRiYTQ1ZIPZQQ==: --dhchap-ctrl-secret DHHC-1:03:NjIxM2RmMDg5NjY1ZDYyNTY0M2Y1MThlNjQwYWQ4OTk5ODE1NGFjNDAyMmE0ZmM4MjZhY2UyODdjNjhkNzBkYYSweH0=: 00:21:20.460 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YTZhMDQzNTRmNzA5NDkxMzMzODY4ZmJiOWJjMWUxY2JlZDAyY2Q1ZWQ4MTRiYTQ1ZIPZQQ==: --dhchap-ctrl-secret DHHC-1:03:NjIxM2RmMDg5NjY1ZDYyNTY0M2Y1MThlNjQwYWQ4OTk5ODE1NGFjNDAyMmE0ZmM4MjZhY2UyODdjNjhkNzBkYYSweH0=: 00:21:21.400 11:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:21.400 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:21.400 11:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:21.400 11:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.400 11:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.400 11:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.400 11:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:21.400 11:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:21.400 11:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:21.400 11:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:21:21.400 11:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:21.400 11:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:21.401 11:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:21.401 11:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:21.401 11:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:21.401 11:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:21.401 11:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.401 11:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.401 11:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.401 11:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:21.401 11:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:21.401 11:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:21.663 00:21:21.663 11:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:21.663 11:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:21.663 11:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:21.923 11:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:21.923 11:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:21.923 11:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.923 11:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.923 11:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.923 11:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:21.923 { 00:21:21.923 "cntlid": 59, 00:21:21.923 "qid": 0, 00:21:21.923 "state": "enabled", 00:21:21.923 "thread": "nvmf_tgt_poll_group_000", 00:21:21.923 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:21.923 "listen_address": { 00:21:21.923 "trtype": "TCP", 00:21:21.923 "adrfam": "IPv4", 00:21:21.923 "traddr": "10.0.0.2", 00:21:21.923 "trsvcid": "4420" 00:21:21.923 }, 00:21:21.923 "peer_address": { 00:21:21.923 "trtype": "TCP", 00:21:21.923 "adrfam": "IPv4", 00:21:21.923 "traddr": "10.0.0.1", 00:21:21.923 "trsvcid": "40056" 00:21:21.923 }, 00:21:21.923 "auth": { 00:21:21.923 "state": "completed", 00:21:21.923 "digest": "sha384", 00:21:21.923 "dhgroup": "ffdhe2048" 00:21:21.923 } 00:21:21.923 } 00:21:21.923 ]' 00:21:21.923 11:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:21.923 11:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:21.923 11:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:21.923 11:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:21.923 11:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:21.923 11:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:21.923 11:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:21.923 11:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:22.184 11:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjRiMzhiNDJmMzE5Nzk4OTE3MTViYTljZDA1ZGVhNjLzz9/s: --dhchap-ctrl-secret DHHC-1:02:NTcxNmE2YzBkNGRkYjI0OTU0MGQxYjM2NDk2MjA5Njg1YTIwMjc4Mzg5YjVlNDc0PfSEjQ==: 00:21:22.184 11:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ZjRiMzhiNDJmMzE5Nzk4OTE3MTViYTljZDA1ZGVhNjLzz9/s: --dhchap-ctrl-secret DHHC-1:02:NTcxNmE2YzBkNGRkYjI0OTU0MGQxYjM2NDk2MjA5Njg1YTIwMjc4Mzg5YjVlNDc0PfSEjQ==: 00:21:23.125 11:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:23.125 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:23.125 11:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:23.125 11:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.125 11:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.125 11:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.125 11:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:23.125 11:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:23.125 11:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:23.125 11:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:21:23.125 11:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:23.125 11:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:23.125 11:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:23.125 11:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:23.125 11:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:23.125 11:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:23.125 11:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.125 11:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.125 11:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.125 11:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:23.125 11:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:23.125 11:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:23.385 00:21:23.385 11:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:23.385 11:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:23.385 11:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:23.385 11:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:23.385 11:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:23.385 11:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.385 11:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.644 11:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.644 11:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:23.644 { 00:21:23.644 "cntlid": 61, 00:21:23.644 "qid": 0, 00:21:23.644 "state": "enabled", 00:21:23.644 "thread": "nvmf_tgt_poll_group_000", 00:21:23.644 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:23.644 "listen_address": { 00:21:23.644 "trtype": "TCP", 00:21:23.644 "adrfam": "IPv4", 00:21:23.645 "traddr": "10.0.0.2", 00:21:23.645 "trsvcid": "4420" 00:21:23.645 }, 00:21:23.645 "peer_address": { 00:21:23.645 "trtype": "TCP", 00:21:23.645 "adrfam": "IPv4", 00:21:23.645 "traddr": "10.0.0.1", 00:21:23.645 "trsvcid": "40084" 00:21:23.645 }, 00:21:23.645 "auth": { 00:21:23.645 "state": "completed", 00:21:23.645 "digest": "sha384", 00:21:23.645 "dhgroup": "ffdhe2048" 00:21:23.645 } 00:21:23.645 } 00:21:23.645 ]' 00:21:23.645 11:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:23.645 11:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:23.645 11:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:23.645 11:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:23.645 11:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:23.645 11:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:23.645 11:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:23.645 11:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:23.904 11:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2JiNGE2ZTk1NGVhOTQ5ZjI4ZDRiZWI4NmNiNTQ0MGFhNTYyMWQ4YmU3NTBlZDJkyzgxOQ==: --dhchap-ctrl-secret DHHC-1:01:OWUzNjViMmQxMTdlOTI4N2UxODhmMDU2Mjc1NTg5NzLR97E+: 00:21:23.904 11:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:Y2JiNGE2ZTk1NGVhOTQ5ZjI4ZDRiZWI4NmNiNTQ0MGFhNTYyMWQ4YmU3NTBlZDJkyzgxOQ==: --dhchap-ctrl-secret DHHC-1:01:OWUzNjViMmQxMTdlOTI4N2UxODhmMDU2Mjc1NTg5NzLR97E+: 00:21:24.473 11:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:24.473 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:24.473 11:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:24.473 11:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.473 11:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.473 11:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.473 11:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:24.473 11:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:24.473 11:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:24.734 11:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:21:24.734 11:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:24.734 11:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:24.734 11:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:24.734 11:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:24.734 11:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:24.734 11:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:21:24.734 11:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.734 11:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.734 11:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.734 11:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:24.734 11:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:24.734 11:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:24.994 00:21:24.994 11:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:24.994 11:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:21:24.994 11:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:25.254 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:25.254 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:25.254 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.254 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.254 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.254 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:25.254 { 00:21:25.254 "cntlid": 63, 00:21:25.254 "qid": 0, 00:21:25.254 "state": "enabled", 00:21:25.254 "thread": "nvmf_tgt_poll_group_000", 00:21:25.254 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:25.254 "listen_address": { 00:21:25.254 "trtype": "TCP", 00:21:25.254 "adrfam": "IPv4", 00:21:25.254 "traddr": "10.0.0.2", 00:21:25.254 "trsvcid": "4420" 00:21:25.254 }, 00:21:25.254 "peer_address": { 00:21:25.254 "trtype": "TCP", 00:21:25.254 "adrfam": "IPv4", 00:21:25.254 "traddr": "10.0.0.1", 00:21:25.254 "trsvcid": "40108" 00:21:25.254 }, 00:21:25.254 "auth": { 00:21:25.254 "state": "completed", 00:21:25.254 "digest": "sha384", 00:21:25.254 "dhgroup": "ffdhe2048" 00:21:25.254 } 00:21:25.254 } 00:21:25.254 ]' 00:21:25.254 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:25.254 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:25.254 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:25.254 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:25.254 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:25.254 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:25.254 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:25.254 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:25.513 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTdhZGExZWQ1ZjRiZDNkN2NjNTIwYWJjNGZmNThkYmNmYmU3M2NkOTRjMjAxZTA5NTI3ZjE2YmMwNmUwYmVkYYD9mts=: 00:21:25.513 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MTdhZGExZWQ1ZjRiZDNkN2NjNTIwYWJjNGZmNThkYmNmYmU3M2NkOTRjMjAxZTA5NTI3ZjE2YmMwNmUwYmVkYYD9mts=: 00:21:26.452 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:21:26.452 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:26.452 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:26.452 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.452 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.452 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.452 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:26.452 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:26.453 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:26.453 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:26.453 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:21:26.453 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:26.453 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:26.453 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:26.453 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:26.453 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:26.453 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:26.453 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.453 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.453 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.453 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:26.453 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:26.453 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:26.713 
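Between attach and detach, each round also verifies what was actually negotiated: the target's queue pairs are dumped and their auth fields asserted, and the handshake is then repeated through the kernel initiator using raw DHHC-1 secrets instead of named keys. The following sketch mirrors that verification for the round in progress (sha384, ffdhe3072, key0); the jq filters and RPC names are exactly those in the trace, while the rpc/hostnqn definitions repeat the paths above and the DHHC-1 strings are elided stand-ins for the base64 secrets printed in the log.

# Per-round verification as performed in this trace (assuming rpc.py
# without -s reaches the target's default RPC socket).
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396

qpairs=$("$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ "$(jq -r '.[0].auth.digest'  <<< "$qpairs")" == sha384    ]]  # negotiated hash
[[ "$(jq -r '.[0].auth.dhgroup' <<< "$qpairs")" == ffdhe3072 ]]  # negotiated DH group
[[ "$(jq -r '.[0].auth.state'   <<< "$qpairs")" == completed ]]  # handshake finished

# The same subsystem is then exercised via the kernel initiator, which
# takes the secrets directly on the command line (elided here):
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q "$hostnqn" --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 \
    --dhchap-secret 'DHHC-1:00:...' --dhchap-ctrl-secret 'DHHC-1:03:...'
nvme disconnect -n nqn.2024-03.io.spdk:cnode0

Asserting .auth.state == "completed" on the target, rather than relying on the attach alone, rules out a connection that came up without DH-HMAC-CHAP actually running.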
00:21:26.713 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:26.713 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:26.713 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:26.973 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:26.973 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:26.973 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.973 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.973 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.973 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:26.973 { 00:21:26.973 "cntlid": 65, 00:21:26.973 "qid": 0, 00:21:26.973 "state": "enabled", 00:21:26.973 "thread": "nvmf_tgt_poll_group_000", 00:21:26.973 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:26.973 "listen_address": { 00:21:26.973 "trtype": "TCP", 00:21:26.973 "adrfam": "IPv4", 00:21:26.973 "traddr": "10.0.0.2", 00:21:26.973 "trsvcid": "4420" 00:21:26.973 }, 00:21:26.973 "peer_address": { 00:21:26.973 "trtype": "TCP", 00:21:26.973 "adrfam": "IPv4", 00:21:26.973 "traddr": "10.0.0.1", 00:21:26.973 "trsvcid": "41548" 00:21:26.973 }, 00:21:26.973 "auth": { 00:21:26.973 "state": "completed", 00:21:26.973 "digest": "sha384", 00:21:26.973 "dhgroup": "ffdhe3072" 00:21:26.973 } 00:21:26.973 } 00:21:26.973 ]' 00:21:26.973 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:26.973 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:26.973 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:26.973 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:26.973 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:26.973 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:26.973 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:26.973 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:27.233 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTZhMDQzNTRmNzA5NDkxMzMzODY4ZmJiOWJjMWUxY2JlZDAyY2Q1ZWQ4MTRiYTQ1ZIPZQQ==: --dhchap-ctrl-secret DHHC-1:03:NjIxM2RmMDg5NjY1ZDYyNTY0M2Y1MThlNjQwYWQ4OTk5ODE1NGFjNDAyMmE0ZmM4MjZhY2UyODdjNjhkNzBkYYSweH0=: 00:21:27.233 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YTZhMDQzNTRmNzA5NDkxMzMzODY4ZmJiOWJjMWUxY2JlZDAyY2Q1ZWQ4MTRiYTQ1ZIPZQQ==: --dhchap-ctrl-secret DHHC-1:03:NjIxM2RmMDg5NjY1ZDYyNTY0M2Y1MThlNjQwYWQ4OTk5ODE1NGFjNDAyMmE0ZmM4MjZhY2UyODdjNjhkNzBkYYSweH0=: 00:21:28.174 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:28.174 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:28.174 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:28.174 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.174 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.174 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.174 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:28.174 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:28.174 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:28.174 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:21:28.174 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:28.174 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:28.174 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:28.174 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:28.174 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:28.174 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:28.174 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.174 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.174 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.174 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:28.175 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:28.175 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:28.435 00:21:28.435 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:28.435 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:28.435 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:28.695 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:28.695 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:28.695 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.695 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.695 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.695 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:28.695 { 00:21:28.695 "cntlid": 67, 00:21:28.695 "qid": 0, 00:21:28.695 "state": "enabled", 00:21:28.695 "thread": "nvmf_tgt_poll_group_000", 00:21:28.695 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:28.695 "listen_address": { 00:21:28.695 "trtype": "TCP", 00:21:28.695 "adrfam": "IPv4", 00:21:28.695 "traddr": "10.0.0.2", 00:21:28.695 "trsvcid": "4420" 00:21:28.695 }, 00:21:28.695 "peer_address": { 00:21:28.695 "trtype": "TCP", 00:21:28.695 "adrfam": "IPv4", 00:21:28.695 "traddr": "10.0.0.1", 00:21:28.695 "trsvcid": "41570" 00:21:28.695 }, 00:21:28.695 "auth": { 00:21:28.695 "state": "completed", 00:21:28.695 "digest": "sha384", 00:21:28.695 "dhgroup": "ffdhe3072" 00:21:28.695 } 00:21:28.695 } 00:21:28.695 ]' 00:21:28.695 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:28.695 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:28.695 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:28.695 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:28.695 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:28.695 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:28.695 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:28.695 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:28.957 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjRiMzhiNDJmMzE5Nzk4OTE3MTViYTljZDA1ZGVhNjLzz9/s: --dhchap-ctrl-secret 
DHHC-1:02:NTcxNmE2YzBkNGRkYjI0OTU0MGQxYjM2NDk2MjA5Njg1YTIwMjc4Mzg5YjVlNDc0PfSEjQ==: 00:21:28.957 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ZjRiMzhiNDJmMzE5Nzk4OTE3MTViYTljZDA1ZGVhNjLzz9/s: --dhchap-ctrl-secret DHHC-1:02:NTcxNmE2YzBkNGRkYjI0OTU0MGQxYjM2NDk2MjA5Njg1YTIwMjc4Mzg5YjVlNDc0PfSEjQ==: 00:21:29.528 11:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:29.789 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:29.789 11:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:29.789 11:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.789 11:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.789 11:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.789 11:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:29.789 11:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:29.789 11:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:29.789 11:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:21:29.789 11:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:29.789 11:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:29.789 11:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:29.789 11:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:29.789 11:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:29.789 11:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:29.789 11:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.789 11:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.789 11:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.789 11:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:29.789 11:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:29.789 11:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:30.049 00:21:30.049 11:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:30.049 11:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:30.049 11:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:30.310 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:30.310 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:30.310 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.310 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.310 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.310 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:30.310 { 00:21:30.310 "cntlid": 69, 00:21:30.310 "qid": 0, 00:21:30.310 "state": "enabled", 00:21:30.310 "thread": "nvmf_tgt_poll_group_000", 00:21:30.310 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:30.310 "listen_address": { 00:21:30.310 "trtype": "TCP", 00:21:30.310 "adrfam": "IPv4", 00:21:30.310 "traddr": "10.0.0.2", 00:21:30.310 "trsvcid": "4420" 00:21:30.310 }, 00:21:30.310 "peer_address": { 00:21:30.310 "trtype": "TCP", 00:21:30.310 "adrfam": "IPv4", 00:21:30.310 "traddr": "10.0.0.1", 00:21:30.310 "trsvcid": "41598" 00:21:30.310 }, 00:21:30.310 "auth": { 00:21:30.310 "state": "completed", 00:21:30.310 "digest": "sha384", 00:21:30.310 "dhgroup": "ffdhe3072" 00:21:30.310 } 00:21:30.310 } 00:21:30.310 ]' 00:21:30.310 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:30.310 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:30.310 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:30.310 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:30.310 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:30.570 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:30.570 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:30.570 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:21:30.570 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2JiNGE2ZTk1NGVhOTQ5ZjI4ZDRiZWI4NmNiNTQ0MGFhNTYyMWQ4YmU3NTBlZDJkyzgxOQ==: --dhchap-ctrl-secret DHHC-1:01:OWUzNjViMmQxMTdlOTI4N2UxODhmMDU2Mjc1NTg5NzLR97E+: 00:21:30.570 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:Y2JiNGE2ZTk1NGVhOTQ5ZjI4ZDRiZWI4NmNiNTQ0MGFhNTYyMWQ4YmU3NTBlZDJkyzgxOQ==: --dhchap-ctrl-secret DHHC-1:01:OWUzNjViMmQxMTdlOTI4N2UxODhmMDU2Mjc1NTg5NzLR97E+: 00:21:31.509 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:31.509 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:31.509 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:31.509 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.509 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.509 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.509 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:31.509 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:31.509 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:31.509 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:21:31.509 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:31.509 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:31.509 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:31.509 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:31.509 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:31.509 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:21:31.509 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.509 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.509 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.509 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
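
The sequence running here is one pass of the suite's connect_authenticate helper: allow the host NQN on the subsystem with the DHCHAP key under test, attach a controller from the host-side SPDK application (where the DH-HMAC-CHAP handshake actually runs), verify what was negotiated, then replay the handshake with the kernel initiator before tearing everything down. A minimal sketch of the first half of that flow, reconstructed from the target/auth.sh xtrace lines in this log rather than quoted from the script itself (hostrpc and rpc_cmd are the suite's rpc.py wrappers for the host-side and target-side sockets, $hostnqn is the uuid-based NQN used throughout, and the key/ckey names are assumed to have been registered in keyrings earlier in the run):

    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        # Allow the host on the subsystem. The controller key is passed only
        # when a ckey exists for this index (the ${ckeys[$3]:+...} expansion
        # in the trace; note key3 above is added without a ckey3).
        rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
            --dhchap-key "key$keyid" ${ckeys[keyid]:+--dhchap-ctrlr-key ckey$keyid}
        # Attach from the host-side SPDK app against the listener on
        # 10.0.0.2:4420; success means the DHCHAP handshake completed.
        hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
            -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
            --dhchap-key "key$keyid" ${ckeys[keyid]:+--dhchap-ctrlr-key ckey$keyid}
        # The qpair checks, detach, nvme-cli reconnect and host removal that
        # follow in the trace complete the cycle.
    }
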
00:21:31.509 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:31.510 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:31.769 00:21:31.770 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:31.770 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:31.770 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:32.030 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:32.030 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:32.030 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.030 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.030 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.030 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:32.030 { 00:21:32.030 "cntlid": 71, 00:21:32.030 "qid": 0, 00:21:32.030 "state": "enabled", 00:21:32.030 "thread": "nvmf_tgt_poll_group_000", 00:21:32.030 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:32.030 "listen_address": { 00:21:32.030 "trtype": "TCP", 00:21:32.030 "adrfam": "IPv4", 00:21:32.030 "traddr": "10.0.0.2", 00:21:32.030 "trsvcid": "4420" 00:21:32.030 }, 00:21:32.030 "peer_address": { 00:21:32.030 "trtype": "TCP", 00:21:32.030 "adrfam": "IPv4", 00:21:32.030 "traddr": "10.0.0.1", 00:21:32.030 "trsvcid": "41630" 00:21:32.030 }, 00:21:32.030 "auth": { 00:21:32.030 "state": "completed", 00:21:32.030 "digest": "sha384", 00:21:32.030 "dhgroup": "ffdhe3072" 00:21:32.030 } 00:21:32.030 } 00:21:32.030 ]' 00:21:32.030 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:32.030 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:32.030 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:32.030 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:32.030 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:32.030 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:32.030 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:32.030 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:32.291 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTdhZGExZWQ1ZjRiZDNkN2NjNTIwYWJjNGZmNThkYmNmYmU3M2NkOTRjMjAxZTA5NTI3ZjE2YmMwNmUwYmVkYYD9mts=: 00:21:32.291 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MTdhZGExZWQ1ZjRiZDNkN2NjNTIwYWJjNGZmNThkYmNmYmU3M2NkOTRjMjAxZTA5NTI3ZjE2YmMwNmUwYmVkYYD9mts=: 00:21:33.231 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:33.231 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:33.231 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:33.231 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.231 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.231 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.231 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:33.231 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:33.231 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:33.231 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:33.231 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:21:33.231 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:33.231 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:33.231 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:33.231 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:33.231 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:33.231 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:33.231 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.231 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.231 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
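
Here the trace leaves the ffdhe3072 block: the target/auth.sh@119 and @120 xtrace lines just above show the outer iteration advancing to ffdhe4096 with the key index starting over at 0. The driver implied by those line numbers is a plain nested loop over DH groups and key indices, sketched below with the same names the xtrace prints (the array contents are an assumption, not quoted from the script):

    for dhgroup in "${dhgroups[@]}"; do      # e.g. ffdhe3072 ffdhe4096 ffdhe6144
        for keyid in "${!keys[@]}"; do       # key indices 0..3
            # Pin the host side to one digest/DH-group pair so a successful
            # attach can only mean that exact combination was negotiated.
            hostrpc bdev_nvme_set_options --dhchap-digests sha384 \
                --dhchap-dhgroups "$dhgroup"
            connect_authenticate sha384 "$dhgroup" "$keyid"
        done
    done
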
00:21:33.231 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:33.231 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:33.231 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:33.491 00:21:33.491 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:33.491 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:33.491 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:33.750 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:33.750 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:33.750 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.750 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.750 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.750 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:33.750 { 00:21:33.750 "cntlid": 73, 00:21:33.750 "qid": 0, 00:21:33.750 "state": "enabled", 00:21:33.750 "thread": "nvmf_tgt_poll_group_000", 00:21:33.750 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:33.750 "listen_address": { 00:21:33.750 "trtype": "TCP", 00:21:33.750 "adrfam": "IPv4", 00:21:33.750 "traddr": "10.0.0.2", 00:21:33.750 "trsvcid": "4420" 00:21:33.750 }, 00:21:33.750 "peer_address": { 00:21:33.750 "trtype": "TCP", 00:21:33.750 "adrfam": "IPv4", 00:21:33.750 "traddr": "10.0.0.1", 00:21:33.750 "trsvcid": "41656" 00:21:33.750 }, 00:21:33.750 "auth": { 00:21:33.750 "state": "completed", 00:21:33.750 "digest": "sha384", 00:21:33.750 "dhgroup": "ffdhe4096" 00:21:33.750 } 00:21:33.750 } 00:21:33.750 ]' 00:21:33.750 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:33.750 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:33.750 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:33.750 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:33.750 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:33.750 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:33.750 
11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:33.751 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:34.010 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTZhMDQzNTRmNzA5NDkxMzMzODY4ZmJiOWJjMWUxY2JlZDAyY2Q1ZWQ4MTRiYTQ1ZIPZQQ==: --dhchap-ctrl-secret DHHC-1:03:NjIxM2RmMDg5NjY1ZDYyNTY0M2Y1MThlNjQwYWQ4OTk5ODE1NGFjNDAyMmE0ZmM4MjZhY2UyODdjNjhkNzBkYYSweH0=: 00:21:34.010 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YTZhMDQzNTRmNzA5NDkxMzMzODY4ZmJiOWJjMWUxY2JlZDAyY2Q1ZWQ4MTRiYTQ1ZIPZQQ==: --dhchap-ctrl-secret DHHC-1:03:NjIxM2RmMDg5NjY1ZDYyNTY0M2Y1MThlNjQwYWQ4OTk5ODE1NGFjNDAyMmE0ZmM4MjZhY2UyODdjNjhkNzBkYYSweH0=: 00:21:34.949 11:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:34.949 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:34.949 11:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:34.949 11:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.949 11:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.949 11:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.949 11:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:34.949 11:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:34.949 11:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:34.949 11:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:21:34.949 11:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:34.949 11:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:34.949 11:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:34.949 11:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:34.949 11:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:34.949 11:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:34.949 11:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.949 11:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.949 11:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.949 11:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:34.949 11:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:34.949 11:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:35.209 00:21:35.209 11:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:35.209 11:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:35.209 11:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:35.469 11:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:35.469 11:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:35.469 11:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.469 11:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.469 11:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.469 11:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:35.469 { 00:21:35.469 "cntlid": 75, 00:21:35.469 "qid": 0, 00:21:35.469 "state": "enabled", 00:21:35.469 "thread": "nvmf_tgt_poll_group_000", 00:21:35.469 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:35.469 "listen_address": { 00:21:35.469 "trtype": "TCP", 00:21:35.469 "adrfam": "IPv4", 00:21:35.469 "traddr": "10.0.0.2", 00:21:35.469 "trsvcid": "4420" 00:21:35.469 }, 00:21:35.469 "peer_address": { 00:21:35.469 "trtype": "TCP", 00:21:35.469 "adrfam": "IPv4", 00:21:35.469 "traddr": "10.0.0.1", 00:21:35.469 "trsvcid": "41696" 00:21:35.469 }, 00:21:35.469 "auth": { 00:21:35.470 "state": "completed", 00:21:35.470 "digest": "sha384", 00:21:35.470 "dhgroup": "ffdhe4096" 00:21:35.470 } 00:21:35.470 } 00:21:35.470 ]' 00:21:35.470 11:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:35.470 11:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:35.470 11:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:35.470 11:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:21:35.470 11:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:35.470 11:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:35.470 11:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:35.470 11:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:35.730 11:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjRiMzhiNDJmMzE5Nzk4OTE3MTViYTljZDA1ZGVhNjLzz9/s: --dhchap-ctrl-secret DHHC-1:02:NTcxNmE2YzBkNGRkYjI0OTU0MGQxYjM2NDk2MjA5Njg1YTIwMjc4Mzg5YjVlNDc0PfSEjQ==: 00:21:35.730 11:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ZjRiMzhiNDJmMzE5Nzk4OTE3MTViYTljZDA1ZGVhNjLzz9/s: --dhchap-ctrl-secret DHHC-1:02:NTcxNmE2YzBkNGRkYjI0OTU0MGQxYjM2NDk2MjA5Njg1YTIwMjc4Mzg5YjVlNDc0PfSEjQ==: 00:21:36.669 11:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:36.669 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:36.669 11:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:36.669 11:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.669 11:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.669 11:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.669 11:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:36.669 11:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:36.669 11:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:36.669 11:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:21:36.669 11:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:36.669 11:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:36.669 11:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:36.669 11:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:36.669 11:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:36.669 11:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:36.669 11:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.669 11:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.669 11:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.669 11:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:36.669 11:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:36.669 11:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:36.930 00:21:36.930 11:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:36.930 11:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:36.930 11:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:37.190 11:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:37.190 11:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:37.190 11:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.190 11:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.190 11:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.190 11:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:37.190 { 00:21:37.190 "cntlid": 77, 00:21:37.190 "qid": 0, 00:21:37.190 "state": "enabled", 00:21:37.190 "thread": "nvmf_tgt_poll_group_000", 00:21:37.190 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:37.190 "listen_address": { 00:21:37.190 "trtype": "TCP", 00:21:37.190 "adrfam": "IPv4", 00:21:37.190 "traddr": "10.0.0.2", 00:21:37.190 "trsvcid": "4420" 00:21:37.190 }, 00:21:37.190 "peer_address": { 00:21:37.190 "trtype": "TCP", 00:21:37.190 "adrfam": "IPv4", 00:21:37.190 "traddr": "10.0.0.1", 00:21:37.190 "trsvcid": "45754" 00:21:37.190 }, 00:21:37.190 "auth": { 00:21:37.190 "state": "completed", 00:21:37.190 "digest": "sha384", 00:21:37.190 "dhgroup": "ffdhe4096" 00:21:37.190 } 00:21:37.190 } 00:21:37.190 ]' 00:21:37.190 11:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:37.190 11:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
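
The pass/fail signal for each iteration is the check block running here, which continues just below with the dhgroup and state fields: nvmf_subsystem_get_qpairs is queried on the target, and the first qpair's auth object must report the digest and DH group pinned for this pass, with state "completed". (The backslash-escaped right-hand sides such as \s\h\a\3\8\4 are only bash xtrace rendering of a quoted match pattern, not corruption.) Reduced to its essentials, the check for this ffdhe4096 pass amounts to:

    # rpc_cmd talks to the target-side RPC socket; qid 0 is the admin queue.
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
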
00:21:37.190 11:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:37.190 11:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:37.190 11:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:37.451 11:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:37.451 11:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:37.451 11:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:37.451 11:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2JiNGE2ZTk1NGVhOTQ5ZjI4ZDRiZWI4NmNiNTQ0MGFhNTYyMWQ4YmU3NTBlZDJkyzgxOQ==: --dhchap-ctrl-secret DHHC-1:01:OWUzNjViMmQxMTdlOTI4N2UxODhmMDU2Mjc1NTg5NzLR97E+: 00:21:37.451 11:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:Y2JiNGE2ZTk1NGVhOTQ5ZjI4ZDRiZWI4NmNiNTQ0MGFhNTYyMWQ4YmU3NTBlZDJkyzgxOQ==: --dhchap-ctrl-secret DHHC-1:01:OWUzNjViMmQxMTdlOTI4N2UxODhmMDU2Mjc1NTg5NzLR97E+: 00:21:38.393 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:38.393 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:38.393 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:38.393 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.393 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.393 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.393 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:38.393 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:38.393 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:38.393 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:21:38.393 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:38.393 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:38.393 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:38.393 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:38.393 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:38.393 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:21:38.393 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.393 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.393 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.393 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:38.393 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:38.393 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:38.654 00:21:38.654 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:38.654 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:38.654 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:38.914 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:38.914 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:38.914 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.914 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.914 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.914 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:38.914 { 00:21:38.914 "cntlid": 79, 00:21:38.914 "qid": 0, 00:21:38.914 "state": "enabled", 00:21:38.914 "thread": "nvmf_tgt_poll_group_000", 00:21:38.914 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:38.914 "listen_address": { 00:21:38.914 "trtype": "TCP", 00:21:38.914 "adrfam": "IPv4", 00:21:38.914 "traddr": "10.0.0.2", 00:21:38.914 "trsvcid": "4420" 00:21:38.914 }, 00:21:38.914 "peer_address": { 00:21:38.914 "trtype": "TCP", 00:21:38.914 "adrfam": "IPv4", 00:21:38.914 "traddr": "10.0.0.1", 00:21:38.914 "trsvcid": "45766" 00:21:38.914 }, 00:21:38.914 "auth": { 00:21:38.914 "state": "completed", 00:21:38.914 "digest": "sha384", 00:21:38.914 "dhgroup": "ffdhe4096" 00:21:38.914 } 00:21:38.914 } 00:21:38.914 ]' 00:21:38.914 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:38.914 11:01:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:38.914 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:38.914 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:38.914 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:39.175 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:39.175 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:39.175 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:39.175 11:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTdhZGExZWQ1ZjRiZDNkN2NjNTIwYWJjNGZmNThkYmNmYmU3M2NkOTRjMjAxZTA5NTI3ZjE2YmMwNmUwYmVkYYD9mts=: 00:21:39.175 11:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MTdhZGExZWQ1ZjRiZDNkN2NjNTIwYWJjNGZmNThkYmNmYmU3M2NkOTRjMjAxZTA5NTI3ZjE2YmMwNmUwYmVkYYD9mts=: 00:21:40.115 11:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:40.115 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:40.115 11:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:40.115 11:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.115 11:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.115 11:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.115 11:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:40.115 11:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:40.115 11:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:40.115 11:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:40.115 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:21:40.115 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:40.115 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:40.115 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:40.115 11:02:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:40.115 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:40.115 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:40.115 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.115 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.115 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.115 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:40.115 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:40.115 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:40.687 00:21:40.687 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:40.687 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:40.687 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:40.687 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:40.687 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:40.687 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.687 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.687 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.687 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:40.687 { 00:21:40.687 "cntlid": 81, 00:21:40.687 "qid": 0, 00:21:40.687 "state": "enabled", 00:21:40.687 "thread": "nvmf_tgt_poll_group_000", 00:21:40.687 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:40.687 "listen_address": { 00:21:40.687 "trtype": "TCP", 00:21:40.687 "adrfam": "IPv4", 00:21:40.687 "traddr": "10.0.0.2", 00:21:40.687 "trsvcid": "4420" 00:21:40.687 }, 00:21:40.687 "peer_address": { 00:21:40.687 "trtype": "TCP", 00:21:40.687 "adrfam": "IPv4", 00:21:40.687 "traddr": "10.0.0.1", 00:21:40.687 "trsvcid": "45786" 00:21:40.687 }, 00:21:40.687 "auth": { 00:21:40.687 "state": "completed", 00:21:40.687 "digest": 
"sha384", 00:21:40.687 "dhgroup": "ffdhe6144" 00:21:40.687 } 00:21:40.687 } 00:21:40.687 ]' 00:21:40.687 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:40.687 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:40.687 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:40.947 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:40.947 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:40.947 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:40.947 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:40.947 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:40.947 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTZhMDQzNTRmNzA5NDkxMzMzODY4ZmJiOWJjMWUxY2JlZDAyY2Q1ZWQ4MTRiYTQ1ZIPZQQ==: --dhchap-ctrl-secret DHHC-1:03:NjIxM2RmMDg5NjY1ZDYyNTY0M2Y1MThlNjQwYWQ4OTk5ODE1NGFjNDAyMmE0ZmM4MjZhY2UyODdjNjhkNzBkYYSweH0=: 00:21:40.947 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YTZhMDQzNTRmNzA5NDkxMzMzODY4ZmJiOWJjMWUxY2JlZDAyY2Q1ZWQ4MTRiYTQ1ZIPZQQ==: --dhchap-ctrl-secret DHHC-1:03:NjIxM2RmMDg5NjY1ZDYyNTY0M2Y1MThlNjQwYWQ4OTk5ODE1NGFjNDAyMmE0ZmM4MjZhY2UyODdjNjhkNzBkYYSweH0=: 00:21:41.888 11:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:41.888 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:41.888 11:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:41.888 11:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.888 11:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.888 11:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.888 11:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:41.888 11:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:41.888 11:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:41.888 11:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:21:41.888 11:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:41.888 11:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:41.888 11:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:41.888 11:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:41.888 11:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:41.888 11:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:41.888 11:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.888 11:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.888 11:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.888 11:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:41.888 11:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:41.888 11:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:42.458 00:21:42.458 11:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:42.458 11:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:42.458 11:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:42.458 11:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:42.458 11:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:42.458 11:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.458 11:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.458 11:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.458 11:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:42.458 { 00:21:42.458 "cntlid": 83, 00:21:42.458 "qid": 0, 00:21:42.458 "state": "enabled", 00:21:42.458 "thread": "nvmf_tgt_poll_group_000", 00:21:42.458 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:42.458 "listen_address": { 00:21:42.458 "trtype": "TCP", 00:21:42.458 "adrfam": "IPv4", 00:21:42.458 "traddr": "10.0.0.2", 00:21:42.458 
"trsvcid": "4420" 00:21:42.458 }, 00:21:42.458 "peer_address": { 00:21:42.458 "trtype": "TCP", 00:21:42.458 "adrfam": "IPv4", 00:21:42.458 "traddr": "10.0.0.1", 00:21:42.458 "trsvcid": "45824" 00:21:42.458 }, 00:21:42.458 "auth": { 00:21:42.458 "state": "completed", 00:21:42.458 "digest": "sha384", 00:21:42.458 "dhgroup": "ffdhe6144" 00:21:42.458 } 00:21:42.458 } 00:21:42.458 ]' 00:21:42.458 11:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:42.718 11:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:42.718 11:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:42.718 11:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:42.718 11:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:42.718 11:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:42.718 11:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:42.718 11:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:42.718 11:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjRiMzhiNDJmMzE5Nzk4OTE3MTViYTljZDA1ZGVhNjLzz9/s: --dhchap-ctrl-secret DHHC-1:02:NTcxNmE2YzBkNGRkYjI0OTU0MGQxYjM2NDk2MjA5Njg1YTIwMjc4Mzg5YjVlNDc0PfSEjQ==: 00:21:42.718 11:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ZjRiMzhiNDJmMzE5Nzk4OTE3MTViYTljZDA1ZGVhNjLzz9/s: --dhchap-ctrl-secret DHHC-1:02:NTcxNmE2YzBkNGRkYjI0OTU0MGQxYjM2NDk2MjA5Njg1YTIwMjc4Mzg5YjVlNDc0PfSEjQ==: 00:21:43.658 11:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:43.658 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:43.658 11:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:43.658 11:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.658 11:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.658 11:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.658 11:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:43.658 11:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:43.658 11:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:43.934 
11:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:21:43.934 11:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:43.934 11:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:43.934 11:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:43.934 11:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:43.934 11:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:43.934 11:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:43.934 11:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.934 11:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.934 11:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.934 11:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:43.934 11:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:43.934 11:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:44.283 00:21:44.283 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:44.283 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:44.283 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:44.283 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:44.283 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:44.283 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.283 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.283 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.283 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:44.283 { 00:21:44.283 "cntlid": 85, 00:21:44.283 "qid": 0, 00:21:44.283 "state": "enabled", 00:21:44.283 "thread": "nvmf_tgt_poll_group_000", 00:21:44.283 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:44.283 "listen_address": { 00:21:44.283 "trtype": "TCP", 00:21:44.283 "adrfam": "IPv4", 00:21:44.283 "traddr": "10.0.0.2", 00:21:44.283 "trsvcid": "4420" 00:21:44.283 }, 00:21:44.283 "peer_address": { 00:21:44.283 "trtype": "TCP", 00:21:44.283 "adrfam": "IPv4", 00:21:44.283 "traddr": "10.0.0.1", 00:21:44.283 "trsvcid": "45866" 00:21:44.283 }, 00:21:44.283 "auth": { 00:21:44.283 "state": "completed", 00:21:44.283 "digest": "sha384", 00:21:44.283 "dhgroup": "ffdhe6144" 00:21:44.283 } 00:21:44.283 } 00:21:44.283 ]' 00:21:44.283 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:44.603 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:44.603 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:44.603 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:44.603 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:44.603 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:44.603 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:44.603 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:44.603 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2JiNGE2ZTk1NGVhOTQ5ZjI4ZDRiZWI4NmNiNTQ0MGFhNTYyMWQ4YmU3NTBlZDJkyzgxOQ==: --dhchap-ctrl-secret DHHC-1:01:OWUzNjViMmQxMTdlOTI4N2UxODhmMDU2Mjc1NTg5NzLR97E+: 00:21:44.603 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:Y2JiNGE2ZTk1NGVhOTQ5ZjI4ZDRiZWI4NmNiNTQ0MGFhNTYyMWQ4YmU3NTBlZDJkyzgxOQ==: --dhchap-ctrl-secret DHHC-1:01:OWUzNjViMmQxMTdlOTI4N2UxODhmMDU2Mjc1NTg5NzLR97E+: 00:21:45.559 11:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:45.559 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:45.559 11:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:45.559 11:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.559 11:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.559 11:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.559 11:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:45.559 11:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:45.559 11:02:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:45.559 11:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:21:45.559 11:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:45.559 11:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:45.559 11:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:45.559 11:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:45.559 11:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:45.559 11:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:21:45.559 11:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.559 11:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.559 11:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.559 11:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:45.559 11:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:45.559 11:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:46.129 00:21:46.129 11:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:46.129 11:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:46.129 11:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:46.129 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:46.129 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:46.129 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.129 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.129 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.129 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:46.129 { 00:21:46.130 "cntlid": 87, 
00:21:46.130 "qid": 0, 00:21:46.130 "state": "enabled", 00:21:46.130 "thread": "nvmf_tgt_poll_group_000", 00:21:46.130 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:46.130 "listen_address": { 00:21:46.130 "trtype": "TCP", 00:21:46.130 "adrfam": "IPv4", 00:21:46.130 "traddr": "10.0.0.2", 00:21:46.130 "trsvcid": "4420" 00:21:46.130 }, 00:21:46.130 "peer_address": { 00:21:46.130 "trtype": "TCP", 00:21:46.130 "adrfam": "IPv4", 00:21:46.130 "traddr": "10.0.0.1", 00:21:46.130 "trsvcid": "45892" 00:21:46.130 }, 00:21:46.130 "auth": { 00:21:46.130 "state": "completed", 00:21:46.130 "digest": "sha384", 00:21:46.130 "dhgroup": "ffdhe6144" 00:21:46.130 } 00:21:46.130 } 00:21:46.130 ]' 00:21:46.130 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:46.130 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:46.130 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:46.389 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:46.389 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:46.389 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:46.389 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:46.389 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:46.389 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTdhZGExZWQ1ZjRiZDNkN2NjNTIwYWJjNGZmNThkYmNmYmU3M2NkOTRjMjAxZTA5NTI3ZjE2YmMwNmUwYmVkYYD9mts=: 00:21:46.389 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MTdhZGExZWQ1ZjRiZDNkN2NjNTIwYWJjNGZmNThkYmNmYmU3M2NkOTRjMjAxZTA5NTI3ZjE2YmMwNmUwYmVkYYD9mts=: 00:21:47.328 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:47.328 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:47.328 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:47.328 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.328 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.328 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.328 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:47.328 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:47.328 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:47.328 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:47.328 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:21:47.328 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:47.328 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:47.328 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:47.328 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:47.328 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:47.328 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:47.328 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.328 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.328 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.328 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:47.328 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:47.328 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:47.899 00:21:47.899 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:47.899 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:47.899 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:48.161 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:48.161 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:48.161 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.161 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.161 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.161 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:48.161 { 00:21:48.161 "cntlid": 89, 00:21:48.161 "qid": 0, 00:21:48.161 "state": "enabled", 00:21:48.161 "thread": "nvmf_tgt_poll_group_000", 00:21:48.161 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:48.161 "listen_address": { 00:21:48.161 "trtype": "TCP", 00:21:48.161 "adrfam": "IPv4", 00:21:48.161 "traddr": "10.0.0.2", 00:21:48.161 "trsvcid": "4420" 00:21:48.161 }, 00:21:48.161 "peer_address": { 00:21:48.161 "trtype": "TCP", 00:21:48.161 "adrfam": "IPv4", 00:21:48.161 "traddr": "10.0.0.1", 00:21:48.161 "trsvcid": "58664" 00:21:48.161 }, 00:21:48.161 "auth": { 00:21:48.161 "state": "completed", 00:21:48.161 "digest": "sha384", 00:21:48.161 "dhgroup": "ffdhe8192" 00:21:48.161 } 00:21:48.161 } 00:21:48.161 ]' 00:21:48.161 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:48.161 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:48.161 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:48.161 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:48.161 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:48.421 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:48.421 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:48.421 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:48.421 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTZhMDQzNTRmNzA5NDkxMzMzODY4ZmJiOWJjMWUxY2JlZDAyY2Q1ZWQ4MTRiYTQ1ZIPZQQ==: --dhchap-ctrl-secret DHHC-1:03:NjIxM2RmMDg5NjY1ZDYyNTY0M2Y1MThlNjQwYWQ4OTk5ODE1NGFjNDAyMmE0ZmM4MjZhY2UyODdjNjhkNzBkYYSweH0=: 00:21:48.421 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YTZhMDQzNTRmNzA5NDkxMzMzODY4ZmJiOWJjMWUxY2JlZDAyY2Q1ZWQ4MTRiYTQ1ZIPZQQ==: --dhchap-ctrl-secret DHHC-1:03:NjIxM2RmMDg5NjY1ZDYyNTY0M2Y1MThlNjQwYWQ4OTk5ODE1NGFjNDAyMmE0ZmM4MjZhY2UyODdjNjhkNzBkYYSweH0=: 00:21:49.359 11:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:49.359 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:49.359 11:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:49.359 11:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.359 11:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.359 11:02:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.359 11:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:49.359 11:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:49.359 11:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:49.359 11:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:21:49.359 11:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:49.359 11:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:49.359 11:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:49.359 11:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:49.359 11:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:49.359 11:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:49.359 11:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.359 11:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.359 11:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.359 11:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:49.359 11:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:49.359 11:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:49.929 00:21:49.929 11:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:49.929 11:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:49.929 11:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:50.188 11:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:50.188 11:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:21:50.188 11:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.188 11:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.188 11:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.188 11:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:50.188 { 00:21:50.188 "cntlid": 91, 00:21:50.188 "qid": 0, 00:21:50.188 "state": "enabled", 00:21:50.188 "thread": "nvmf_tgt_poll_group_000", 00:21:50.188 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:50.188 "listen_address": { 00:21:50.188 "trtype": "TCP", 00:21:50.188 "adrfam": "IPv4", 00:21:50.188 "traddr": "10.0.0.2", 00:21:50.188 "trsvcid": "4420" 00:21:50.188 }, 00:21:50.188 "peer_address": { 00:21:50.188 "trtype": "TCP", 00:21:50.188 "adrfam": "IPv4", 00:21:50.188 "traddr": "10.0.0.1", 00:21:50.188 "trsvcid": "58690" 00:21:50.188 }, 00:21:50.188 "auth": { 00:21:50.188 "state": "completed", 00:21:50.188 "digest": "sha384", 00:21:50.188 "dhgroup": "ffdhe8192" 00:21:50.188 } 00:21:50.188 } 00:21:50.188 ]' 00:21:50.188 11:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:50.188 11:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:50.188 11:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:50.188 11:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:50.188 11:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:50.188 11:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:50.188 11:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:50.188 11:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:50.448 11:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjRiMzhiNDJmMzE5Nzk4OTE3MTViYTljZDA1ZGVhNjLzz9/s: --dhchap-ctrl-secret DHHC-1:02:NTcxNmE2YzBkNGRkYjI0OTU0MGQxYjM2NDk2MjA5Njg1YTIwMjc4Mzg5YjVlNDc0PfSEjQ==: 00:21:50.448 11:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ZjRiMzhiNDJmMzE5Nzk4OTE3MTViYTljZDA1ZGVhNjLzz9/s: --dhchap-ctrl-secret DHHC-1:02:NTcxNmE2YzBkNGRkYjI0OTU0MGQxYjM2NDk2MjA5Njg1YTIwMjc4Mzg5YjVlNDc0PfSEjQ==: 00:21:51.390 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:51.390 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:51.390 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:51.390 11:02:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.390 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.390 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.390 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:51.390 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:51.390 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:51.390 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:21:51.390 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:51.390 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:51.390 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:51.390 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:51.390 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:51.390 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:51.390 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.390 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.390 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.390 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:51.390 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:51.390 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:51.960 00:21:51.960 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:51.960 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:51.960 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:52.220 11:02:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:52.220 11:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:52.220 11:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.220 11:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.220 11:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.220 11:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:52.220 { 00:21:52.220 "cntlid": 93, 00:21:52.220 "qid": 0, 00:21:52.220 "state": "enabled", 00:21:52.220 "thread": "nvmf_tgt_poll_group_000", 00:21:52.220 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:52.220 "listen_address": { 00:21:52.220 "trtype": "TCP", 00:21:52.220 "adrfam": "IPv4", 00:21:52.220 "traddr": "10.0.0.2", 00:21:52.220 "trsvcid": "4420" 00:21:52.220 }, 00:21:52.220 "peer_address": { 00:21:52.220 "trtype": "TCP", 00:21:52.220 "adrfam": "IPv4", 00:21:52.220 "traddr": "10.0.0.1", 00:21:52.220 "trsvcid": "58724" 00:21:52.220 }, 00:21:52.220 "auth": { 00:21:52.220 "state": "completed", 00:21:52.220 "digest": "sha384", 00:21:52.220 "dhgroup": "ffdhe8192" 00:21:52.220 } 00:21:52.220 } 00:21:52.220 ]' 00:21:52.220 11:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:52.220 11:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:52.220 11:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:52.220 11:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:52.220 11:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:52.220 11:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:52.220 11:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:52.220 11:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:52.480 11:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2JiNGE2ZTk1NGVhOTQ5ZjI4ZDRiZWI4NmNiNTQ0MGFhNTYyMWQ4YmU3NTBlZDJkyzgxOQ==: --dhchap-ctrl-secret DHHC-1:01:OWUzNjViMmQxMTdlOTI4N2UxODhmMDU2Mjc1NTg5NzLR97E+: 00:21:52.480 11:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:Y2JiNGE2ZTk1NGVhOTQ5ZjI4ZDRiZWI4NmNiNTQ0MGFhNTYyMWQ4YmU3NTBlZDJkyzgxOQ==: --dhchap-ctrl-secret DHHC-1:01:OWUzNjViMmQxMTdlOTI4N2UxODhmMDU2Mjc1NTg5NzLR97E+: 00:21:53.050 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:53.310 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:53.310 11:02:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:53.310 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.310 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.310 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.310 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:53.310 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:53.310 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:53.310 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:21:53.310 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:53.310 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:53.310 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:53.310 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:53.310 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:53.310 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:21:53.310 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.310 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.310 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.310 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:53.310 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:53.310 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:53.879 00:21:53.879 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:53.879 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:53.879 
11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:54.139 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:54.139 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:54.139 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.139 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.139 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.139 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:54.139 { 00:21:54.139 "cntlid": 95, 00:21:54.139 "qid": 0, 00:21:54.139 "state": "enabled", 00:21:54.139 "thread": "nvmf_tgt_poll_group_000", 00:21:54.139 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:54.139 "listen_address": { 00:21:54.139 "trtype": "TCP", 00:21:54.139 "adrfam": "IPv4", 00:21:54.139 "traddr": "10.0.0.2", 00:21:54.139 "trsvcid": "4420" 00:21:54.139 }, 00:21:54.139 "peer_address": { 00:21:54.139 "trtype": "TCP", 00:21:54.139 "adrfam": "IPv4", 00:21:54.139 "traddr": "10.0.0.1", 00:21:54.139 "trsvcid": "58754" 00:21:54.139 }, 00:21:54.139 "auth": { 00:21:54.139 "state": "completed", 00:21:54.139 "digest": "sha384", 00:21:54.139 "dhgroup": "ffdhe8192" 00:21:54.139 } 00:21:54.139 } 00:21:54.139 ]' 00:21:54.139 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:54.139 11:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:54.139 11:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:54.139 11:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:54.139 11:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:54.139 11:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:54.139 11:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:54.139 11:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:54.399 11:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTdhZGExZWQ1ZjRiZDNkN2NjNTIwYWJjNGZmNThkYmNmYmU3M2NkOTRjMjAxZTA5NTI3ZjE2YmMwNmUwYmVkYYD9mts=: 00:21:54.399 11:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MTdhZGExZWQ1ZjRiZDNkN2NjNTIwYWJjNGZmNThkYmNmYmU3M2NkOTRjMjAxZTA5NTI3ZjE2YmMwNmUwYmVkYYD9mts=: 00:21:55.338 11:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:55.338 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:55.338 11:02:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:55.338 11:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.338 11:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.338 11:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.338 11:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:21:55.338 11:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:55.338 11:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:55.338 11:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:55.338 11:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:55.338 11:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:21:55.338 11:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:55.338 11:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:55.338 11:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:55.338 11:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:55.338 11:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:55.338 11:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:55.338 11:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.338 11:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.338 11:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.338 11:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:55.338 11:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:55.338 11:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:55.597 00:21:55.597 
11:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:55.598 11:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:55.598 11:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:55.857 11:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:55.857 11:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:55.857 11:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.857 11:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.857 11:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.857 11:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:55.857 { 00:21:55.857 "cntlid": 97, 00:21:55.857 "qid": 0, 00:21:55.857 "state": "enabled", 00:21:55.857 "thread": "nvmf_tgt_poll_group_000", 00:21:55.857 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:55.857 "listen_address": { 00:21:55.857 "trtype": "TCP", 00:21:55.857 "adrfam": "IPv4", 00:21:55.857 "traddr": "10.0.0.2", 00:21:55.857 "trsvcid": "4420" 00:21:55.857 }, 00:21:55.857 "peer_address": { 00:21:55.857 "trtype": "TCP", 00:21:55.857 "adrfam": "IPv4", 00:21:55.857 "traddr": "10.0.0.1", 00:21:55.857 "trsvcid": "58780" 00:21:55.857 }, 00:21:55.857 "auth": { 00:21:55.857 "state": "completed", 00:21:55.857 "digest": "sha512", 00:21:55.857 "dhgroup": "null" 00:21:55.857 } 00:21:55.857 } 00:21:55.857 ]' 00:21:55.857 11:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:55.857 11:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:55.857 11:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:55.857 11:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:55.857 11:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:55.857 11:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:55.857 11:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:55.857 11:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:56.115 11:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTZhMDQzNTRmNzA5NDkxMzMzODY4ZmJiOWJjMWUxY2JlZDAyY2Q1ZWQ4MTRiYTQ1ZIPZQQ==: --dhchap-ctrl-secret DHHC-1:03:NjIxM2RmMDg5NjY1ZDYyNTY0M2Y1MThlNjQwYWQ4OTk5ODE1NGFjNDAyMmE0ZmM4MjZhY2UyODdjNjhkNzBkYYSweH0=: 00:21:56.115 11:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 
00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YTZhMDQzNTRmNzA5NDkxMzMzODY4ZmJiOWJjMWUxY2JlZDAyY2Q1ZWQ4MTRiYTQ1ZIPZQQ==: --dhchap-ctrl-secret DHHC-1:03:NjIxM2RmMDg5NjY1ZDYyNTY0M2Y1MThlNjQwYWQ4OTk5ODE1NGFjNDAyMmE0ZmM4MjZhY2UyODdjNjhkNzBkYYSweH0=: 00:21:56.684 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:56.944 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:56.944 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:56.944 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.944 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.944 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.944 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:56.944 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:56.944 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:56.944 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:21:56.944 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:56.944 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:56.944 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:56.944 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:56.944 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:56.944 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:56.944 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.944 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.944 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.944 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:56.944 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:56.944 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:57.204 00:21:57.204 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:57.204 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:57.204 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:57.464 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:57.464 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:57.464 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.464 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.464 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.464 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:57.464 { 00:21:57.464 "cntlid": 99, 00:21:57.464 "qid": 0, 00:21:57.464 "state": "enabled", 00:21:57.464 "thread": "nvmf_tgt_poll_group_000", 00:21:57.464 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:57.464 "listen_address": { 00:21:57.464 "trtype": "TCP", 00:21:57.464 "adrfam": "IPv4", 00:21:57.464 "traddr": "10.0.0.2", 00:21:57.464 "trsvcid": "4420" 00:21:57.464 }, 00:21:57.464 "peer_address": { 00:21:57.464 "trtype": "TCP", 00:21:57.464 "adrfam": "IPv4", 00:21:57.464 "traddr": "10.0.0.1", 00:21:57.464 "trsvcid": "58616" 00:21:57.464 }, 00:21:57.464 "auth": { 00:21:57.464 "state": "completed", 00:21:57.464 "digest": "sha512", 00:21:57.464 "dhgroup": "null" 00:21:57.464 } 00:21:57.464 } 00:21:57.464 ]' 00:21:57.464 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:57.464 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:57.464 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:57.464 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:57.464 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:57.724 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:57.724 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:57.724 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:57.724 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjRiMzhiNDJmMzE5Nzk4OTE3MTViYTljZDA1ZGVhNjLzz9/s: --dhchap-ctrl-secret DHHC-1:02:NTcxNmE2YzBkNGRkYjI0OTU0MGQxYjM2NDk2MjA5Njg1YTIwMjc4Mzg5YjVlNDc0PfSEjQ==: 00:21:57.724 11:02:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ZjRiMzhiNDJmMzE5Nzk4OTE3MTViYTljZDA1ZGVhNjLzz9/s: --dhchap-ctrl-secret DHHC-1:02:NTcxNmE2YzBkNGRkYjI0OTU0MGQxYjM2NDk2MjA5Njg1YTIwMjc4Mzg5YjVlNDc0PfSEjQ==: 00:21:58.665 11:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:58.665 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:58.665 11:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:58.665 11:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.665 11:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.665 11:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.665 11:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:58.665 11:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:58.665 11:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:58.665 11:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:21:58.665 11:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:58.665 11:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:58.665 11:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:58.665 11:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:58.665 11:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:58.666 11:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:58.666 11:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.666 11:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.666 11:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.666 11:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:58.666 11:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
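[Editor's sketch] The iterations above all follow the same three-step pattern; only the key index changes. Roughly, one pass looks like the lines below (socket paths, address and NQNs are the ones from this run; key2/ckey2 name keys the test registered earlier, outside this excerpt):

    # One connect_authenticate pass (sha512, dhgroup "null", key index 2),
    # condensed from the xtrace above.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    hostsock=/var/tmp/host.sock
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396

    # 1) Pin the host-side bdev_nvme module to a single digest/dhgroup combination.
    "$rpc" -s "$hostsock" bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null

    # 2) Target side (default RPC socket): allow the host NQN on the subsystem,
    #    binding key2, plus ckey2 for bidirectional authentication.
    "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # 3) Host side: attach a controller with the matching key pair; DH-HMAC-CHAP
    #    runs during this connect.
    "$rpc" -s "$hostsock" bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2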
00:21:58.666 11:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:58.926 00:21:58.926 11:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:58.926 11:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:58.926 11:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:59.187 11:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:59.187 11:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:59.187 11:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.187 11:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.187 11:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.187 11:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:59.187 { 00:21:59.187 "cntlid": 101, 00:21:59.187 "qid": 0, 00:21:59.187 "state": "enabled", 00:21:59.187 "thread": "nvmf_tgt_poll_group_000", 00:21:59.187 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:59.187 "listen_address": { 00:21:59.187 "trtype": "TCP", 00:21:59.187 "adrfam": "IPv4", 00:21:59.187 "traddr": "10.0.0.2", 00:21:59.187 "trsvcid": "4420" 00:21:59.187 }, 00:21:59.187 "peer_address": { 00:21:59.187 "trtype": "TCP", 00:21:59.187 "adrfam": "IPv4", 00:21:59.187 "traddr": "10.0.0.1", 00:21:59.187 "trsvcid": "58666" 00:21:59.187 }, 00:21:59.187 "auth": { 00:21:59.187 "state": "completed", 00:21:59.187 "digest": "sha512", 00:21:59.187 "dhgroup": "null" 00:21:59.187 } 00:21:59.187 } 00:21:59.187 ]' 00:21:59.187 11:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:59.187 11:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:59.187 11:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:59.187 11:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:59.187 11:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:59.187 11:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:59.187 11:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:59.187 11:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:59.448 11:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:Y2JiNGE2ZTk1NGVhOTQ5ZjI4ZDRiZWI4NmNiNTQ0MGFhNTYyMWQ4YmU3NTBlZDJkyzgxOQ==: --dhchap-ctrl-secret DHHC-1:01:OWUzNjViMmQxMTdlOTI4N2UxODhmMDU2Mjc1NTg5NzLR97E+: 00:21:59.448 11:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:Y2JiNGE2ZTk1NGVhOTQ5ZjI4ZDRiZWI4NmNiNTQ0MGFhNTYyMWQ4YmU3NTBlZDJkyzgxOQ==: --dhchap-ctrl-secret DHHC-1:01:OWUzNjViMmQxMTdlOTI4N2UxODhmMDU2Mjc1NTg5NzLR97E+: 00:22:00.389 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:00.389 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:00.389 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:00.389 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.389 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.389 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.389 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:00.389 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:00.389 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:00.389 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:22:00.389 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:00.389 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:00.389 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:00.389 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:00.389 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:00.389 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:22:00.389 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.389 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.389 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.389 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:00.389 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:00.389 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:00.649 00:22:00.649 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:00.649 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:00.649 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:00.909 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:00.909 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:00.909 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.909 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.909 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.909 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:00.909 { 00:22:00.909 "cntlid": 103, 00:22:00.909 "qid": 0, 00:22:00.909 "state": "enabled", 00:22:00.909 "thread": "nvmf_tgt_poll_group_000", 00:22:00.909 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:00.909 "listen_address": { 00:22:00.909 "trtype": "TCP", 00:22:00.909 "adrfam": "IPv4", 00:22:00.909 "traddr": "10.0.0.2", 00:22:00.909 "trsvcid": "4420" 00:22:00.909 }, 00:22:00.909 "peer_address": { 00:22:00.909 "trtype": "TCP", 00:22:00.909 "adrfam": "IPv4", 00:22:00.909 "traddr": "10.0.0.1", 00:22:00.909 "trsvcid": "58688" 00:22:00.909 }, 00:22:00.909 "auth": { 00:22:00.909 "state": "completed", 00:22:00.909 "digest": "sha512", 00:22:00.909 "dhgroup": "null" 00:22:00.909 } 00:22:00.909 } 00:22:00.909 ]' 00:22:00.909 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:00.909 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:00.909 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:00.909 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:00.909 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:00.909 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:00.909 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:00.909 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:01.170 11:02:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTdhZGExZWQ1ZjRiZDNkN2NjNTIwYWJjNGZmNThkYmNmYmU3M2NkOTRjMjAxZTA5NTI3ZjE2YmMwNmUwYmVkYYD9mts=: 00:22:01.170 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MTdhZGExZWQ1ZjRiZDNkN2NjNTIwYWJjNGZmNThkYmNmYmU3M2NkOTRjMjAxZTA5NTI3ZjE2YmMwNmUwYmVkYYD9mts=: 00:22:02.112 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:02.112 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:02.112 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:02.112 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.112 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.112 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.112 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:02.112 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:02.112 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:02.112 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:02.112 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:22:02.112 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:02.112 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:02.112 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:02.112 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:02.112 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:02.112 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:02.112 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.112 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.112 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.112 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
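[Editor's sketch] After each attach, steps @73-@77 check that both ends agree before tearing down. That verification amounts to two RPC calls filtered through jq; a minimal standalone version, using the same RPCs and filters as the trace (expected dhgroup adjusted per iteration, ffdhe2048 at this point in the sweep):

    # Verify an authenticated connection, mirroring target/auth.sh steps @73-@78.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    hostsock=/var/tmp/host.sock
    subnqn=nqn.2024-03.io.spdk:cnode0

    # Host side: the controller created by bdev_nvme_attach_controller must exist.
    [[ $("$rpc" -s "$hostsock" bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]] || exit 1

    # Target side: the qpair must have completed DH-HMAC-CHAP with the expected
    # digest and DH group.
    qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]] || exit 1
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]] || exit 1
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]] || exit 1

    # Tear down, as in step @78.
    "$rpc" -s "$hostsock" bdev_nvme_detach_controller nvme0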
00:22:02.112 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:02.112 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:02.372 00:22:02.372 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:02.372 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:02.372 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:02.632 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:02.632 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:02.632 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.632 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.632 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.632 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:02.632 { 00:22:02.632 "cntlid": 105, 00:22:02.632 "qid": 0, 00:22:02.632 "state": "enabled", 00:22:02.632 "thread": "nvmf_tgt_poll_group_000", 00:22:02.632 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:02.632 "listen_address": { 00:22:02.632 "trtype": "TCP", 00:22:02.632 "adrfam": "IPv4", 00:22:02.632 "traddr": "10.0.0.2", 00:22:02.632 "trsvcid": "4420" 00:22:02.632 }, 00:22:02.632 "peer_address": { 00:22:02.632 "trtype": "TCP", 00:22:02.632 "adrfam": "IPv4", 00:22:02.632 "traddr": "10.0.0.1", 00:22:02.632 "trsvcid": "58702" 00:22:02.632 }, 00:22:02.632 "auth": { 00:22:02.632 "state": "completed", 00:22:02.632 "digest": "sha512", 00:22:02.633 "dhgroup": "ffdhe2048" 00:22:02.633 } 00:22:02.633 } 00:22:02.633 ]' 00:22:02.633 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:02.633 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:02.633 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:02.633 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:02.633 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:02.633 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:02.633 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:02.633 11:02:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:02.893 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTZhMDQzNTRmNzA5NDkxMzMzODY4ZmJiOWJjMWUxY2JlZDAyY2Q1ZWQ4MTRiYTQ1ZIPZQQ==: --dhchap-ctrl-secret DHHC-1:03:NjIxM2RmMDg5NjY1ZDYyNTY0M2Y1MThlNjQwYWQ4OTk5ODE1NGFjNDAyMmE0ZmM4MjZhY2UyODdjNjhkNzBkYYSweH0=: 00:22:02.893 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YTZhMDQzNTRmNzA5NDkxMzMzODY4ZmJiOWJjMWUxY2JlZDAyY2Q1ZWQ4MTRiYTQ1ZIPZQQ==: --dhchap-ctrl-secret DHHC-1:03:NjIxM2RmMDg5NjY1ZDYyNTY0M2Y1MThlNjQwYWQ4OTk5ODE1NGFjNDAyMmE0ZmM4MjZhY2UyODdjNjhkNzBkYYSweH0=: 00:22:03.832 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:03.832 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:03.832 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:03.832 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.832 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.832 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.832 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:03.832 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:03.832 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:03.832 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:22:03.832 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:03.832 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:03.832 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:03.832 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:03.832 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:03.832 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:03.832 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.832 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:03.832 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.832 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:03.832 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:03.832 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:04.092 00:22:04.092 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:04.092 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:04.092 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:04.351 11:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:04.351 11:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:04.351 11:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.351 11:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.351 11:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.351 11:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:04.351 { 00:22:04.351 "cntlid": 107, 00:22:04.351 "qid": 0, 00:22:04.351 "state": "enabled", 00:22:04.351 "thread": "nvmf_tgt_poll_group_000", 00:22:04.351 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:04.351 "listen_address": { 00:22:04.351 "trtype": "TCP", 00:22:04.351 "adrfam": "IPv4", 00:22:04.351 "traddr": "10.0.0.2", 00:22:04.351 "trsvcid": "4420" 00:22:04.351 }, 00:22:04.351 "peer_address": { 00:22:04.351 "trtype": "TCP", 00:22:04.351 "adrfam": "IPv4", 00:22:04.351 "traddr": "10.0.0.1", 00:22:04.351 "trsvcid": "58712" 00:22:04.351 }, 00:22:04.351 "auth": { 00:22:04.351 "state": "completed", 00:22:04.351 "digest": "sha512", 00:22:04.351 "dhgroup": "ffdhe2048" 00:22:04.351 } 00:22:04.351 } 00:22:04.351 ]' 00:22:04.351 11:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:04.351 11:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:04.351 11:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:04.351 11:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:04.351 11:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:22:04.351 11:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:04.351 11:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:04.351 11:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:04.611 11:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjRiMzhiNDJmMzE5Nzk4OTE3MTViYTljZDA1ZGVhNjLzz9/s: --dhchap-ctrl-secret DHHC-1:02:NTcxNmE2YzBkNGRkYjI0OTU0MGQxYjM2NDk2MjA5Njg1YTIwMjc4Mzg5YjVlNDc0PfSEjQ==: 00:22:04.611 11:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ZjRiMzhiNDJmMzE5Nzk4OTE3MTViYTljZDA1ZGVhNjLzz9/s: --dhchap-ctrl-secret DHHC-1:02:NTcxNmE2YzBkNGRkYjI0OTU0MGQxYjM2NDk2MjA5Njg1YTIwMjc4Mzg5YjVlNDc0PfSEjQ==: 00:22:05.548 11:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:05.548 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:05.548 11:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:05.548 11:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.548 11:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.548 11:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.548 11:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:05.548 11:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:05.548 11:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:05.548 11:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:22:05.548 11:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:05.548 11:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:05.548 11:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:05.548 11:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:05.548 11:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:05.548 11:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
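[Editor's sketch] Interleaved with the SPDK-host passes, steps @80/@36 and @82 push the same key material through the kernel initiator via nvme-cli. Stripped of the long DHHC-1 blobs (shown in full above), that round trip is:

    # Kernel nvme-cli round trip against the SPDK target (steps @36 and @82).
    # <secret> / <ctrl-secret> stand in for the DHHC-1:xx:...: strings logged above.
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
        --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 \
        --dhchap-secret '<secret>' --dhchap-ctrl-secret '<ctrl-secret>'

    # ... the controller is now connected and authenticated ...

    nvme disconnect -n nqn.2024-03.io.spdk:cnode0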
00:22:05.548 11:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.548 11:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.548 11:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.548 11:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:05.548 11:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:05.548 11:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:05.808 00:22:05.808 11:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:05.808 11:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:05.808 11:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:06.068 11:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:06.068 11:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:06.068 11:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.068 11:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.068 11:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.068 11:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:06.068 { 00:22:06.068 "cntlid": 109, 00:22:06.068 "qid": 0, 00:22:06.068 "state": "enabled", 00:22:06.068 "thread": "nvmf_tgt_poll_group_000", 00:22:06.068 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:06.068 "listen_address": { 00:22:06.068 "trtype": "TCP", 00:22:06.068 "adrfam": "IPv4", 00:22:06.068 "traddr": "10.0.0.2", 00:22:06.068 "trsvcid": "4420" 00:22:06.068 }, 00:22:06.068 "peer_address": { 00:22:06.068 "trtype": "TCP", 00:22:06.068 "adrfam": "IPv4", 00:22:06.068 "traddr": "10.0.0.1", 00:22:06.068 "trsvcid": "58730" 00:22:06.068 }, 00:22:06.068 "auth": { 00:22:06.068 "state": "completed", 00:22:06.068 "digest": "sha512", 00:22:06.068 "dhgroup": "ffdhe2048" 00:22:06.068 } 00:22:06.068 } 00:22:06.068 ]' 00:22:06.068 11:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:06.068 11:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:06.068 11:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:06.068 11:02:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:06.068 11:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:06.068 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:06.068 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:06.069 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:06.329 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2JiNGE2ZTk1NGVhOTQ5ZjI4ZDRiZWI4NmNiNTQ0MGFhNTYyMWQ4YmU3NTBlZDJkyzgxOQ==: --dhchap-ctrl-secret DHHC-1:01:OWUzNjViMmQxMTdlOTI4N2UxODhmMDU2Mjc1NTg5NzLR97E+: 00:22:06.329 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:Y2JiNGE2ZTk1NGVhOTQ5ZjI4ZDRiZWI4NmNiNTQ0MGFhNTYyMWQ4YmU3NTBlZDJkyzgxOQ==: --dhchap-ctrl-secret DHHC-1:01:OWUzNjViMmQxMTdlOTI4N2UxODhmMDU2Mjc1NTg5NzLR97E+: 00:22:07.270 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:07.270 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:07.270 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:07.270 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.270 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.270 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.270 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:07.271 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:07.271 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:07.271 11:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:22:07.271 11:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:07.271 11:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:07.271 11:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:07.271 11:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:07.271 11:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:07.271 11:02:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:22:07.271 11:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.271 11:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.271 11:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.271 11:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:07.271 11:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:07.271 11:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:07.531 00:22:07.531 11:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:07.531 11:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:07.531 11:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:07.791 11:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:07.791 11:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:07.791 11:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.791 11:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.791 11:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.791 11:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:07.791 { 00:22:07.791 "cntlid": 111, 00:22:07.791 "qid": 0, 00:22:07.791 "state": "enabled", 00:22:07.791 "thread": "nvmf_tgt_poll_group_000", 00:22:07.791 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:07.791 "listen_address": { 00:22:07.791 "trtype": "TCP", 00:22:07.791 "adrfam": "IPv4", 00:22:07.791 "traddr": "10.0.0.2", 00:22:07.791 "trsvcid": "4420" 00:22:07.791 }, 00:22:07.791 "peer_address": { 00:22:07.792 "trtype": "TCP", 00:22:07.792 "adrfam": "IPv4", 00:22:07.792 "traddr": "10.0.0.1", 00:22:07.792 "trsvcid": "37886" 00:22:07.792 }, 00:22:07.792 "auth": { 00:22:07.792 "state": "completed", 00:22:07.792 "digest": "sha512", 00:22:07.792 "dhgroup": "ffdhe2048" 00:22:07.792 } 00:22:07.792 } 00:22:07.792 ]' 00:22:07.792 11:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:07.792 11:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:07.792 
11:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:07.792 11:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:07.792 11:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:07.792 11:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:07.792 11:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:07.792 11:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:08.051 11:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTdhZGExZWQ1ZjRiZDNkN2NjNTIwYWJjNGZmNThkYmNmYmU3M2NkOTRjMjAxZTA5NTI3ZjE2YmMwNmUwYmVkYYD9mts=: 00:22:08.051 11:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MTdhZGExZWQ1ZjRiZDNkN2NjNTIwYWJjNGZmNThkYmNmYmU3M2NkOTRjMjAxZTA5NTI3ZjE2YmMwNmUwYmVkYYD9mts=: 00:22:08.992 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:08.992 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:08.992 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:08.992 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.992 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.992 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.992 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:08.992 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:08.992 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:08.992 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:08.992 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:22:08.992 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:08.992 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:08.992 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:08.992 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:08.992 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:08.992 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:08.992 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.993 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.993 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.993 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:08.993 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:08.993 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:09.253 00:22:09.253 11:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:09.253 11:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:09.253 11:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:09.513 11:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:09.513 11:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:09.513 11:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.513 11:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.513 11:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.513 11:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:09.513 { 00:22:09.513 "cntlid": 113, 00:22:09.513 "qid": 0, 00:22:09.513 "state": "enabled", 00:22:09.513 "thread": "nvmf_tgt_poll_group_000", 00:22:09.513 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:09.513 "listen_address": { 00:22:09.513 "trtype": "TCP", 00:22:09.513 "adrfam": "IPv4", 00:22:09.513 "traddr": "10.0.0.2", 00:22:09.513 "trsvcid": "4420" 00:22:09.513 }, 00:22:09.513 "peer_address": { 00:22:09.513 "trtype": "TCP", 00:22:09.513 "adrfam": "IPv4", 00:22:09.513 "traddr": "10.0.0.1", 00:22:09.513 "trsvcid": "37912" 00:22:09.513 }, 00:22:09.513 "auth": { 00:22:09.513 "state": "completed", 00:22:09.513 "digest": "sha512", 00:22:09.513 "dhgroup": "ffdhe3072" 00:22:09.513 } 00:22:09.513 } 00:22:09.513 ]' 00:22:09.513 11:02:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:09.513 11:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:09.513 11:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:09.513 11:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:09.513 11:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:09.513 11:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:09.513 11:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:09.513 11:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:09.773 11:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTZhMDQzNTRmNzA5NDkxMzMzODY4ZmJiOWJjMWUxY2JlZDAyY2Q1ZWQ4MTRiYTQ1ZIPZQQ==: --dhchap-ctrl-secret DHHC-1:03:NjIxM2RmMDg5NjY1ZDYyNTY0M2Y1MThlNjQwYWQ4OTk5ODE1NGFjNDAyMmE0ZmM4MjZhY2UyODdjNjhkNzBkYYSweH0=: 00:22:09.773 11:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YTZhMDQzNTRmNzA5NDkxMzMzODY4ZmJiOWJjMWUxY2JlZDAyY2Q1ZWQ4MTRiYTQ1ZIPZQQ==: --dhchap-ctrl-secret DHHC-1:03:NjIxM2RmMDg5NjY1ZDYyNTY0M2Y1MThlNjQwYWQ4OTk5ODE1NGFjNDAyMmE0ZmM4MjZhY2UyODdjNjhkNzBkYYSweH0=: 00:22:10.712 11:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:10.712 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:10.712 11:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:10.712 11:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.712 11:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.712 11:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.712 11:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:10.712 11:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:10.712 11:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:10.712 11:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:22:10.712 11:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:10.712 11:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:22:10.712 11:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:10.712 11:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:10.712 11:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:10.712 11:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:10.712 11:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.712 11:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.712 11:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.712 11:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:10.712 11:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:10.712 11:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:10.972 00:22:10.972 11:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:10.973 11:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:10.973 11:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:11.232 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:11.232 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:11.232 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.232 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.232 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.232 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:11.232 { 00:22:11.232 "cntlid": 115, 00:22:11.232 "qid": 0, 00:22:11.232 "state": "enabled", 00:22:11.232 "thread": "nvmf_tgt_poll_group_000", 00:22:11.232 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:11.232 "listen_address": { 00:22:11.232 "trtype": "TCP", 00:22:11.232 "adrfam": "IPv4", 00:22:11.232 "traddr": "10.0.0.2", 00:22:11.232 "trsvcid": "4420" 00:22:11.232 }, 00:22:11.232 "peer_address": { 00:22:11.232 "trtype": "TCP", 00:22:11.232 "adrfam": "IPv4", 
00:22:11.232 "traddr": "10.0.0.1", 00:22:11.232 "trsvcid": "37952" 00:22:11.232 }, 00:22:11.232 "auth": { 00:22:11.232 "state": "completed", 00:22:11.232 "digest": "sha512", 00:22:11.232 "dhgroup": "ffdhe3072" 00:22:11.232 } 00:22:11.232 } 00:22:11.232 ]' 00:22:11.232 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:11.232 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:11.232 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:11.232 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:11.232 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:11.232 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:11.232 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:11.232 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:11.492 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjRiMzhiNDJmMzE5Nzk4OTE3MTViYTljZDA1ZGVhNjLzz9/s: --dhchap-ctrl-secret DHHC-1:02:NTcxNmE2YzBkNGRkYjI0OTU0MGQxYjM2NDk2MjA5Njg1YTIwMjc4Mzg5YjVlNDc0PfSEjQ==: 00:22:11.493 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ZjRiMzhiNDJmMzE5Nzk4OTE3MTViYTljZDA1ZGVhNjLzz9/s: --dhchap-ctrl-secret DHHC-1:02:NTcxNmE2YzBkNGRkYjI0OTU0MGQxYjM2NDk2MjA5Njg1YTIwMjc4Mzg5YjVlNDc0PfSEjQ==: 00:22:12.433 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:12.433 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:12.433 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:12.433 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.433 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.433 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.433 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:12.433 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:12.434 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:12.434 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:22:12.434 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:12.434 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:12.434 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:12.434 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:12.434 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:12.434 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:12.434 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.434 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.434 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.434 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:12.434 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:12.434 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:12.694 00:22:12.694 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:12.694 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:12.694 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:12.954 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:12.954 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:12.954 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.954 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.954 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.954 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:12.954 { 00:22:12.954 "cntlid": 117, 00:22:12.954 "qid": 0, 00:22:12.954 "state": "enabled", 00:22:12.954 "thread": "nvmf_tgt_poll_group_000", 00:22:12.954 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:12.954 "listen_address": { 00:22:12.954 "trtype": "TCP", 
00:22:12.954 "adrfam": "IPv4", 00:22:12.954 "traddr": "10.0.0.2", 00:22:12.954 "trsvcid": "4420" 00:22:12.954 }, 00:22:12.954 "peer_address": { 00:22:12.954 "trtype": "TCP", 00:22:12.954 "adrfam": "IPv4", 00:22:12.954 "traddr": "10.0.0.1", 00:22:12.954 "trsvcid": "37990" 00:22:12.954 }, 00:22:12.954 "auth": { 00:22:12.954 "state": "completed", 00:22:12.954 "digest": "sha512", 00:22:12.954 "dhgroup": "ffdhe3072" 00:22:12.954 } 00:22:12.954 } 00:22:12.954 ]' 00:22:12.954 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:12.954 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:12.954 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:12.954 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:12.954 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:12.954 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:12.954 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:12.954 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:13.215 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2JiNGE2ZTk1NGVhOTQ5ZjI4ZDRiZWI4NmNiNTQ0MGFhNTYyMWQ4YmU3NTBlZDJkyzgxOQ==: --dhchap-ctrl-secret DHHC-1:01:OWUzNjViMmQxMTdlOTI4N2UxODhmMDU2Mjc1NTg5NzLR97E+: 00:22:13.215 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:Y2JiNGE2ZTk1NGVhOTQ5ZjI4ZDRiZWI4NmNiNTQ0MGFhNTYyMWQ4YmU3NTBlZDJkyzgxOQ==: --dhchap-ctrl-secret DHHC-1:01:OWUzNjViMmQxMTdlOTI4N2UxODhmMDU2Mjc1NTg5NzLR97E+: 00:22:14.156 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:14.156 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:14.156 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:14.156 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.156 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.156 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.156 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:14.156 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:14.156 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:14.156 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:22:14.156 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:14.156 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:14.156 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:14.156 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:14.156 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:14.156 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:22:14.156 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.156 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.156 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.156 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:14.156 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:14.156 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:14.415 00:22:14.415 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:14.415 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:14.415 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:14.674 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:14.674 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:14.674 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.674 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.674 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.674 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:14.674 { 00:22:14.674 "cntlid": 119, 00:22:14.674 "qid": 0, 00:22:14.674 "state": "enabled", 00:22:14.674 "thread": "nvmf_tgt_poll_group_000", 00:22:14.674 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:14.674 "listen_address": { 00:22:14.674 "trtype": "TCP", 00:22:14.674 "adrfam": "IPv4", 00:22:14.674 "traddr": "10.0.0.2", 00:22:14.674 "trsvcid": "4420" 00:22:14.674 }, 00:22:14.674 "peer_address": { 00:22:14.674 "trtype": "TCP", 00:22:14.674 "adrfam": "IPv4", 00:22:14.674 "traddr": "10.0.0.1", 00:22:14.674 "trsvcid": "38032" 00:22:14.674 }, 00:22:14.674 "auth": { 00:22:14.674 "state": "completed", 00:22:14.674 "digest": "sha512", 00:22:14.674 "dhgroup": "ffdhe3072" 00:22:14.674 } 00:22:14.674 } 00:22:14.674 ]' 00:22:14.674 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:14.674 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:14.674 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:14.674 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:14.674 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:14.674 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:14.935 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:14.935 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:14.935 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTdhZGExZWQ1ZjRiZDNkN2NjNTIwYWJjNGZmNThkYmNmYmU3M2NkOTRjMjAxZTA5NTI3ZjE2YmMwNmUwYmVkYYD9mts=: 00:22:14.935 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MTdhZGExZWQ1ZjRiZDNkN2NjNTIwYWJjNGZmNThkYmNmYmU3M2NkOTRjMjAxZTA5NTI3ZjE2YmMwNmUwYmVkYYD9mts=: 00:22:15.878 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:15.878 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:15.878 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:15.878 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.878 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.878 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.878 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:15.878 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:15.878 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:15.878 11:02:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:15.878 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:22:15.878 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:15.878 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:15.878 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:15.878 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:15.878 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:15.878 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:15.878 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.878 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.878 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.878 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:15.878 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:15.879 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:16.139 00:22:16.139 11:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:16.139 11:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:16.139 11:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:16.399 11:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:16.399 11:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:16.399 11:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.399 11:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.399 11:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.399 11:02:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:16.399 { 00:22:16.399 "cntlid": 121, 00:22:16.399 "qid": 0, 00:22:16.399 "state": "enabled", 00:22:16.399 "thread": "nvmf_tgt_poll_group_000", 00:22:16.399 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:16.399 "listen_address": { 00:22:16.399 "trtype": "TCP", 00:22:16.399 "adrfam": "IPv4", 00:22:16.399 "traddr": "10.0.0.2", 00:22:16.399 "trsvcid": "4420" 00:22:16.399 }, 00:22:16.399 "peer_address": { 00:22:16.399 "trtype": "TCP", 00:22:16.399 "adrfam": "IPv4", 00:22:16.399 "traddr": "10.0.0.1", 00:22:16.399 "trsvcid": "56662" 00:22:16.399 }, 00:22:16.399 "auth": { 00:22:16.399 "state": "completed", 00:22:16.399 "digest": "sha512", 00:22:16.399 "dhgroup": "ffdhe4096" 00:22:16.399 } 00:22:16.399 } 00:22:16.399 ]' 00:22:16.399 11:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:16.399 11:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:16.399 11:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:16.659 11:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:16.659 11:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:16.659 11:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:16.659 11:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:16.659 11:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:16.659 11:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTZhMDQzNTRmNzA5NDkxMzMzODY4ZmJiOWJjMWUxY2JlZDAyY2Q1ZWQ4MTRiYTQ1ZIPZQQ==: --dhchap-ctrl-secret DHHC-1:03:NjIxM2RmMDg5NjY1ZDYyNTY0M2Y1MThlNjQwYWQ4OTk5ODE1NGFjNDAyMmE0ZmM4MjZhY2UyODdjNjhkNzBkYYSweH0=: 00:22:16.659 11:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YTZhMDQzNTRmNzA5NDkxMzMzODY4ZmJiOWJjMWUxY2JlZDAyY2Q1ZWQ4MTRiYTQ1ZIPZQQ==: --dhchap-ctrl-secret DHHC-1:03:NjIxM2RmMDg5NjY1ZDYyNTY0M2Y1MThlNjQwYWQ4OTk5ODE1NGFjNDAyMmE0ZmM4MjZhY2UyODdjNjhkNzBkYYSweH0=: 00:22:17.601 11:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:17.601 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:17.601 11:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:17.601 11:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.601 11:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.601 11:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
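With the ffdhe4096 qpair up, the script asserts on the auth block of the qpairs JSON dumped above and then replays the same credentials through the kernel initiator via nvme-cli. A minimal sketch of those checks, using only commands that appear in this trace; the <<< plumbing and the "$hostnqn"/"$hostid"/"$key"/"$ckey" placeholders are illustrative (the actual DHHC-1:xx:...: secret strings are shown verbatim in the log lines above and below).

    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]   # negotiated hash
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]   # negotiated DH group
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]   # handshake finished
    # same key pair through the kernel initiator, secrets in DHHC-1 wire format
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid "$hostid" -l 0 \
        --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
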
00:22:17.601 11:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:17.601 11:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:17.601 11:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:17.601 11:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:22:17.601 11:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:17.601 11:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:17.601 11:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:17.601 11:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:17.601 11:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:17.601 11:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:17.601 11:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.601 11:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.861 11:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.861 11:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:17.861 11:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:17.861 11:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:18.121 00:22:18.121 11:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:18.121 11:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:18.121 11:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:18.121 11:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:18.121 11:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:18.121 11:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.121 11:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.121 11:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.121 11:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:18.121 { 00:22:18.121 "cntlid": 123, 00:22:18.121 "qid": 0, 00:22:18.121 "state": "enabled", 00:22:18.121 "thread": "nvmf_tgt_poll_group_000", 00:22:18.121 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:18.121 "listen_address": { 00:22:18.121 "trtype": "TCP", 00:22:18.121 "adrfam": "IPv4", 00:22:18.121 "traddr": "10.0.0.2", 00:22:18.121 "trsvcid": "4420" 00:22:18.121 }, 00:22:18.121 "peer_address": { 00:22:18.121 "trtype": "TCP", 00:22:18.121 "adrfam": "IPv4", 00:22:18.121 "traddr": "10.0.0.1", 00:22:18.121 "trsvcid": "56694" 00:22:18.121 }, 00:22:18.121 "auth": { 00:22:18.121 "state": "completed", 00:22:18.121 "digest": "sha512", 00:22:18.121 "dhgroup": "ffdhe4096" 00:22:18.121 } 00:22:18.121 } 00:22:18.121 ]' 00:22:18.121 11:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:18.379 11:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:18.379 11:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:18.379 11:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:18.379 11:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:18.379 11:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:18.379 11:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:18.379 11:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:18.638 11:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjRiMzhiNDJmMzE5Nzk4OTE3MTViYTljZDA1ZGVhNjLzz9/s: --dhchap-ctrl-secret DHHC-1:02:NTcxNmE2YzBkNGRkYjI0OTU0MGQxYjM2NDk2MjA5Njg1YTIwMjc4Mzg5YjVlNDc0PfSEjQ==: 00:22:18.638 11:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ZjRiMzhiNDJmMzE5Nzk4OTE3MTViYTljZDA1ZGVhNjLzz9/s: --dhchap-ctrl-secret DHHC-1:02:NTcxNmE2YzBkNGRkYjI0OTU0MGQxYjM2NDk2MjA5Njg1YTIwMjc4Mzg5YjVlNDc0PfSEjQ==: 00:22:19.207 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:19.207 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:19.207 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:19.207 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.207 11:02:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.207 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.207 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:19.207 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:19.207 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:19.467 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:22:19.467 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:19.467 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:19.467 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:19.467 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:19.467 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:19.467 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:19.467 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.467 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.467 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.467 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:19.467 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:19.467 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:19.727 00:22:19.727 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:19.727 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:19.727 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:19.987 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:19.987 11:02:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:19.987 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.987 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.987 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.987 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:19.987 { 00:22:19.987 "cntlid": 125, 00:22:19.987 "qid": 0, 00:22:19.987 "state": "enabled", 00:22:19.987 "thread": "nvmf_tgt_poll_group_000", 00:22:19.987 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:19.987 "listen_address": { 00:22:19.987 "trtype": "TCP", 00:22:19.987 "adrfam": "IPv4", 00:22:19.987 "traddr": "10.0.0.2", 00:22:19.987 "trsvcid": "4420" 00:22:19.987 }, 00:22:19.987 "peer_address": { 00:22:19.987 "trtype": "TCP", 00:22:19.987 "adrfam": "IPv4", 00:22:19.987 "traddr": "10.0.0.1", 00:22:19.987 "trsvcid": "56720" 00:22:19.987 }, 00:22:19.987 "auth": { 00:22:19.987 "state": "completed", 00:22:19.987 "digest": "sha512", 00:22:19.987 "dhgroup": "ffdhe4096" 00:22:19.987 } 00:22:19.987 } 00:22:19.987 ]' 00:22:19.987 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:19.987 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:19.987 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:19.987 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:19.987 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:19.987 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:19.987 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:19.987 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:20.247 11:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2JiNGE2ZTk1NGVhOTQ5ZjI4ZDRiZWI4NmNiNTQ0MGFhNTYyMWQ4YmU3NTBlZDJkyzgxOQ==: --dhchap-ctrl-secret DHHC-1:01:OWUzNjViMmQxMTdlOTI4N2UxODhmMDU2Mjc1NTg5NzLR97E+: 00:22:20.247 11:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:Y2JiNGE2ZTk1NGVhOTQ5ZjI4ZDRiZWI4NmNiNTQ0MGFhNTYyMWQ4YmU3NTBlZDJkyzgxOQ==: --dhchap-ctrl-secret DHHC-1:01:OWUzNjViMmQxMTdlOTI4N2UxODhmMDU2Mjc1NTg5NzLR97E+: 00:22:21.186 11:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:21.186 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:21.186 11:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:21.186 11:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.186 11:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.186 11:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.186 11:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:21.186 11:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:21.186 11:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:21.186 11:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:22:21.186 11:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:21.186 11:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:21.186 11:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:21.186 11:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:21.186 11:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:21.186 11:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:22:21.186 11:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.186 11:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.186 11:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.186 11:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:21.186 11:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:21.186 11:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:21.446 00:22:21.446 11:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:21.446 11:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:21.446 11:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:21.706 11:02:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:21.706 11:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:21.706 11:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.706 11:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.706 11:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.707 11:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:21.707 { 00:22:21.707 "cntlid": 127, 00:22:21.707 "qid": 0, 00:22:21.707 "state": "enabled", 00:22:21.707 "thread": "nvmf_tgt_poll_group_000", 00:22:21.707 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:21.707 "listen_address": { 00:22:21.707 "trtype": "TCP", 00:22:21.707 "adrfam": "IPv4", 00:22:21.707 "traddr": "10.0.0.2", 00:22:21.707 "trsvcid": "4420" 00:22:21.707 }, 00:22:21.707 "peer_address": { 00:22:21.707 "trtype": "TCP", 00:22:21.707 "adrfam": "IPv4", 00:22:21.707 "traddr": "10.0.0.1", 00:22:21.707 "trsvcid": "56750" 00:22:21.707 }, 00:22:21.707 "auth": { 00:22:21.707 "state": "completed", 00:22:21.707 "digest": "sha512", 00:22:21.707 "dhgroup": "ffdhe4096" 00:22:21.707 } 00:22:21.707 } 00:22:21.707 ]' 00:22:21.707 11:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:21.707 11:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:21.707 11:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:21.707 11:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:21.707 11:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:21.707 11:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:21.707 11:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:21.707 11:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:21.967 11:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTdhZGExZWQ1ZjRiZDNkN2NjNTIwYWJjNGZmNThkYmNmYmU3M2NkOTRjMjAxZTA5NTI3ZjE2YmMwNmUwYmVkYYD9mts=: 00:22:21.967 11:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MTdhZGExZWQ1ZjRiZDNkN2NjNTIwYWJjNGZmNThkYmNmYmU3M2NkOTRjMjAxZTA5NTI3ZjE2YmMwNmUwYmVkYYD9mts=: 00:22:22.977 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:22.977 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:22.977 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:22.977 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.977 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.977 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.977 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:22.977 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:22.977 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:22.977 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:22.977 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:22:22.977 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:22.977 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:22.977 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:22.977 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:22.977 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:22.977 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:22.977 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.977 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.977 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.977 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:22.977 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:22.977 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:23.276 00:22:23.276 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:23.276 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:23.276 
11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:23.548 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:23.548 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:23.548 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.548 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.548 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.548 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:23.548 { 00:22:23.548 "cntlid": 129, 00:22:23.548 "qid": 0, 00:22:23.548 "state": "enabled", 00:22:23.548 "thread": "nvmf_tgt_poll_group_000", 00:22:23.548 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:23.548 "listen_address": { 00:22:23.548 "trtype": "TCP", 00:22:23.548 "adrfam": "IPv4", 00:22:23.548 "traddr": "10.0.0.2", 00:22:23.548 "trsvcid": "4420" 00:22:23.548 }, 00:22:23.548 "peer_address": { 00:22:23.548 "trtype": "TCP", 00:22:23.548 "adrfam": "IPv4", 00:22:23.548 "traddr": "10.0.0.1", 00:22:23.548 "trsvcid": "56768" 00:22:23.548 }, 00:22:23.548 "auth": { 00:22:23.548 "state": "completed", 00:22:23.548 "digest": "sha512", 00:22:23.548 "dhgroup": "ffdhe6144" 00:22:23.548 } 00:22:23.548 } 00:22:23.548 ]' 00:22:23.548 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:23.548 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:23.548 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:23.548 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:23.548 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:23.548 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:23.548 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:23.548 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:23.809 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTZhMDQzNTRmNzA5NDkxMzMzODY4ZmJiOWJjMWUxY2JlZDAyY2Q1ZWQ4MTRiYTQ1ZIPZQQ==: --dhchap-ctrl-secret DHHC-1:03:NjIxM2RmMDg5NjY1ZDYyNTY0M2Y1MThlNjQwYWQ4OTk5ODE1NGFjNDAyMmE0ZmM4MjZhY2UyODdjNjhkNzBkYYSweH0=: 00:22:23.809 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YTZhMDQzNTRmNzA5NDkxMzMzODY4ZmJiOWJjMWUxY2JlZDAyY2Q1ZWQ4MTRiYTQ1ZIPZQQ==: --dhchap-ctrl-secret 
DHHC-1:03:NjIxM2RmMDg5NjY1ZDYyNTY0M2Y1MThlNjQwYWQ4OTk5ODE1NGFjNDAyMmE0ZmM4MjZhY2UyODdjNjhkNzBkYYSweH0=: 00:22:24.750 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:24.750 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:24.750 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:24.750 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.750 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.750 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.750 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:24.750 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:24.751 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:24.751 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:22:24.751 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:24.751 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:24.751 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:24.751 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:24.751 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:24.751 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:24.751 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.751 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.751 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.751 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:24.751 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:24.751 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:25.011 00:22:25.272 11:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:25.272 11:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:25.272 11:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:25.272 11:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:25.272 11:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:25.272 11:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.272 11:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.272 11:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.272 11:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:25.272 { 00:22:25.272 "cntlid": 131, 00:22:25.272 "qid": 0, 00:22:25.272 "state": "enabled", 00:22:25.272 "thread": "nvmf_tgt_poll_group_000", 00:22:25.272 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:25.272 "listen_address": { 00:22:25.272 "trtype": "TCP", 00:22:25.272 "adrfam": "IPv4", 00:22:25.272 "traddr": "10.0.0.2", 00:22:25.272 "trsvcid": "4420" 00:22:25.272 }, 00:22:25.272 "peer_address": { 00:22:25.272 "trtype": "TCP", 00:22:25.272 "adrfam": "IPv4", 00:22:25.272 "traddr": "10.0.0.1", 00:22:25.272 "trsvcid": "56796" 00:22:25.272 }, 00:22:25.272 "auth": { 00:22:25.272 "state": "completed", 00:22:25.272 "digest": "sha512", 00:22:25.272 "dhgroup": "ffdhe6144" 00:22:25.272 } 00:22:25.272 } 00:22:25.272 ]' 00:22:25.272 11:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:25.272 11:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:25.272 11:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:25.532 11:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:25.532 11:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:25.532 11:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:25.532 11:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:25.532 11:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:25.532 11:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjRiMzhiNDJmMzE5Nzk4OTE3MTViYTljZDA1ZGVhNjLzz9/s: --dhchap-ctrl-secret DHHC-1:02:NTcxNmE2YzBkNGRkYjI0OTU0MGQxYjM2NDk2MjA5Njg1YTIwMjc4Mzg5YjVlNDc0PfSEjQ==: 00:22:25.532 11:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ZjRiMzhiNDJmMzE5Nzk4OTE3MTViYTljZDA1ZGVhNjLzz9/s: --dhchap-ctrl-secret DHHC-1:02:NTcxNmE2YzBkNGRkYjI0OTU0MGQxYjM2NDk2MjA5Njg1YTIwMjc4Mzg5YjVlNDc0PfSEjQ==: 00:22:26.472 11:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:26.472 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:26.472 11:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:26.472 11:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.472 11:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.472 11:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.472 11:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:26.472 11:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:26.472 11:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:26.472 11:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:22:26.472 11:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:26.472 11:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:26.472 11:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:26.472 11:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:26.472 11:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:26.472 11:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:26.472 11:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.472 11:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.472 11:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.472 11:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:26.472 11:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:26.472 11:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:27.042 00:22:27.042 11:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:27.042 11:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:27.042 11:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:27.042 11:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:27.042 11:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:27.042 11:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.042 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.042 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.042 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:27.042 { 00:22:27.042 "cntlid": 133, 00:22:27.042 "qid": 0, 00:22:27.042 "state": "enabled", 00:22:27.042 "thread": "nvmf_tgt_poll_group_000", 00:22:27.042 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:27.042 "listen_address": { 00:22:27.042 "trtype": "TCP", 00:22:27.042 "adrfam": "IPv4", 00:22:27.042 "traddr": "10.0.0.2", 00:22:27.042 "trsvcid": "4420" 00:22:27.042 }, 00:22:27.042 "peer_address": { 00:22:27.042 "trtype": "TCP", 00:22:27.042 "adrfam": "IPv4", 00:22:27.042 "traddr": "10.0.0.1", 00:22:27.042 "trsvcid": "59422" 00:22:27.042 }, 00:22:27.042 "auth": { 00:22:27.042 "state": "completed", 00:22:27.042 "digest": "sha512", 00:22:27.042 "dhgroup": "ffdhe6144" 00:22:27.042 } 00:22:27.042 } 00:22:27.042 ]' 00:22:27.042 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:27.303 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:27.303 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:27.303 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:27.303 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:27.303 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:27.303 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:27.303 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:27.563 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2JiNGE2ZTk1NGVhOTQ5ZjI4ZDRiZWI4NmNiNTQ0MGFhNTYyMWQ4YmU3NTBlZDJkyzgxOQ==: --dhchap-ctrl-secret 
DHHC-1:01:OWUzNjViMmQxMTdlOTI4N2UxODhmMDU2Mjc1NTg5NzLR97E+: 00:22:27.563 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:Y2JiNGE2ZTk1NGVhOTQ5ZjI4ZDRiZWI4NmNiNTQ0MGFhNTYyMWQ4YmU3NTBlZDJkyzgxOQ==: --dhchap-ctrl-secret DHHC-1:01:OWUzNjViMmQxMTdlOTI4N2UxODhmMDU2Mjc1NTg5NzLR97E+: 00:22:28.134 11:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:28.134 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:28.134 11:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:28.134 11:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.134 11:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.134 11:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.134 11:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:28.134 11:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:28.134 11:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:28.394 11:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:22:28.394 11:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:28.394 11:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:28.394 11:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:28.394 11:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:28.394 11:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:28.394 11:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:22:28.394 11:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.394 11:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.394 11:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.394 11:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:28.394 11:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:22:28.394 11:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:28.654 00:22:28.654 11:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:28.654 11:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:28.654 11:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:28.915 11:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:28.915 11:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:28.915 11:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.915 11:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.915 11:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.915 11:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:28.915 { 00:22:28.915 "cntlid": 135, 00:22:28.915 "qid": 0, 00:22:28.915 "state": "enabled", 00:22:28.915 "thread": "nvmf_tgt_poll_group_000", 00:22:28.915 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:28.915 "listen_address": { 00:22:28.915 "trtype": "TCP", 00:22:28.915 "adrfam": "IPv4", 00:22:28.915 "traddr": "10.0.0.2", 00:22:28.915 "trsvcid": "4420" 00:22:28.915 }, 00:22:28.915 "peer_address": { 00:22:28.915 "trtype": "TCP", 00:22:28.915 "adrfam": "IPv4", 00:22:28.915 "traddr": "10.0.0.1", 00:22:28.915 "trsvcid": "59454" 00:22:28.915 }, 00:22:28.915 "auth": { 00:22:28.915 "state": "completed", 00:22:28.915 "digest": "sha512", 00:22:28.915 "dhgroup": "ffdhe6144" 00:22:28.915 } 00:22:28.915 } 00:22:28.915 ]' 00:22:28.915 11:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:28.915 11:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:28.915 11:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:29.176 11:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:29.176 11:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:29.176 11:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:29.176 11:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:29.176 11:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:29.176 11:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MTdhZGExZWQ1ZjRiZDNkN2NjNTIwYWJjNGZmNThkYmNmYmU3M2NkOTRjMjAxZTA5NTI3ZjE2YmMwNmUwYmVkYYD9mts=: 00:22:29.176 11:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MTdhZGExZWQ1ZjRiZDNkN2NjNTIwYWJjNGZmNThkYmNmYmU3M2NkOTRjMjAxZTA5NTI3ZjE2YmMwNmUwYmVkYYD9mts=: 00:22:30.117 11:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:30.117 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:30.117 11:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:30.117 11:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.117 11:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.117 11:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.117 11:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:30.117 11:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:30.117 11:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:30.117 11:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:30.117 11:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:22:30.117 11:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:30.117 11:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:30.117 11:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:30.117 11:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:30.117 11:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:30.117 11:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:30.117 11:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.117 11:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.379 11:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.379 11:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:30.379 11:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:30.379 11:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:30.640 00:22:30.901 11:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:30.901 11:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:30.901 11:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:30.901 11:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:30.901 11:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:30.901 11:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.901 11:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.901 11:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.901 11:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:30.901 { 00:22:30.901 "cntlid": 137, 00:22:30.901 "qid": 0, 00:22:30.901 "state": "enabled", 00:22:30.901 "thread": "nvmf_tgt_poll_group_000", 00:22:30.901 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:30.901 "listen_address": { 00:22:30.901 "trtype": "TCP", 00:22:30.901 "adrfam": "IPv4", 00:22:30.901 "traddr": "10.0.0.2", 00:22:30.901 "trsvcid": "4420" 00:22:30.901 }, 00:22:30.901 "peer_address": { 00:22:30.901 "trtype": "TCP", 00:22:30.901 "adrfam": "IPv4", 00:22:30.901 "traddr": "10.0.0.1", 00:22:30.901 "trsvcid": "59494" 00:22:30.901 }, 00:22:30.901 "auth": { 00:22:30.901 "state": "completed", 00:22:30.901 "digest": "sha512", 00:22:30.901 "dhgroup": "ffdhe8192" 00:22:30.901 } 00:22:30.901 } 00:22:30.901 ]' 00:22:30.901 11:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:30.901 11:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:30.901 11:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:31.165 11:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:31.165 11:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:31.165 11:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:31.165 11:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:31.165 11:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:31.165 11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTZhMDQzNTRmNzA5NDkxMzMzODY4ZmJiOWJjMWUxY2JlZDAyY2Q1ZWQ4MTRiYTQ1ZIPZQQ==: --dhchap-ctrl-secret DHHC-1:03:NjIxM2RmMDg5NjY1ZDYyNTY0M2Y1MThlNjQwYWQ4OTk5ODE1NGFjNDAyMmE0ZmM4MjZhY2UyODdjNjhkNzBkYYSweH0=: 00:22:31.165 11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YTZhMDQzNTRmNzA5NDkxMzMzODY4ZmJiOWJjMWUxY2JlZDAyY2Q1ZWQ4MTRiYTQ1ZIPZQQ==: --dhchap-ctrl-secret DHHC-1:03:NjIxM2RmMDg5NjY1ZDYyNTY0M2Y1MThlNjQwYWQ4OTk5ODE1NGFjNDAyMmE0ZmM4MjZhY2UyODdjNjhkNzBkYYSweH0=: 00:22:32.108 11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:32.108 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:32.108 11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:32.108 11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.108 11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.108 11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.108 11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:32.108 11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:32.108 11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:32.369 11:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:22:32.369 11:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:32.369 11:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:32.369 11:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:32.369 11:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:32.369 11:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:32.369 11:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:32.369 11:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.369 11:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.369 11:02:52 
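The block above is one iteration of the test's connect_authenticate loop, and the same pattern repeats for every (digest, dhgroup, key) combination: restrict the host-side initiator with bdev_nvme_set_options, allow the host NQN on the subsystem with nvmf_subsystem_add_host, then attach a controller so the DH-HMAC-CHAP handshake actually runs. A minimal standalone sketch of the iteration shown here, assuming key1/ckey1 are keyring names registered earlier in the test, the target answers on the default RPC socket, and the host application on /var/tmp/host.sock:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
    SUBNQN=nqn.2024-03.io.spdk:cnode0

    # Host side: restrict the initiator to one digest/DH-group combination.
    $RPC -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192

    # Target side: allow the host with a bidirectional key pair.
    $RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # Host side: attach a controller; this is where DH-HMAC-CHAP runs, and it
    # fails unless digest, DH group, and keys all match.
    $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1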
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.369 11:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:32.369 11:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:32.369 11:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:32.631 00:22:32.892 11:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:32.892 11:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:32.892 11:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:32.892 11:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:32.892 11:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:32.892 11:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.892 11:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.892 11:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.892 11:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:32.892 { 00:22:32.892 "cntlid": 139, 00:22:32.892 "qid": 0, 00:22:32.892 "state": "enabled", 00:22:32.892 "thread": "nvmf_tgt_poll_group_000", 00:22:32.892 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:32.892 "listen_address": { 00:22:32.892 "trtype": "TCP", 00:22:32.892 "adrfam": "IPv4", 00:22:32.892 "traddr": "10.0.0.2", 00:22:32.892 "trsvcid": "4420" 00:22:32.892 }, 00:22:32.892 "peer_address": { 00:22:32.892 "trtype": "TCP", 00:22:32.892 "adrfam": "IPv4", 00:22:32.892 "traddr": "10.0.0.1", 00:22:32.892 "trsvcid": "59514" 00:22:32.892 }, 00:22:32.892 "auth": { 00:22:32.892 "state": "completed", 00:22:32.892 "digest": "sha512", 00:22:32.892 "dhgroup": "ffdhe8192" 00:22:32.892 } 00:22:32.892 } 00:22:32.892 ]' 00:22:32.892 11:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:32.892 11:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:32.892 11:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:33.153 11:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:33.153 11:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:33.153 11:02:52 
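The JSON array above is the output of nvmf_subsystem_get_qpairs; the test pins the negotiated parameters by filtering it with jq. The three checks around it are equivalent to this condensed sketch, reusing RPC from the sketch above and fetching the qpairs once into a variable for readability:

    qpairs=$($RPC nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

A state of "completed" means the DH-HMAC-CHAP exchange finished on this qpair rather than being skipped or left pending.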
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:33.153 11:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:33.153 11:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:33.413 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjRiMzhiNDJmMzE5Nzk4OTE3MTViYTljZDA1ZGVhNjLzz9/s: --dhchap-ctrl-secret DHHC-1:02:NTcxNmE2YzBkNGRkYjI0OTU0MGQxYjM2NDk2MjA5Njg1YTIwMjc4Mzg5YjVlNDc0PfSEjQ==: 00:22:33.413 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:ZjRiMzhiNDJmMzE5Nzk4OTE3MTViYTljZDA1ZGVhNjLzz9/s: --dhchap-ctrl-secret DHHC-1:02:NTcxNmE2YzBkNGRkYjI0OTU0MGQxYjM2NDk2MjA5Njg1YTIwMjc4Mzg5YjVlNDc0PfSEjQ==: 00:22:33.985 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:33.985 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:33.985 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:33.985 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.985 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.985 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.985 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:33.985 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:33.985 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:34.253 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:22:34.253 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:34.253 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:34.253 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:34.253 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:34.253 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:34.253 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:34.253 11:02:54 
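The secrets passed to nvme-cli use the NVMe in-band authentication key format DHHC-1:<t>:<base64 data>:, where <t> indicates how the secret was transformed (00 for a plain secret; 01, 02, and 03 for SHA-256, SHA-384, and SHA-512 transformed keys, per the NVMe spec). A hedged sketch of generating such a secret and connecting with bidirectional authentication; the gen-dhchap-key flags are taken from recent nvme-cli and should be treated as an assumption, and <host-secret>/<ctrl-secret> are placeholders, not values from this run:

    # Generate a 32-byte, untransformed (-m 0) DHHC-1 secret for this host NQN.
    nvme gen-dhchap-key -m 0 -l 32 -n "$HOSTNQN"

    # --dhchap-secret authenticates the host; adding --dhchap-ctrl-secret also
    # makes the controller prove possession of its key.
    nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" \
        --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 \
        --dhchap-secret "DHHC-1:01:<host-secret>:" \
        --dhchap-ctrl-secret "DHHC-1:02:<ctrl-secret>:"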
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.253 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.253 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.253 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:34.253 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:34.253 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:34.825 00:22:34.825 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:34.825 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:34.825 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:35.086 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:35.086 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:35.086 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.086 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.086 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.086 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:35.086 { 00:22:35.086 "cntlid": 141, 00:22:35.086 "qid": 0, 00:22:35.086 "state": "enabled", 00:22:35.086 "thread": "nvmf_tgt_poll_group_000", 00:22:35.086 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:35.086 "listen_address": { 00:22:35.086 "trtype": "TCP", 00:22:35.086 "adrfam": "IPv4", 00:22:35.086 "traddr": "10.0.0.2", 00:22:35.086 "trsvcid": "4420" 00:22:35.086 }, 00:22:35.086 "peer_address": { 00:22:35.086 "trtype": "TCP", 00:22:35.086 "adrfam": "IPv4", 00:22:35.086 "traddr": "10.0.0.1", 00:22:35.086 "trsvcid": "59542" 00:22:35.086 }, 00:22:35.086 "auth": { 00:22:35.086 "state": "completed", 00:22:35.086 "digest": "sha512", 00:22:35.086 "dhgroup": "ffdhe8192" 00:22:35.086 } 00:22:35.086 } 00:22:35.086 ]' 00:22:35.086 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:35.086 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:35.086 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:35.086 11:02:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:35.086 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:35.086 11:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:35.086 11:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:35.086 11:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:35.346 11:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2JiNGE2ZTk1NGVhOTQ5ZjI4ZDRiZWI4NmNiNTQ0MGFhNTYyMWQ4YmU3NTBlZDJkyzgxOQ==: --dhchap-ctrl-secret DHHC-1:01:OWUzNjViMmQxMTdlOTI4N2UxODhmMDU2Mjc1NTg5NzLR97E+: 00:22:35.346 11:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:Y2JiNGE2ZTk1NGVhOTQ5ZjI4ZDRiZWI4NmNiNTQ0MGFhNTYyMWQ4YmU3NTBlZDJkyzgxOQ==: --dhchap-ctrl-secret DHHC-1:01:OWUzNjViMmQxMTdlOTI4N2UxODhmMDU2Mjc1NTg5NzLR97E+: 00:22:36.286 11:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:36.286 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:36.286 11:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:36.286 11:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.286 11:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.286 11:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.286 11:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:36.286 11:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:36.286 11:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:36.286 11:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:22:36.286 11:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:36.286 11:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:36.286 11:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:36.286 11:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:36.286 11:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:36.286 11:02:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:22:36.286 11:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.286 11:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.286 11:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.286 11:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:36.286 11:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:36.286 11:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:36.857 00:22:36.857 11:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:36.857 11:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:36.857 11:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:37.116 11:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:37.116 11:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:37.116 11:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.116 11:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.116 11:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.116 11:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:37.116 { 00:22:37.116 "cntlid": 143, 00:22:37.116 "qid": 0, 00:22:37.116 "state": "enabled", 00:22:37.116 "thread": "nvmf_tgt_poll_group_000", 00:22:37.116 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:37.116 "listen_address": { 00:22:37.116 "trtype": "TCP", 00:22:37.116 "adrfam": "IPv4", 00:22:37.116 "traddr": "10.0.0.2", 00:22:37.116 "trsvcid": "4420" 00:22:37.116 }, 00:22:37.116 "peer_address": { 00:22:37.116 "trtype": "TCP", 00:22:37.116 "adrfam": "IPv4", 00:22:37.116 "traddr": "10.0.0.1", 00:22:37.116 "trsvcid": "42174" 00:22:37.116 }, 00:22:37.116 "auth": { 00:22:37.116 "state": "completed", 00:22:37.116 "digest": "sha512", 00:22:37.116 "dhgroup": "ffdhe8192" 00:22:37.116 } 00:22:37.116 } 00:22:37.116 ]' 00:22:37.116 11:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:37.116 11:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:37.116 
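Note the asymmetry in this iteration: key3 has no companion controller key, so nvmf_subsystem_add_host and the attach are issued with --dhchap-key key3 alone, making authentication unidirectional (the host proves itself; the controller is not challenged). The ckey array shown above handles this with bash's ${var:+word} expansion, which emits the extra arguments only when a controller key exists for that index; a sketch, with rpc_cmd standing in for the harness helper of the same name:

    # ${ckeys[3]:+...} expands to nothing when ckeys[3] is unset or empty, so
    # no --dhchap-ctrlr-key argument is passed for key3.
    ckey=( ${ckeys[3]:+--dhchap-ctrlr-key "ckey3"} )
    rpc_cmd nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
        --dhchap-key key3 "${ckey[@]}"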
11:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:37.116 11:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:37.116 11:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:37.116 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:37.116 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:37.116 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:37.376 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTdhZGExZWQ1ZjRiZDNkN2NjNTIwYWJjNGZmNThkYmNmYmU3M2NkOTRjMjAxZTA5NTI3ZjE2YmMwNmUwYmVkYYD9mts=: 00:22:37.376 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MTdhZGExZWQ1ZjRiZDNkN2NjNTIwYWJjNGZmNThkYmNmYmU3M2NkOTRjMjAxZTA5NTI3ZjE2YmMwNmUwYmVkYYD9mts=: 00:22:37.946 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:38.207 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:38.207 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:38.207 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.207 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.207 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.207 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:22:38.207 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:22:38.207 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:22:38.207 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:38.207 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:38.207 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:38.207 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:22:38.207 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:38.207 11:02:58 
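For this final positive pass the host is configured to accept every supported digest and DH group at once; the IFS=, and printf %s pairs above are how the test joins its arrays into the comma-separated option values. An equivalent sketch, again reusing RPC from the first sketch:

    digests=(sha256 sha384 sha512)
    dhgroups=(null ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
    # "${arr[*]}" joins elements on the first character of IFS.
    dig=$(IFS=,; printf %s "${digests[*]}")
    dhg=$(IFS=,; printf %s "${dhgroups[*]}")
    $RPC -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests "$dig" --dhchap-dhgroups "$dhg"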
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:38.207 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:38.207 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:38.207 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:38.207 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:38.207 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.207 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.207 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.207 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:38.207 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:38.207 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:38.778 00:22:38.778 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:38.779 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:38.779 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:39.040 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:39.040 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:39.040 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.040 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.040 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.040 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:39.040 { 00:22:39.040 "cntlid": 145, 00:22:39.040 "qid": 0, 00:22:39.040 "state": "enabled", 00:22:39.040 "thread": "nvmf_tgt_poll_group_000", 00:22:39.040 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:39.040 "listen_address": { 00:22:39.040 "trtype": "TCP", 00:22:39.040 "adrfam": "IPv4", 00:22:39.040 "traddr": "10.0.0.2", 00:22:39.040 "trsvcid": "4420" 00:22:39.040 }, 00:22:39.040 "peer_address": { 00:22:39.040 
"trtype": "TCP", 00:22:39.040 "adrfam": "IPv4", 00:22:39.040 "traddr": "10.0.0.1", 00:22:39.040 "trsvcid": "42198" 00:22:39.040 }, 00:22:39.040 "auth": { 00:22:39.040 "state": "completed", 00:22:39.040 "digest": "sha512", 00:22:39.040 "dhgroup": "ffdhe8192" 00:22:39.040 } 00:22:39.040 } 00:22:39.040 ]' 00:22:39.040 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:39.040 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:39.040 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:39.040 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:39.040 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:39.040 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:39.040 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:39.040 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:39.300 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTZhMDQzNTRmNzA5NDkxMzMzODY4ZmJiOWJjMWUxY2JlZDAyY2Q1ZWQ4MTRiYTQ1ZIPZQQ==: --dhchap-ctrl-secret DHHC-1:03:NjIxM2RmMDg5NjY1ZDYyNTY0M2Y1MThlNjQwYWQ4OTk5ODE1NGFjNDAyMmE0ZmM4MjZhY2UyODdjNjhkNzBkYYSweH0=: 00:22:39.300 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:YTZhMDQzNTRmNzA5NDkxMzMzODY4ZmJiOWJjMWUxY2JlZDAyY2Q1ZWQ4MTRiYTQ1ZIPZQQ==: --dhchap-ctrl-secret DHHC-1:03:NjIxM2RmMDg5NjY1ZDYyNTY0M2Y1MThlNjQwYWQ4OTk5ODE1NGFjNDAyMmE0ZmM4MjZhY2UyODdjNjhkNzBkYYSweH0=: 00:22:39.872 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:40.132 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:40.132 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:40.132 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.132 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.132 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.132 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:22:40.132 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.132 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.132 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.132 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:22:40.132 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:40.132 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:22:40.132 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:40.132 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:40.132 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:40.132 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:40.132 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:22:40.132 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:22:40.133 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:22:40.393 request: 00:22:40.393 { 00:22:40.393 "name": "nvme0", 00:22:40.393 "trtype": "tcp", 00:22:40.393 "traddr": "10.0.0.2", 00:22:40.393 "adrfam": "ipv4", 00:22:40.393 "trsvcid": "4420", 00:22:40.393 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:40.393 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:40.393 "prchk_reftag": false, 00:22:40.393 "prchk_guard": false, 00:22:40.393 "hdgst": false, 00:22:40.393 "ddgst": false, 00:22:40.393 "dhchap_key": "key2", 00:22:40.393 "allow_unrecognized_csi": false, 00:22:40.393 "method": "bdev_nvme_attach_controller", 00:22:40.393 "req_id": 1 00:22:40.393 } 00:22:40.393 Got JSON-RPC error response 00:22:40.393 response: 00:22:40.393 { 00:22:40.393 "code": -5, 00:22:40.393 "message": "Input/output error" 00:22:40.393 } 00:22:40.393 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:40.393 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:40.653 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:40.653 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:40.653 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:40.653 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.653 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.653 11:03:00 
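This is the first of the negative tests: the subsystem only allows key1, the host dials in with key2, the DH-HMAC-CHAP handshake fails, and rpc.py surfaces it as JSON-RPC error code -5 (Input/output error). The NOT wrapper from autotest_common.sh inverts the exit status (with extra argument validation and exit-code bookkeeping via es), so the failed attach is the passing outcome. A simplified stand-in:

    # Succeed only when the wrapped command fails; enough to express the
    # assertion, though the real NOT helper does more bookkeeping.
    NOT() { ! "$@"; }
    NOT $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 \
        --dhchap-key key2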
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.653 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:40.653 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.653 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.653 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.653 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:40.653 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:40.653 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:40.653 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:40.653 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:40.653 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:40.653 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:40.653 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:40.653 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:40.653 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:40.914 request: 00:22:40.914 { 00:22:40.914 "name": "nvme0", 00:22:40.914 "trtype": "tcp", 00:22:40.914 "traddr": "10.0.0.2", 00:22:40.914 "adrfam": "ipv4", 00:22:40.914 "trsvcid": "4420", 00:22:40.914 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:40.914 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:40.914 "prchk_reftag": false, 00:22:40.914 "prchk_guard": false, 00:22:40.914 "hdgst": false, 00:22:40.914 "ddgst": false, 00:22:40.914 "dhchap_key": "key1", 00:22:40.914 "dhchap_ctrlr_key": "ckey2", 00:22:40.914 "allow_unrecognized_csi": false, 00:22:40.914 "method": "bdev_nvme_attach_controller", 00:22:40.914 "req_id": 1 00:22:40.914 } 00:22:40.914 Got JSON-RPC error response 00:22:40.914 response: 00:22:40.914 { 00:22:40.914 "code": -5, 00:22:40.914 "message": "Input/output error" 00:22:40.914 } 00:22:41.176 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:41.176 11:03:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:41.176 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:41.176 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:41.176 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:41.176 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.176 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.176 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.176 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:22:41.176 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.176 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.176 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.176 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:41.176 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:41.176 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:41.176 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:41.176 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:41.176 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:41.176 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:41.176 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:41.176 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:41.176 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:41.437 request: 00:22:41.437 { 00:22:41.437 "name": "nvme0", 00:22:41.437 "trtype": "tcp", 00:22:41.437 "traddr": "10.0.0.2", 00:22:41.437 "adrfam": "ipv4", 00:22:41.437 "trsvcid": "4420", 00:22:41.437 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:41.437 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:41.437 "prchk_reftag": false, 00:22:41.437 "prchk_guard": false, 00:22:41.437 "hdgst": false, 00:22:41.437 "ddgst": false, 00:22:41.437 "dhchap_key": "key1", 00:22:41.437 "dhchap_ctrlr_key": "ckey1", 00:22:41.437 "allow_unrecognized_csi": false, 00:22:41.437 "method": "bdev_nvme_attach_controller", 00:22:41.437 "req_id": 1 00:22:41.437 } 00:22:41.437 Got JSON-RPC error response 00:22:41.437 response: 00:22:41.437 { 00:22:41.437 "code": -5, 00:22:41.437 "message": "Input/output error" 00:22:41.437 } 00:22:41.698 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:41.698 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:41.698 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:41.698 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:41.698 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:41.698 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.698 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.698 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.698 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 1854258 00:22:41.698 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 1854258 ']' 00:22:41.698 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 1854258 00:22:41.698 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:22:41.698 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:41.698 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1854258 00:22:41.698 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:41.698 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:41.698 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1854258' 00:22:41.698 killing process with pid 1854258 00:22:41.698 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 1854258 00:22:41.698 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 1854258 00:22:41.698 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:22:41.698 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:41.698 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:41.698 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:41.698 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # nvmfpid=1881649 00:22:41.698 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # waitforlisten 1881649 00:22:41.698 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:22:41.698 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1881649 ']' 00:22:41.698 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:41.698 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:41.698 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:41.698 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:41.698 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.639 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:42.639 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:22:42.639 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:42.639 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:42.639 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.639 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:42.639 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:22:42.639 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 1881649 00:22:42.639 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1881649 ']' 00:22:42.639 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:42.639 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:42.639 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:42.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
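[Annotation] The trace above restarts the target via nvmfappstart --wait-for-rpc -L nvmf_auth: nvmf_tgt comes up paused so DH-HMAC-CHAP keys can be registered before any subsystem starts listening. A minimal sketch of the equivalent manual sequence, assuming the same checkout layout as the paths in this log (framework_start_init is the stock SPDK RPC that resumes a target started with --wait-for-rpc):

    # start the target paused, with nvmf_auth debug logging enabled
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
    nvmfpid=$!
    # wait for the UNIX-domain RPC socket before issuing any RPCs
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done
    # ... register keyring entries here, while initialization is held ...
    ./scripts/rpc.py -s /var/tmp/spdk.sock framework_start_init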
00:22:42.639 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:42.639 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.900 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:42.900 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:22:42.900 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:22:42.900 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.900 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.900 null0 00:22:42.900 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.900 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:42.900 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.G6N 00:22:42.900 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.900 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.900 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.900 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.QiW ]] 00:22:42.900 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.QiW 00:22:42.900 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.900 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.900 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.900 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:42.900 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.crj 00:22:42.900 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.900 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.900 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.900 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.xsq ]] 00:22:42.900 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.xsq 00:22:42.900 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.900 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.900 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.900 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:42.900 11:03:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.JV9 00:22:42.900 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.900 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.900 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.900 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.38e ]] 00:22:42.900 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.38e 00:22:42.900 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.900 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.900 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.900 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:42.900 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.IUF 00:22:42.900 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.900 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.900 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.900 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:22:42.900 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:22:42.900 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:42.900 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:42.900 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:42.900 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:42.900 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:42.900 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:22:42.900 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.900 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.900 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.900 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:42.900 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
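[Annotation] The keyring_file_add_key / nvmf_subsystem_add_host pairs above are the core of each connect_authenticate round: the target learns a secret under a keyring name, the host NQN is bound to that name, and the host-side bdev attaches with the matching --dhchap-key. A condensed sketch of the round just traced (sha512 / ffdhe8192 / key3; all commands are taken from the trace itself, with the rpc.py path shortened for readability):

    # target side: register the key file and allow this host to use it
    ./scripts/rpc.py keyring_file_add_key key3 /tmp/spdk.key-sha512.IUF
    ./scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
        --dhchap-key key3
    # host side: attach; DH-HMAC-CHAP runs during controller initialization
    ./scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
        -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3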
00:22:42.900 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:43.843 nvme0n1 00:22:43.843 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:43.843 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:43.843 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:44.103 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:44.103 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:44.104 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.104 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.104 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.104 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:44.104 { 00:22:44.104 "cntlid": 1, 00:22:44.104 "qid": 0, 00:22:44.104 "state": "enabled", 00:22:44.104 "thread": "nvmf_tgt_poll_group_000", 00:22:44.104 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:44.104 "listen_address": { 00:22:44.104 "trtype": "TCP", 00:22:44.104 "adrfam": "IPv4", 00:22:44.104 "traddr": "10.0.0.2", 00:22:44.104 "trsvcid": "4420" 00:22:44.104 }, 00:22:44.104 "peer_address": { 00:22:44.104 "trtype": "TCP", 00:22:44.104 "adrfam": "IPv4", 00:22:44.104 "traddr": "10.0.0.1", 00:22:44.104 "trsvcid": "42246" 00:22:44.104 }, 00:22:44.104 "auth": { 00:22:44.104 "state": "completed", 00:22:44.104 "digest": "sha512", 00:22:44.104 "dhgroup": "ffdhe8192" 00:22:44.104 } 00:22:44.104 } 00:22:44.104 ]' 00:22:44.104 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:44.104 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:44.104 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:44.104 11:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:44.104 11:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:44.104 11:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:44.104 11:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:44.104 11:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:44.365 11:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MTdhZGExZWQ1ZjRiZDNkN2NjNTIwYWJjNGZmNThkYmNmYmU3M2NkOTRjMjAxZTA5NTI3ZjE2YmMwNmUwYmVkYYD9mts=: 00:22:44.365 11:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MTdhZGExZWQ1ZjRiZDNkN2NjNTIwYWJjNGZmNThkYmNmYmU3M2NkOTRjMjAxZTA5NTI3ZjE2YmMwNmUwYmVkYYD9mts=: 00:22:45.308 11:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:45.308 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:45.308 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:45.308 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.308 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.308 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.308 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:22:45.308 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.308 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.308 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.308 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:22:45.308 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:22:45.308 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:22:45.308 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:45.308 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:22:45.308 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:45.308 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:45.308 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:45.308 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:45.308 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:45.308 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:45.308 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:45.569 request: 00:22:45.569 { 00:22:45.569 "name": "nvme0", 00:22:45.569 "trtype": "tcp", 00:22:45.569 "traddr": "10.0.0.2", 00:22:45.569 "adrfam": "ipv4", 00:22:45.569 "trsvcid": "4420", 00:22:45.569 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:45.569 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:45.569 "prchk_reftag": false, 00:22:45.569 "prchk_guard": false, 00:22:45.569 "hdgst": false, 00:22:45.569 "ddgst": false, 00:22:45.569 "dhchap_key": "key3", 00:22:45.569 "allow_unrecognized_csi": false, 00:22:45.569 "method": "bdev_nvme_attach_controller", 00:22:45.569 "req_id": 1 00:22:45.569 } 00:22:45.569 Got JSON-RPC error response 00:22:45.569 response: 00:22:45.569 { 00:22:45.569 "code": -5, 00:22:45.569 "message": "Input/output error" 00:22:45.569 } 00:22:45.569 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:45.569 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:45.569 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:45.569 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:45.569 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:22:45.569 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:22:45.569 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:45.569 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:45.830 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:22:45.830 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:45.830 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:22:45.830 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:45.830 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:45.830 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:45.830 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:45.830 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:45.830 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:45.830 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:45.830 request: 00:22:45.830 { 00:22:45.830 "name": "nvme0", 00:22:45.830 "trtype": "tcp", 00:22:45.830 "traddr": "10.0.0.2", 00:22:45.830 "adrfam": "ipv4", 00:22:45.830 "trsvcid": "4420", 00:22:45.830 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:45.830 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:45.830 "prchk_reftag": false, 00:22:45.830 "prchk_guard": false, 00:22:45.830 "hdgst": false, 00:22:45.830 "ddgst": false, 00:22:45.830 "dhchap_key": "key3", 00:22:45.830 "allow_unrecognized_csi": false, 00:22:45.830 "method": "bdev_nvme_attach_controller", 00:22:45.830 "req_id": 1 00:22:45.830 } 00:22:45.830 Got JSON-RPC error response 00:22:45.830 response: 00:22:45.830 { 00:22:45.830 "code": -5, 00:22:45.830 "message": "Input/output error" 00:22:45.830 } 00:22:45.830 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:45.830 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:45.830 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:45.830 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:45.830 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:22:45.830 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:22:45.830 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:22:45.830 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:45.830 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:45.830 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:46.090 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:46.090 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.090 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.090 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.090 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:46.090 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.090 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.090 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.090 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:46.091 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:46.091 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:46.091 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:46.091 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:46.091 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:46.091 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:46.091 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:46.091 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:46.091 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:46.351 request: 00:22:46.351 { 00:22:46.351 "name": "nvme0", 00:22:46.351 "trtype": "tcp", 00:22:46.351 "traddr": "10.0.0.2", 00:22:46.351 "adrfam": "ipv4", 00:22:46.351 "trsvcid": "4420", 00:22:46.351 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:46.351 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:46.351 "prchk_reftag": false, 00:22:46.351 "prchk_guard": false, 00:22:46.351 "hdgst": false, 00:22:46.351 "ddgst": false, 00:22:46.351 "dhchap_key": "key0", 00:22:46.351 "dhchap_ctrlr_key": "key1", 00:22:46.351 "allow_unrecognized_csi": false, 00:22:46.351 "method": "bdev_nvme_attach_controller", 00:22:46.351 "req_id": 1 00:22:46.351 } 00:22:46.351 Got JSON-RPC error response 00:22:46.351 response: 00:22:46.351 { 00:22:46.351 "code": -5, 00:22:46.351 "message": "Input/output error" 00:22:46.351 } 00:22:46.351 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:46.351 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:46.351 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:46.351 11:03:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:46.351 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:22:46.351 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:22:46.351 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:22:46.612 nvme0n1 00:22:46.612 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:22:46.612 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:22:46.612 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:46.874 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:46.874 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:46.874 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:47.134 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:22:47.134 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.134 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.134 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.134 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:22:47.134 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:47.134 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:48.075 nvme0n1 00:22:48.075 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:22:48.075 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:22:48.075 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:48.075 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:48.075 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:48.075 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.075 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:48.075 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.075 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:22:48.075 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:22:48.075 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:48.336 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:48.336 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2JiNGE2ZTk1NGVhOTQ5ZjI4ZDRiZWI4NmNiNTQ0MGFhNTYyMWQ4YmU3NTBlZDJkyzgxOQ==: --dhchap-ctrl-secret DHHC-1:03:MTdhZGExZWQ1ZjRiZDNkN2NjNTIwYWJjNGZmNThkYmNmYmU3M2NkOTRjMjAxZTA5NTI3ZjE2YmMwNmUwYmVkYYD9mts=: 00:22:48.336 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:Y2JiNGE2ZTk1NGVhOTQ5ZjI4ZDRiZWI4NmNiNTQ0MGFhNTYyMWQ4YmU3NTBlZDJkyzgxOQ==: --dhchap-ctrl-secret DHHC-1:03:MTdhZGExZWQ1ZjRiZDNkN2NjNTIwYWJjNGZmNThkYmNmYmU3M2NkOTRjMjAxZTA5NTI3ZjE2YmMwNmUwYmVkYYD9mts=: 00:22:48.908 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:22:48.908 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:22:48.908 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:22:48.908 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:22:48.908 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:22:48.908 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:22:48.908 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:22:48.908 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:48.908 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:49.170 11:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1 00:22:49.170 11:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:49.170 11:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:22:49.170 11:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:49.170 11:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:49.170 11:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:49.170 11:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:49.170 11:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:22:49.170 11:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:49.170 11:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:49.742 request: 00:22:49.742 { 00:22:49.742 "name": "nvme0", 00:22:49.742 "trtype": "tcp", 00:22:49.742 "traddr": "10.0.0.2", 00:22:49.742 "adrfam": "ipv4", 00:22:49.742 "trsvcid": "4420", 00:22:49.742 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:49.742 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:49.742 "prchk_reftag": false, 00:22:49.742 "prchk_guard": false, 00:22:49.742 "hdgst": false, 00:22:49.742 "ddgst": false, 00:22:49.742 "dhchap_key": "key1", 00:22:49.742 "allow_unrecognized_csi": false, 00:22:49.742 "method": "bdev_nvme_attach_controller", 00:22:49.742 "req_id": 1 00:22:49.742 } 00:22:49.742 Got JSON-RPC error response 00:22:49.742 response: 00:22:49.742 { 00:22:49.742 "code": -5, 00:22:49.742 "message": "Input/output error" 00:22:49.742 } 00:22:49.742 11:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:49.742 11:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:49.742 11:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:49.742 11:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:49.742 11:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:49.742 11:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:49.742 11:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:50.684 nvme0n1 00:22:50.684 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:22:50.684 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:22:50.684 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:50.684 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:50.684 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:50.684 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:50.945 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:50.945 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.945 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.945 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.945 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:22:50.945 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:22:50.945 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:22:51.205 nvme0n1 00:22:51.205 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:22:51.205 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:22:51.205 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:51.205 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:51.205 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:51.205 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:51.466 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:51.466 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.466 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:51.466 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.466 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:ZjRiMzhiNDJmMzE5Nzk4OTE3MTViYTljZDA1ZGVhNjLzz9/s: '' 2s 00:22:51.466 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:22:51.466 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:22:51.466 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:ZjRiMzhiNDJmMzE5Nzk4OTE3MTViYTljZDA1ZGVhNjLzz9/s: 00:22:51.466 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:22:51.466 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:22:51.466 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:22:51.466 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:ZjRiMzhiNDJmMzE5Nzk4OTE3MTViYTljZDA1ZGVhNjLzz9/s: ]] 00:22:51.466 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:ZjRiMzhiNDJmMzE5Nzk4OTE3MTViYTljZDA1ZGVhNjLzz9/s: 00:22:51.466 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:22:51.466 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:22:51.466 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:22:53.377 11:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:22:53.377 11:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:22:53.377 11:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:22:53.377 11:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:22:53.637 11:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:22:53.637 11:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:22:53.637 11:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:22:53.638 11:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key key2 00:22:53.638 11:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.638 11:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:53.638 11:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.638 11:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:Y2JiNGE2ZTk1NGVhOTQ5ZjI4ZDRiZWI4NmNiNTQ0MGFhNTYyMWQ4YmU3NTBlZDJkyzgxOQ==: 2s 00:22:53.638 11:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:22:53.638 11:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:22:53.638 11:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:22:53.638 11:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:Y2JiNGE2ZTk1NGVhOTQ5ZjI4ZDRiZWI4NmNiNTQ0MGFhNTYyMWQ4YmU3NTBlZDJkyzgxOQ==: 00:22:53.638 11:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:22:53.638 11:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:22:53.638 11:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:22:53.638 11:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:Y2JiNGE2ZTk1NGVhOTQ5ZjI4ZDRiZWI4NmNiNTQ0MGFhNTYyMWQ4YmU3NTBlZDJkyzgxOQ==: ]] 00:22:53.638 11:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:Y2JiNGE2ZTk1NGVhOTQ5ZjI4ZDRiZWI4NmNiNTQ0MGFhNTYyMWQ4YmU3NTBlZDJkyzgxOQ==: 00:22:53.638 11:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:22:53.638 11:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:22:55.548 11:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:22:55.548 11:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:22:55.548 11:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:22:55.548 11:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:22:55.548 11:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:22:55.548 11:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:22:55.548 11:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:22:55.548 11:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:55.548 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:55.548 11:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:55.548 11:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.548 11:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:55.548 11:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.548 11:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:55.548 11:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:55.548 11:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:56.487 nvme0n1 00:22:56.487 11:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:56.487 11:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.487 11:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:56.487 11:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.487 11:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:56.487 11:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:57.058 11:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:22:57.058 11:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:22:57.058 11:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:57.318 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:57.318 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:57.318 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.318 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:57.318 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.318 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:22:57.318 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:22:57.318 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:22:57.318 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:22:57.318 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:22:57.578 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:57.578 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:57.578 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.578 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:57.578 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.578 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:57.578 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:57.578 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:57.578 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:22:57.578 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:57.578 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:22:57.578 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:57.578 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:57.578 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:58.150 request: 00:22:58.150 { 00:22:58.150 "name": "nvme0", 00:22:58.150 "dhchap_key": "key1", 00:22:58.150 "dhchap_ctrlr_key": "key3", 00:22:58.150 "method": "bdev_nvme_set_keys", 00:22:58.150 "req_id": 1 00:22:58.150 } 00:22:58.150 Got JSON-RPC error response 00:22:58.150 response: 00:22:58.150 { 00:22:58.150 "code": -13, 00:22:58.150 "message": "Permission denied" 00:22:58.150 } 00:22:58.150 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:58.150 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:58.150 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:58.150 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:58.150 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:22:58.150 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:22:58.150 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:58.150 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:22:58.150 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:22:59.532 11:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:22:59.533 11:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:22:59.533 11:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:59.533 11:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:22:59.533 11:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:59.533 11:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.533 11:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:59.533 11:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.533 11:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:59.533 11:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:59.533 11:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:00.474 nvme0n1 00:23:00.474 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:00.474 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.474 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:00.474 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.474 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:23:00.474 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:23:00.474 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:23:00.474 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 
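The negative-path check continuing below repeats the pattern already rejected above: the host may only move to the exact key pair the target was just given. The legal rotation order traced throughout this test is target first, host second, with matching pairs; a minimal sketch using the two RPCs from this log, assuming keys key0-key3 are already loaded in the keyring as earlier in the test:

  # 1) rotate the target's DH-CHAP keys for this host entry first
  scripts/rpc.py nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
      --dhchap-key key2 --dhchap-ctrlr-key key3
  # 2) then move the host-side controller to the same pair; any other
  #    combination fails with JSON-RPC error -13 (Permission denied), as seen above
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
      --dhchap-key key2 --dhchap-ctrlr-key key3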
00:23:00.474 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:00.474 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:23:00.474 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:00.474 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:23:00.474 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:23:00.735 request: 00:23:00.735 { 00:23:00.735 "name": "nvme0", 00:23:00.735 "dhchap_key": "key2", 00:23:00.735 "dhchap_ctrlr_key": "key0", 00:23:00.735 "method": "bdev_nvme_set_keys", 00:23:00.735 "req_id": 1 00:23:00.735 } 00:23:00.735 Got JSON-RPC error response 00:23:00.735 response: 00:23:00.735 { 00:23:00.735 "code": -13, 00:23:00.735 "message": "Permission denied" 00:23:00.735 } 00:23:00.735 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:23:00.735 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:00.735 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:00.735 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:00.735 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:23:00.735 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:23:00.735 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:00.996 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:23:00.996 11:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:23:01.935 11:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:23:01.935 11:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:23:01.935 11:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:02.196 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:23:02.196 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:23:02.196 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:23:02.196 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1854513 00:23:02.196 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 1854513 ']' 00:23:02.196 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 1854513 00:23:02.196 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:23:02.196 
11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:02.197 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1854513 00:23:02.197 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:02.197 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:02.197 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1854513' 00:23:02.197 killing process with pid 1854513 00:23:02.197 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 1854513 00:23:02.197 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 1854513 00:23:02.457 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:23:02.458 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:23:02.458 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:23:02.458 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:02.458 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:23:02.458 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:02.458 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:02.458 rmmod nvme_tcp 00:23:02.458 rmmod nvme_fabrics 00:23:02.458 rmmod nvme_keyring 00:23:02.458 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:02.458 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:23:02.458 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:23:02.458 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@515 -- # '[' -n 1881649 ']' 00:23:02.458 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # killprocess 1881649 00:23:02.458 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 1881649 ']' 00:23:02.458 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 1881649 00:23:02.458 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:23:02.458 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:02.458 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1881649 00:23:02.458 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:02.458 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:02.458 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1881649' 00:23:02.458 killing process with pid 1881649 00:23:02.458 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 1881649 00:23:02.458 11:03:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 1881649 00:23:02.720 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:23:02.720 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:23:02.720 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:23:02.720 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:23:02.720 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # iptables-save 00:23:02.720 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:23:02.720 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # iptables-restore 00:23:02.720 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:02.720 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:02.720 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:02.720 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:02.720 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:04.641 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:04.641 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.G6N /tmp/spdk.key-sha256.crj /tmp/spdk.key-sha384.JV9 /tmp/spdk.key-sha512.IUF /tmp/spdk.key-sha512.QiW /tmp/spdk.key-sha384.xsq /tmp/spdk.key-sha256.38e '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:23:04.981 00:23:04.981 real 2m45.252s 00:23:04.981 user 6m8.209s 00:23:04.981 sys 0m24.272s 00:23:04.981 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:04.981 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:04.981 ************************************ 00:23:04.981 END TEST nvmf_auth_target 00:23:04.981 ************************************ 00:23:04.981 11:03:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:23:04.981 11:03:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:23:04.981 11:03:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:23:04.981 11:03:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:04.981 11:03:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:04.981 ************************************ 00:23:04.982 START TEST nvmf_bdevio_no_huge 00:23:04.982 ************************************ 00:23:04.982 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:23:04.982 * Looking for test storage... 
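nvmf_bdevio_no_huge, starting here, re-runs the bdevio suite against the TCP target with hugepages disabled: both applications are given --no-huge plus a fixed memory size instead of a hugepage pool. Condensed from the invocations captured later in this test (workspace paths as in this log), the two launches amount to this sketch:

  # target: no hugepages, 1024 MiB of heap, core mask 0x78 (cores 3-6)
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78
  # bdevio: same mode, bdev config passed as JSON on fd 62
  ./test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024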
00:23:04.982 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:04.982 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:04.982 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lcov --version 00:23:04.982 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:04.982 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:04.982 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:04.982 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:04.982 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:04.982 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:23:04.982 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:23:04.982 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:23:04.982 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:23:04.982 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:23:04.982 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:23:04.982 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:23:04.982 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:04.982 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:23:04.982 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:23:04.982 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:04.982 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:04.982 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:23:04.982 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:23:04.982 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:04.982 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:23:04.982 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:23:04.982 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:23:04.982 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:23:04.982 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:04.982 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:23:04.982 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:23:04.982 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:04.982 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:04.982 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:23:04.982 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:04.982 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:04.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:04.982 --rc genhtml_branch_coverage=1 00:23:04.982 --rc genhtml_function_coverage=1 00:23:04.982 --rc genhtml_legend=1 00:23:04.982 --rc geninfo_all_blocks=1 00:23:04.982 --rc geninfo_unexecuted_blocks=1 00:23:04.982 00:23:04.982 ' 00:23:04.982 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:04.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:04.982 --rc genhtml_branch_coverage=1 00:23:04.982 --rc genhtml_function_coverage=1 00:23:04.982 --rc genhtml_legend=1 00:23:04.982 --rc geninfo_all_blocks=1 00:23:04.982 --rc geninfo_unexecuted_blocks=1 00:23:04.982 00:23:04.982 ' 00:23:04.982 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:04.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:04.982 --rc genhtml_branch_coverage=1 00:23:04.982 --rc genhtml_function_coverage=1 00:23:04.982 --rc genhtml_legend=1 00:23:04.982 --rc geninfo_all_blocks=1 00:23:04.982 --rc geninfo_unexecuted_blocks=1 00:23:04.982 00:23:04.982 ' 00:23:04.982 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:04.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:04.982 --rc genhtml_branch_coverage=1 00:23:04.982 --rc genhtml_function_coverage=1 00:23:04.982 --rc genhtml_legend=1 00:23:04.982 --rc geninfo_all_blocks=1 00:23:04.982 --rc geninfo_unexecuted_blocks=1 00:23:04.982 00:23:04.982 ' 00:23:04.982 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:04.982 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:23:04.982 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:04.982 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:04.982 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:04.982 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:04.982 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:04.982 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:04.982 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:04.982 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:04.982 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:04.982 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:04.982 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:04.982 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:04.982 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:04.982 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:04.982 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:04.982 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:04.982 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:04.982 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:23:04.982 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:04.982 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:04.982 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:04.982 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:04.982 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:04.982 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:04.982 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:23:04.982 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:04.982 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:23:04.982 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:04.982 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:04.982 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:04.982 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:04.982 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:04.982 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:23:05.316 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:05.316 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:05.316 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:05.316 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:05.316 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:05.316 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:05.316 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:23:05.316 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:23:05.316 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:05.316 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # prepare_net_devs 00:23:05.316 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@436 -- # local -g is_hw=no 00:23:05.316 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # remove_spdk_ns 00:23:05.316 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:05.316 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:05.316 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:05.316 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:23:05.316 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:23:05.316 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:23:05.316 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:11.897 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:11.897 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:23:11.897 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:11.897 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:11.897 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:11.897 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:11.897 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:11.897 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:23:11.897 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:11.897 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:23:11.897 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:23:11.897 
11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:23:11.897 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:23:11.897 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:23:11.897 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:23:11.897 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:11.897 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:11.897 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:11.897 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:11.897 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:11.897 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:11.897 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:11.897 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:11.897 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:11.897 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:11.897 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:11.897 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:11.897 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:11.897 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:11.897 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:11.897 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:11.897 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:11.897 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:11.897 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:11.897 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:11.897 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:11.897 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:11.897 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:11.897 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:11.897 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:23:11.897 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:11.897 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:11.897 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:11.897 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:11.897 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:11.897 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:11.897 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:11.897 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:11.897 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:11.897 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:11.897 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:11.897 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:11.897 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:11.897 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:11.897 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:11.897 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:11.897 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:11.897 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:11.897 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:11.897 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:11.897 Found net devices under 0000:31:00.0: cvl_0_0 00:23:11.897 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:11.897 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:11.897 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:11.897 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:11.897 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:11.897 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:11.897 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:11.897 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:11.897 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:11.897 Found net devices under 0000:31:00.1: cvl_0_1 00:23:11.897 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:11.897 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:23:11.897 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # is_hw=yes 00:23:11.897 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:23:11.897 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:23:11.897 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:23:11.897 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:11.897 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:11.897 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:11.897 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:11.897 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:11.897 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:11.897 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:11.897 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:11.897 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:11.897 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:11.897 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:11.897 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:11.897 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:11.897 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:11.897 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:11.897 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:11.897 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:11.897 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:11.897 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:12.157 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:12.157 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:12.157 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:12.157 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:12.157 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:12.157 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.683 ms 00:23:12.157 00:23:12.157 --- 10.0.0.2 ping statistics --- 00:23:12.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:12.157 rtt min/avg/max/mdev = 0.683/0.683/0.683/0.000 ms 00:23:12.157 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:12.157 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:12.157 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.289 ms 00:23:12.157 00:23:12.157 --- 10.0.0.1 ping statistics --- 00:23:12.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:12.157 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:23:12.157 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:12.157 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # return 0 00:23:12.157 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:23:12.157 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:12.157 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:23:12.157 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:23:12.157 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:12.157 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:23:12.157 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:23:12.157 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:23:12.157 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:12.157 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:12.157 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:12.157 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # nvmfpid=1890329 00:23:12.157 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # waitforlisten 1890329 00:23:12.157 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:23:12.157 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 1890329 ']' 00:23:12.157 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:12.157 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- common/autotest_common.sh@836 -- # local max_retries=100 00:23:12.157 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:12.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:12.158 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:12.158 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:12.158 [2024-10-09 11:03:32.035881] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:23:12.158 [2024-10-09 11:03:32.035949] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:23:12.418 [2024-10-09 11:03:32.191270] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:12.418 [2024-10-09 11:03:32.228546] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:12.418 [2024-10-09 11:03:32.272230] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:12.418 [2024-10-09 11:03:32.272268] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:12.418 [2024-10-09 11:03:32.272277] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:12.418 [2024-10-09 11:03:32.272284] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:12.418 [2024-10-09 11:03:32.272290] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
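The four "Reactor started" notices that follow correspond to the -m 0x78 core mask passed to nvmf_tgt (bits 3-6 set). As an illustration, a mask like this can be decoded with a one-liner:

  # 0x78 == 0b01111000 -> prints cores 3, 4, 5, 6
  mask=0x78
  for i in $(seq 0 31); do (( (mask >> i) & 1 )) && echo "reactor expected on core $i"; done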
00:23:12.418 [2024-10-09 11:03:32.273742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:12.418 [2024-10-09 11:03:32.273873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:23:12.418 [2024-10-09 11:03:32.274029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:12.418 [2024-10-09 11:03:32.274030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:23:12.989 11:03:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:12.989 11:03:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:23:12.989 11:03:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:12.989 11:03:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:12.989 11:03:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:12.989 11:03:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:12.989 11:03:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:12.989 11:03:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.989 11:03:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:12.989 [2024-10-09 11:03:32.918081] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:12.989 11:03:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.989 11:03:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:12.989 11:03:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.989 11:03:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:12.989 Malloc0 00:23:12.989 11:03:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.989 11:03:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:12.989 11:03:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.989 11:03:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:12.989 11:03:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.989 11:03:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:12.989 11:03:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.989 11:03:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:12.989 11:03:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.989 11:03:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:23:12.989 11:03:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.989 11:03:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:12.989 [2024-10-09 11:03:32.972159] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:12.989 11:03:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.989 11:03:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:23:12.989 11:03:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:23:12.989 11:03:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # config=() 00:23:12.989 11:03:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # local subsystem config 00:23:12.989 11:03:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:23:12.989 11:03:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:23:12.989 { 00:23:12.989 "params": { 00:23:12.989 "name": "Nvme$subsystem", 00:23:12.989 "trtype": "$TEST_TRANSPORT", 00:23:12.989 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:12.989 "adrfam": "ipv4", 00:23:12.989 "trsvcid": "$NVMF_PORT", 00:23:12.989 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:12.989 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:12.989 "hdgst": ${hdgst:-false}, 00:23:12.989 "ddgst": ${ddgst:-false} 00:23:12.989 }, 00:23:12.989 "method": "bdev_nvme_attach_controller" 00:23:12.989 } 00:23:12.989 EOF 00:23:12.989 )") 00:23:12.989 11:03:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # cat 00:23:12.989 11:03:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # jq . 00:23:13.250 11:03:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@583 -- # IFS=, 00:23:13.250 11:03:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:23:13.250 "params": { 00:23:13.250 "name": "Nvme1", 00:23:13.250 "trtype": "tcp", 00:23:13.250 "traddr": "10.0.0.2", 00:23:13.250 "adrfam": "ipv4", 00:23:13.250 "trsvcid": "4420", 00:23:13.250 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:13.250 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:13.250 "hdgst": false, 00:23:13.250 "ddgst": false 00:23:13.251 }, 00:23:13.251 "method": "bdev_nvme_attach_controller" 00:23:13.251 }' 00:23:13.251 [2024-10-09 11:03:33.029851] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:23:13.251 [2024-10-09 11:03:33.029923] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1890593 ] 00:23:13.251 [2024-10-09 11:03:33.176592] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
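The JSON blob printed above by gen_nvmf_target_json is the bdev config that bdevio reads from /dev/fd/62; it is roughly the file form of a single attach RPC, which could equally be issued against a running app. A sketch, with the same flags this log uses elsewhere for bdev_nvme_attach_controller:

  scripts/rpc.py bdev_nvme_attach_controller -b Nvme1 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1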
00:23:13.251 [2024-10-09 11:03:33.200051] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:13.251 [2024-10-09 11:03:33.241975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:13.251 [2024-10-09 11:03:33.242096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:13.251 [2024-10-09 11:03:33.242099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:13.822 I/O targets: 00:23:13.822 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:23:13.822 00:23:13.822 00:23:13.822 CUnit - A unit testing framework for C - Version 2.1-3 00:23:13.822 http://cunit.sourceforge.net/ 00:23:13.822 00:23:13.822 00:23:13.822 Suite: bdevio tests on: Nvme1n1 00:23:13.822 Test: blockdev write read block ...passed 00:23:13.822 Test: blockdev write zeroes read block ...passed 00:23:13.822 Test: blockdev write zeroes read no split ...passed 00:23:13.822 Test: blockdev write zeroes read split ...passed 00:23:13.822 Test: blockdev write zeroes read split partial ...passed 00:23:13.822 Test: blockdev reset ...[2024-10-09 11:03:33.747903] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:13.822 [2024-10-09 11:03:33.747972] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e84250 (9): Bad file descriptor 00:23:13.822 [2024-10-09 11:03:33.759881] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:23:13.822 passed 00:23:13.822 Test: blockdev write read 8 blocks ...passed 00:23:13.822 Test: blockdev write read size > 128k ...passed 00:23:13.822 Test: blockdev write read invalid size ...passed 00:23:13.822 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:23:13.822 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:23:13.822 Test: blockdev write read max offset ...passed 00:23:14.083 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:23:14.083 Test: blockdev writev readv 8 blocks ...passed 00:23:14.083 Test: blockdev writev readv 30 x 1block ...passed 00:23:14.083 Test: blockdev writev readv block ...passed 00:23:14.083 Test: blockdev writev readv size > 128k ...passed 00:23:14.083 Test: blockdev writev readv size > 128k in two iovs ...passed 00:23:14.084 Test: blockdev comparev and writev ...[2024-10-09 11:03:33.938231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:14.084 [2024-10-09 11:03:33.938261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.084 [2024-10-09 11:03:33.938272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:14.084 [2024-10-09 11:03:33.938278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:14.084 [2024-10-09 11:03:33.938683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:14.084 [2024-10-09 11:03:33.938692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:14.084 [2024-10-09 11:03:33.938702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x200 00:23:14.084 [2024-10-09 11:03:33.938708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:14.084 [2024-10-09 11:03:33.939036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:14.084 [2024-10-09 11:03:33.939044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:14.084 [2024-10-09 11:03:33.939054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:14.084 [2024-10-09 11:03:33.939059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:14.084 [2024-10-09 11:03:33.939407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:14.084 [2024-10-09 11:03:33.939416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:14.084 [2024-10-09 11:03:33.939428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:14.084 [2024-10-09 11:03:33.939433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:14.084 passed 00:23:14.084 Test: blockdev nvme passthru rw ...passed 00:23:14.084 Test: blockdev nvme passthru vendor specific ...[2024-10-09 11:03:34.023993] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:14.084 [2024-10-09 11:03:34.024006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:14.084 [2024-10-09 11:03:34.024233] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:14.084 [2024-10-09 11:03:34.024241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:14.084 [2024-10-09 11:03:34.024337] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:14.084 [2024-10-09 11:03:34.024344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:14.084 [2024-10-09 11:03:34.024437] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:14.084 [2024-10-09 11:03:34.024445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:14.084 passed 00:23:14.084 Test: blockdev nvme admin passthru ...passed 00:23:14.084 Test: blockdev copy ...passed 00:23:14.084 00:23:14.084 Run Summary: Type Total Ran Passed Failed Inactive 00:23:14.084 suites 1 1 n/a 0 0 00:23:14.084 tests 23 23 23 0 0 00:23:14.084 asserts 152 152 152 0 n/a 00:23:14.084 00:23:14.084 Elapsed time = 1.027 seconds 00:23:14.344 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:14.344 11:03:34 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.344 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:14.344 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.344 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:23:14.344 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:23:14.344 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@514 -- # nvmfcleanup 00:23:14.344 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:23:14.344 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:14.344 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:23:14.344 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:14.344 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:14.606 rmmod nvme_tcp 00:23:14.606 rmmod nvme_fabrics 00:23:14.606 rmmod nvme_keyring 00:23:14.606 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:14.606 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:23:14.606 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:23:14.606 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@515 -- # '[' -n 1890329 ']' 00:23:14.606 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # killprocess 1890329 00:23:14.606 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 1890329 ']' 00:23:14.606 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 1890329 00:23:14.606 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:23:14.606 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:14.606 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1890329 00:23:14.606 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:23:14.606 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:23:14.606 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1890329' 00:23:14.606 killing process with pid 1890329 00:23:14.606 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 1890329 00:23:14.606 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 1890329 00:23:14.868 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:23:14.868 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:23:14.868 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@522 -- # nvmf_tcp_fini 
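The teardown just logged follows a fixed order. A condensed sketch of it (hedged: $SPDK_DIR and $NVMF_PID are placeholders for the harness's own variables, not verbatim names):

```bash
# Teardown order as exercised above; variables are stand-ins.
"$SPDK_DIR/scripts/rpc.py" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
sync                                  # flush I/O before tearing the stack down
modprobe -v -r nvme-tcp               # rmmods nvme_tcp, nvme_fabrics, nvme_keyring
modprobe -v -r nvme-fabrics           # no-op if the dependency chain already dropped it
kill "$NVMF_PID" && wait "$NVMF_PID"  # killprocess: stop the nvmf_tgt reactors
```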
00:23:14.868 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:23:14.868 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # iptables-save 00:23:14.868 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # iptables-restore 00:23:14.868 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:23:14.868 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:14.868 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:14.868 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:14.868 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:14.868 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:17.414 11:03:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:17.414 00:23:17.414 real 0m12.207s 00:23:17.414 user 0m14.165s 00:23:17.414 sys 0m6.409s 00:23:17.414 11:03:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:17.414 11:03:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:17.414 ************************************ 00:23:17.414 END TEST nvmf_bdevio_no_huge 00:23:17.414 ************************************ 00:23:17.414 11:03:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:23:17.414 11:03:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:17.414 11:03:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:17.414 11:03:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:17.414 ************************************ 00:23:17.414 START TEST nvmf_tls 00:23:17.414 ************************************ 00:23:17.414 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:23:17.414 * Looking for test storage... 
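One detail worth calling out from the cleanup above: every firewall rule the harness adds carries an SPDK_NVMF comment (visible further down when ipts inserts the ACCEPT rule for port 4420), so the iptr helper can strip exactly its own rules with a save/filter/restore round trip:

```bash
# Drop only SPDK-tagged rules; pipeline taken verbatim from the iptr helper above.
iptables-save | grep -v SPDK_NVMF | iptables-restore
```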
00:23:17.414 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:17.414 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:17.414 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lcov --version 00:23:17.414 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:17.414 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:17.414 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:17.414 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:17.414 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:17.414 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:23:17.414 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:23:17.414 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:23:17.414 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:23:17.414 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:23:17.414 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:23:17.414 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:23:17.414 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:17.414 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:23:17.414 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:23:17.414 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:17.414 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:17.414 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:23:17.414 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:23:17.414 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:17.414 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:23:17.414 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:23:17.414 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:23:17.414 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:23:17.414 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:17.414 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:23:17.414 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:23:17.414 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:17.414 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:17.414 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:23:17.414 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:17.414 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:17.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:17.414 --rc genhtml_branch_coverage=1 00:23:17.414 --rc genhtml_function_coverage=1 00:23:17.414 --rc genhtml_legend=1 00:23:17.414 --rc geninfo_all_blocks=1 00:23:17.414 --rc geninfo_unexecuted_blocks=1 00:23:17.414 00:23:17.414 ' 00:23:17.414 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:17.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:17.414 --rc genhtml_branch_coverage=1 00:23:17.414 --rc genhtml_function_coverage=1 00:23:17.414 --rc genhtml_legend=1 00:23:17.414 --rc geninfo_all_blocks=1 00:23:17.414 --rc geninfo_unexecuted_blocks=1 00:23:17.414 00:23:17.414 ' 00:23:17.414 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:17.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:17.414 --rc genhtml_branch_coverage=1 00:23:17.414 --rc genhtml_function_coverage=1 00:23:17.414 --rc genhtml_legend=1 00:23:17.414 --rc geninfo_all_blocks=1 00:23:17.414 --rc geninfo_unexecuted_blocks=1 00:23:17.414 00:23:17.414 ' 00:23:17.414 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:17.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:17.414 --rc genhtml_branch_coverage=1 00:23:17.414 --rc genhtml_function_coverage=1 00:23:17.414 --rc genhtml_legend=1 00:23:17.414 --rc geninfo_all_blocks=1 00:23:17.414 --rc geninfo_unexecuted_blocks=1 00:23:17.414 00:23:17.414 ' 00:23:17.414 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:17.414 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:23:17.414 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
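The lcov check traced above boils down to a field-wise version comparison. A minimal sketch of that logic, reconstructed from the xtrace rather than copied from scripts/common.sh:

```bash
# lt A B: succeed when version A sorts strictly before version B.
lt() {
    local -a ver1 ver2
    IFS='.-:' read -ra ver1 <<< "$1"    # split on '.', '-' and ':'
    IFS='.-:' read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do   # missing fields compare as 0
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1                            # equal is not less-than
}
lt 1.15 2 && echo "old lcov"            # the comparison performed above
```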
00:23:17.414 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:17.414 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:17.414 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:17.414 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:17.414 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:17.414 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:17.414 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:17.414 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:17.414 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:17.414 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:17.414 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:17.415 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:17.415 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:17.415 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:17.415 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:17.415 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:17.415 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:23:17.415 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:17.415 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:17.415 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:17.415 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.415 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.415 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.415 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:23:17.415 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.415 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:23:17.415 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:17.415 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:17.415 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:17.415 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:17.415 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:17.415 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:17.415 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:17.415 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:17.415 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:17.415 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:17.415 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:17.415 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:23:17.415 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@467 -- # '[' -z tcp ']' 00:23:17.415 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:17.415 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # prepare_net_devs 00:23:17.415 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@436 -- # local -g is_hw=no 00:23:17.415 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # remove_spdk_ns 00:23:17.415 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:17.415 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:17.415 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:17.415 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:23:17.415 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:23:17.415 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:23:17.415 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:25.557 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:25.557 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:23:25.557 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:25.557 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:25.557 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:25.557 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:25.557 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:25.557 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:23:25.557 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:25.557 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:23:25.557 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:23:25.557 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:23:25.557 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:23:25.557 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:23:25.557 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:23:25.557 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:25.557 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:25.557 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:25.557 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:25.557 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:25.557 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:23:25.557 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:25.557 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:25.558 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:25.558 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:25.558 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:25.558 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:25.558 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:25.558 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:25.558 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:25.558 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:25.558 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:25.558 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:25.558 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:25.558 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:25.558 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:25.558 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:25.558 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:25.558 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:25.558 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:25.558 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:25.558 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:25.558 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:25.558 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:25.558 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:25.558 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:25.558 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:25.558 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:25.558 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:25.558 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:25.558 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:25.558 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:25.558 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:25.558 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:25.558 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:25.558 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:25.558 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:25.558 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:25.558 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:25.558 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:25.558 Found net devices under 0000:31:00.0: cvl_0_0 00:23:25.558 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:25.558 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:25.558 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:25.558 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:25.558 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:25.558 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:25.558 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:25.558 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:25.558 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:25.558 Found net devices under 0000:31:00.1: cvl_0_1 00:23:25.558 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:25.558 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:23:25.558 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # is_hw=yes 00:23:25.558 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:23:25.558 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:23:25.558 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:23:25.558 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:25.558 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:25.558 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:25.558 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:25.558 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:25.558 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:25.558 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:25.558 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:25.558 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:23:25.558 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:25.558 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:25.558 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:25.558 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:25.558 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:25.558 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:25.558 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:25.558 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:25.558 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:25.558 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:25.558 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:25.558 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:25.558 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:25.558 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:25.558 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:25.558 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.602 ms 00:23:25.558 00:23:25.558 --- 10.0.0.2 ping statistics --- 00:23:25.558 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:25.558 rtt min/avg/max/mdev = 0.602/0.602/0.602/0.000 ms 00:23:25.558 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:25.558 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:25.558 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.314 ms 00:23:25.558 00:23:25.558 --- 10.0.0.1 ping statistics --- 00:23:25.558 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:25.558 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:23:25.558 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:25.558 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # return 0 00:23:25.558 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:23:25.558 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:25.558 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:23:25.558 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:23:25.558 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:25.558 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:23:25.558 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:23:25.558 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:23:25.558 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:25.558 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:25.558 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:25.558 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=1895092 00:23:25.558 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1895092 00:23:25.558 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:23:25.558 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1895092 ']' 00:23:25.558 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:25.558 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:25.558 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:25.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:25.558 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:25.558 11:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:25.558 [2024-10-09 11:03:44.850252] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:23:25.558 [2024-10-09 11:03:44.850320] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:25.558 [2024-10-09 11:03:44.995404] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:23:25.558 [2024-10-09 11:03:45.044530] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:25.558 [2024-10-09 11:03:45.070730] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:25.558 [2024-10-09 11:03:45.070769] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:25.559 [2024-10-09 11:03:45.070778] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:25.559 [2024-10-09 11:03:45.070785] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:25.559 [2024-10-09 11:03:45.070791] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:25.559 [2024-10-09 11:03:45.071589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:25.820 11:03:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:25.820 11:03:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:25.820 11:03:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:25.820 11:03:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:25.820 11:03:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:25.820 11:03:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:25.820 11:03:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:23:25.820 11:03:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:23:26.082 true 00:23:26.082 11:03:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:26.082 11:03:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:23:26.082 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:23:26.082 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:23:26.082 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:23:26.350 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:26.350 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:23:26.611 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:23:26.611 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:23:26.611 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:23:26.871 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:26.871 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:23:26.871 11:03:46 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:23:26.871 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:23:26.871 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:26.871 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:23:27.132 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:23:27.132 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:23:27.132 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:23:27.393 11:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:27.393 11:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:23:27.393 11:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:23:27.393 11:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:23:27.393 11:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:23:27.654 11:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:27.654 11:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:23:27.915 11:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:23:27.915 11:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:23:27.915 11:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:23:27.915 11:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:23:27.915 11:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # local prefix key digest 00:23:27.915 11:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:23:27.915 11:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:23:27.915 11:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=1 00:23:27.915 11:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:23:27.915 11:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:27.915 11:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:23:27.915 11:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:23:27.915 11:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # local prefix key digest 00:23:27.915 11:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:23:27.915 11:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # 
key=ffeeddccbbaa99887766554433221100 00:23:27.915 11:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=1 00:23:27.915 11:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:23:27.915 11:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:23:27.915 11:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:23:27.915 11:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.g8EWQ5vxcm 00:23:27.915 11:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:23:27.915 11:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.Z6EraXrKbV 00:23:27.915 11:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:27.915 11:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:23:27.915 11:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.g8EWQ5vxcm 00:23:27.915 11:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.Z6EraXrKbV 00:23:27.915 11:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:23:28.176 11:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:23:28.436 11:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.g8EWQ5vxcm 00:23:28.436 11:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.g8EWQ5vxcm 00:23:28.436 11:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:28.697 [2024-10-09 11:03:48.454200] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:28.697 11:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:28.697 11:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:28.957 [2024-10-09 11:03:48.810193] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:28.957 [2024-10-09 11:03:48.810386] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:28.957 11:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:29.217 malloc0 00:23:29.217 11:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:29.217 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.g8EWQ5vxcm 
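The two PSKs generated above follow the NVMe/TCP (TP-8018) interchange format: base64 of the configured secret plus its CRC-32, wrapped as NVMeTLSkey-1:<hash>:<b64>:, with hash indicator 01 selecting SHA-256. A sketch of the encoding, mirroring the inline python the script shells out to (an assumed reconstruction of format_interchange_psk, including the little-endian CRC byte order):

```bash
secret=00112233445566778899aabbccddeeff   # first test secret from the log above
psk=$(python3 - "$secret" <<'EOF'
import base64, sys, zlib
secret = sys.argv[1].encode()                    # secret is used as ASCII bytes
crc = zlib.crc32(secret).to_bytes(4, "little")   # assumed byte order
print(f"NVMeTLSkey-1:01:{base64.b64encode(secret + crc).decode()}:")
EOF
)
echo -n "$psk" > /tmp/psk.key && chmod 0600 /tmp/psk.key  # keyring expects 0600, as chmodded above
```

If those assumptions hold, the output reproduces the NVMeTLSkey-1:01:MDAx... key logged above.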
00:23:29.476 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:29.735 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.g8EWQ5vxcm 00:23:39.727 Initializing NVMe Controllers 00:23:39.727 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:39.727 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:39.727 Initialization complete. Launching workers. 00:23:39.727 ======================================================== 00:23:39.727 Latency(us) 00:23:39.727 Device Information : IOPS MiB/s Average min max 00:23:39.727 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18425.26 71.97 3473.52 1004.75 4212.17 00:23:39.727 ======================================================== 00:23:39.727 Total : 18425.26 71.97 3473.52 1004.75 4212.17 00:23:39.727 00:23:39.727 11:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.g8EWQ5vxcm 00:23:39.727 11:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:39.727 11:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:39.727 11:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:39.727 11:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.g8EWQ5vxcm 00:23:39.727 11:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:39.727 11:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1898113 00:23:39.727 11:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:39.727 11:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1898113 /var/tmp/bdevperf.sock 00:23:39.727 11:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:39.727 11:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1898113 ']' 00:23:39.727 11:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:39.727 11:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:39.727 11:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:39.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
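While bdevperf comes up, the target-side TLS bring-up the log just walked through condenses to this RPC sequence (rpc.py path shortened for readability; every subcommand and flag appears verbatim in the xtrace above):

```bash
rpc.py sock_set_default_impl -i ssl
rpc.py sock_impl_set_options -i ssl --tls-version 13   # pin TLS 1.3
rpc.py framework_start_init                            # target was started with --wait-for-rpc
rpc.py nvmf_create_transport -t tcp -o
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420 -k                      # -k requests a TLS-secured listener
rpc.py bdev_malloc_create 32 4096 -b malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
rpc.py keyring_file_add_key key0 /tmp/tmp.g8EWQ5vxcm   # register the PSK file
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
    nqn.2016-06.io.spdk:host1 --psk key0               # tie the PSK to this host NQN
```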
00:23:39.727 11:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:39.727 11:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:39.727 [2024-10-09 11:03:59.698984] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:23:39.727 [2024-10-09 11:03:59.699042] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1898113 ] 00:23:39.988 [2024-10-09 11:03:59.829011] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:39.988 [2024-10-09 11:03:59.850683] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:39.988 [2024-10-09 11:03:59.866736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:40.558 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:40.558 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:40.558 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.g8EWQ5vxcm 00:23:40.819 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:40.819 [2024-10-09 11:04:00.805745] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:41.079 TLSTESTn1 00:23:41.079 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:41.079 Running I/O for 10 seconds... 
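The initiator side is symmetric: bdevperf exposes its own RPC socket, gets the identical key registered, and attaches with --psk, which is what created TLSTESTn1 above:

```bash
rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.g8EWQ5vxcm
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
```

The negative case further down repeats this with the second key (/tmp/tmp.Z6EraXrKbV), which the target was never told about, so the handshake fails and bdev_nvme_attach_controller returns the Input/output error captured below.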
00:23:43.405 5745.00 IOPS, 22.44 MiB/s [2024-10-09T09:04:04.347Z] 5132.50 IOPS, 20.05 MiB/s [2024-10-09T09:04:05.286Z] 5064.33 IOPS, 19.78 MiB/s [2024-10-09T09:04:06.225Z] 5233.25 IOPS, 20.44 MiB/s [2024-10-09T09:04:07.234Z] 5249.20 IOPS, 20.50 MiB/s [2024-10-09T09:04:08.175Z] 5220.00 IOPS, 20.39 MiB/s [2024-10-09T09:04:09.115Z] 5317.43 IOPS, 20.77 MiB/s [2024-10-09T09:04:10.055Z] 5383.50 IOPS, 21.03 MiB/s [2024-10-09T09:04:10.994Z] 5433.00 IOPS, 21.22 MiB/s [2024-10-09T09:04:10.994Z] 5445.20 IOPS, 21.27 MiB/s 00:23:50.992 Latency(us) 00:23:50.992 [2024-10-09T09:04:10.994Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:50.992 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:50.992 Verification LBA range: start 0x0 length 0x2000 00:23:50.992 TLSTESTn1 : 10.01 5451.95 21.30 0.00 0.00 23441.47 4652.99 37004.96 00:23:50.992 [2024-10-09T09:04:10.994Z] =================================================================================================================== 00:23:50.992 [2024-10-09T09:04:10.994Z] Total : 5451.95 21.30 0.00 0.00 23441.47 4652.99 37004.96 00:23:50.992 { 00:23:50.992 "results": [ 00:23:50.992 { 00:23:50.993 "job": "TLSTESTn1", 00:23:50.993 "core_mask": "0x4", 00:23:50.993 "workload": "verify", 00:23:50.993 "status": "finished", 00:23:50.993 "verify_range": { 00:23:50.993 "start": 0, 00:23:50.993 "length": 8192 00:23:50.993 }, 00:23:50.993 "queue_depth": 128, 00:23:50.993 "io_size": 4096, 00:23:50.993 "runtime": 10.010906, 00:23:50.993 "iops": 5451.954098859784, 00:23:50.993 "mibps": 21.29669569867103, 00:23:50.993 "io_failed": 0, 00:23:50.993 "io_timeout": 0, 00:23:50.993 "avg_latency_us": 23441.468386751705, 00:23:50.993 "min_latency_us": 4652.990310725025, 00:23:50.993 "max_latency_us": 37004.95823588373 00:23:50.993 } 00:23:50.993 ], 00:23:50.993 "core_count": 1 00:23:50.993 } 00:23:51.253 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:51.253 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1898113 00:23:51.253 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1898113 ']' 00:23:51.253 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1898113 00:23:51.253 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:51.253 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:51.253 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1898113 00:23:51.253 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:51.253 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:51.253 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1898113' 00:23:51.253 killing process with pid 1898113 00:23:51.253 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1898113 00:23:51.253 Received shutdown signal, test time was about 10.000000 seconds 00:23:51.253 00:23:51.253 Latency(us) 00:23:51.253 [2024-10-09T09:04:11.255Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:51.253 [2024-10-09T09:04:11.255Z] 
=================================================================================================================== 00:23:51.253 [2024-10-09T09:04:11.255Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:51.253 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1898113 00:23:51.253 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Z6EraXrKbV 00:23:51.253 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:51.253 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Z6EraXrKbV 00:23:51.253 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:51.253 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:51.253 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:51.253 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:51.253 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Z6EraXrKbV 00:23:51.253 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:51.253 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:51.253 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:51.253 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Z6EraXrKbV 00:23:51.253 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:51.254 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1900174 00:23:51.254 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:51.254 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1900174 /var/tmp/bdevperf.sock 00:23:51.254 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:51.254 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1900174 ']' 00:23:51.254 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:51.254 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:51.254 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:51.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:23:51.254 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:51.254 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:51.254 [2024-10-09 11:04:11.238042] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:23:51.254 [2024-10-09 11:04:11.238098] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1900174 ] 00:23:51.514 [2024-10-09 11:04:11.368212] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:51.514 [2024-10-09 11:04:11.390817] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:51.514 [2024-10-09 11:04:11.406145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:52.084 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:52.084 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:52.084 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Z6EraXrKbV 00:23:52.344 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:52.605 [2024-10-09 11:04:12.361275] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:52.605 [2024-10-09 11:04:12.365916] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:52.605 [2024-10-09 11:04:12.366530] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2033950 (107): Transport endpoint is not connected 00:23:52.605 [2024-10-09 11:04:12.367523] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2033950 (9): Bad file descriptor 00:23:52.605 [2024-10-09 11:04:12.368522] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:52.605 [2024-10-09 11:04:12.368530] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:52.605 [2024-10-09 11:04:12.368537] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:23:52.605 [2024-10-09 11:04:12.368545] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
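The attach above is meant to fail: this case (target/tls.sh test 147, wrapped in NOT) hands the initiator a PSK the target evidently does not accept, so the TCP connection is dropped during the TLS handshake, every read on the qpair returns ENOTCONN (errno 107), and the controller lands in the failed state. The request/response dump that follows echoes the same outcome at the JSON-RPC layer: bdev_nvme_attach_controller returns code -5, Input/output error, which is precisely what NOT asserts. A sketch of the inversion the NOT wrapper performs (the real helper in autotest_common.sh also validates its argument and clamps the exit status, as its xtrace shows; this minimal form is hypothetical):

# Hypothetical minimal form of NOT(): succeed only if the wrapped command fails.
NOT() { if "$@"; then return 1; else return 0; fi; }
NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Z6EraXrKbV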
00:23:52.605 request: 00:23:52.605 { 00:23:52.605 "name": "TLSTEST", 00:23:52.605 "trtype": "tcp", 00:23:52.605 "traddr": "10.0.0.2", 00:23:52.605 "adrfam": "ipv4", 00:23:52.605 "trsvcid": "4420", 00:23:52.605 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:52.605 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:52.605 "prchk_reftag": false, 00:23:52.605 "prchk_guard": false, 00:23:52.605 "hdgst": false, 00:23:52.605 "ddgst": false, 00:23:52.605 "psk": "key0", 00:23:52.605 "allow_unrecognized_csi": false, 00:23:52.605 "method": "bdev_nvme_attach_controller", 00:23:52.605 "req_id": 1 00:23:52.605 } 00:23:52.605 Got JSON-RPC error response 00:23:52.605 response: 00:23:52.605 { 00:23:52.605 "code": -5, 00:23:52.605 "message": "Input/output error" 00:23:52.605 } 00:23:52.605 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1900174 00:23:52.605 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1900174 ']' 00:23:52.605 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1900174 00:23:52.605 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:52.605 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:52.605 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1900174 00:23:52.605 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:52.605 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:52.605 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1900174' 00:23:52.605 killing process with pid 1900174 00:23:52.605 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1900174 00:23:52.605 Received shutdown signal, test time was about 10.000000 seconds 00:23:52.605 00:23:52.605 Latency(us) 00:23:52.605 [2024-10-09T09:04:12.607Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:52.605 [2024-10-09T09:04:12.607Z] =================================================================================================================== 00:23:52.605 [2024-10-09T09:04:12.607Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:52.605 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1900174 00:23:52.605 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:52.605 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:52.605 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:52.605 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:52.605 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:52.605 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.g8EWQ5vxcm 00:23:52.605 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:52.605 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.g8EWQ5vxcm 00:23:52.605 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:52.605 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:52.605 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:52.605 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:52.605 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.g8EWQ5vxcm 00:23:52.605 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:52.605 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:52.605 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:23:52.605 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.g8EWQ5vxcm 00:23:52.605 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:52.605 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1900518 00:23:52.605 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:52.605 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1900518 /var/tmp/bdevperf.sock 00:23:52.605 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:52.605 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1900518 ']' 00:23:52.605 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:52.605 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:52.605 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:52.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:52.605 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:52.605 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:52.866 [2024-10-09 11:04:12.612619] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:23:52.866 [2024-10-09 11:04:12.612672] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1900518 ] 00:23:52.866 [2024-10-09 11:04:12.742495] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:23:52.866 [2024-10-09 11:04:12.766823] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:52.866 [2024-10-09 11:04:12.780961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:53.438 11:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:53.438 11:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:53.438 11:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.g8EWQ5vxcm 00:23:53.698 11:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:23:53.959 [2024-10-09 11:04:13.744079] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:53.959 [2024-10-09 11:04:13.753626] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:53.959 [2024-10-09 11:04:13.753646] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:53.959 [2024-10-09 11:04:13.753665] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:53.959 [2024-10-09 11:04:13.754225] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc4f950 (107): Transport endpoint is not connected 00:23:53.959 [2024-10-09 11:04:13.755218] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc4f950 (9): Bad file descriptor 00:23:53.959 [2024-10-09 11:04:13.756217] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:53.959 [2024-10-09 11:04:13.756228] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:53.959 [2024-10-09 11:04:13.756234] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:23:53.959 [2024-10-09 11:04:13.756242] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
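This second negative case fails earlier and on the target side: the key material is fine, but the target resolves PSKs by the identity string "NVMe0R01 <hostnqn> <subnqn>", and nothing was ever registered for host2 against cnode1, hence the "Could not find PSK for identity" errors in the trace above, followed by the same ENOTCONN teardown and the -5 response dumped below. What would make that identity resolve is the target-side authorization RPC that the setup path uses later in this log (a sketch; host2 is deliberately left unauthorized in this test):

# Binds a PSK to one exact (hostnqn, subnqn) pair on the target; cf. the
# nvmf_subsystem_add_host call in the setup_nvmf_tgt trace further down.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK/scripts/rpc.py" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
    nqn.2016-06.io.spdk:host2 --psk key0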
00:23:53.959 request: 00:23:53.959 { 00:23:53.959 "name": "TLSTEST", 00:23:53.959 "trtype": "tcp", 00:23:53.959 "traddr": "10.0.0.2", 00:23:53.959 "adrfam": "ipv4", 00:23:53.959 "trsvcid": "4420", 00:23:53.959 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:53.959 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:53.959 "prchk_reftag": false, 00:23:53.959 "prchk_guard": false, 00:23:53.959 "hdgst": false, 00:23:53.959 "ddgst": false, 00:23:53.959 "psk": "key0", 00:23:53.959 "allow_unrecognized_csi": false, 00:23:53.959 "method": "bdev_nvme_attach_controller", 00:23:53.959 "req_id": 1 00:23:53.959 } 00:23:53.959 Got JSON-RPC error response 00:23:53.959 response: 00:23:53.959 { 00:23:53.959 "code": -5, 00:23:53.959 "message": "Input/output error" 00:23:53.959 } 00:23:53.959 11:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1900518 00:23:53.959 11:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1900518 ']' 00:23:53.959 11:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1900518 00:23:53.959 11:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:53.959 11:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:53.959 11:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1900518 00:23:53.959 11:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:53.959 11:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:53.959 11:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1900518' 00:23:53.959 killing process with pid 1900518 00:23:53.959 11:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1900518 00:23:53.959 Received shutdown signal, test time was about 10.000000 seconds 00:23:53.959 00:23:53.959 Latency(us) 00:23:53.959 [2024-10-09T09:04:13.961Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:53.959 [2024-10-09T09:04:13.961Z] =================================================================================================================== 00:23:53.959 [2024-10-09T09:04:13.961Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:53.959 11:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1900518 00:23:53.959 11:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:53.959 11:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:53.959 11:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:53.959 11:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:53.959 11:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:53.959 11:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.g8EWQ5vxcm 00:23:53.959 11:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:53.959 11:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.g8EWQ5vxcm 00:23:53.959 11:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:53.959 11:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:53.959 11:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:53.959 11:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:53.959 11:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.g8EWQ5vxcm 00:23:53.959 11:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:53.959 11:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:23:53.959 11:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:53.959 11:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.g8EWQ5vxcm 00:23:53.959 11:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:53.959 11:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1900856 00:23:53.959 11:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:53.959 11:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1900856 /var/tmp/bdevperf.sock 00:23:53.959 11:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:53.959 11:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1900856 ']' 00:23:53.959 11:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:53.959 11:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:53.959 11:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:53.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:53.959 11:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:53.959 11:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:54.279 [2024-10-09 11:04:13.996267] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:23:54.279 [2024-10-09 11:04:13.996320] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1900856 ] 00:23:54.279 [2024-10-09 11:04:14.126774] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:23:54.279 [2024-10-09 11:04:14.149660] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:54.279 [2024-10-09 11:04:14.164176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:54.850 11:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:54.850 11:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:54.850 11:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.g8EWQ5vxcm 00:23:55.111 11:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:55.373 [2024-10-09 11:04:15.139250] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:55.373 [2024-10-09 11:04:15.143777] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:55.373 [2024-10-09 11:04:15.143794] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:55.373 [2024-10-09 11:04:15.143813] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:55.373 [2024-10-09 11:04:15.144428] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ef950 (107): Transport endpoint is not connected 00:23:55.373 [2024-10-09 11:04:15.145420] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ef950 (9): Bad file descriptor 00:23:55.373 [2024-10-09 11:04:15.146419] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:23:55.373 [2024-10-09 11:04:15.146428] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:55.373 [2024-10-09 11:04:15.146434] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:23:55.373 [2024-10-09 11:04:15.146442] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
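The third case mirrors the second: now the host NQN is the authorized one but the subsystem NQN is not, so the identity "NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2" has no entry either, and the RPC fails the same way in the dump below. Taken together, the two lookups show that a target-side PSK is scoped to an exact (hostnqn, subnqn) pair rather than to the host alone; satisfying this one would take a second subsystem plus its own add_host (a sketch; the serial number is invented for illustration):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK/scripts/rpc.py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 \
    -s SPDK00000000000002 -m 10   # serial number hypothetical
"$SPDK/scripts/rpc.py" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode2 \
    nqn.2016-06.io.spdk:host1 --psk key0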
00:23:55.373 request: 00:23:55.373 { 00:23:55.373 "name": "TLSTEST", 00:23:55.373 "trtype": "tcp", 00:23:55.373 "traddr": "10.0.0.2", 00:23:55.373 "adrfam": "ipv4", 00:23:55.373 "trsvcid": "4420", 00:23:55.373 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:55.373 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:55.373 "prchk_reftag": false, 00:23:55.373 "prchk_guard": false, 00:23:55.373 "hdgst": false, 00:23:55.373 "ddgst": false, 00:23:55.373 "psk": "key0", 00:23:55.373 "allow_unrecognized_csi": false, 00:23:55.373 "method": "bdev_nvme_attach_controller", 00:23:55.373 "req_id": 1 00:23:55.373 } 00:23:55.373 Got JSON-RPC error response 00:23:55.373 response: 00:23:55.373 { 00:23:55.373 "code": -5, 00:23:55.373 "message": "Input/output error" 00:23:55.373 } 00:23:55.373 11:04:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1900856 00:23:55.373 11:04:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1900856 ']' 00:23:55.373 11:04:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1900856 00:23:55.373 11:04:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:55.373 11:04:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:55.373 11:04:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1900856 00:23:55.373 11:04:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:55.373 11:04:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:55.373 11:04:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1900856' 00:23:55.373 killing process with pid 1900856 00:23:55.373 11:04:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1900856 00:23:55.373 Received shutdown signal, test time was about 10.000000 seconds 00:23:55.373 00:23:55.373 Latency(us) 00:23:55.373 [2024-10-09T09:04:15.375Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:55.373 [2024-10-09T09:04:15.375Z] =================================================================================================================== 00:23:55.373 [2024-10-09T09:04:15.375Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:55.373 11:04:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1900856 00:23:55.373 11:04:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:55.373 11:04:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:55.373 11:04:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:55.373 11:04:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:55.373 11:04:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:55.373 11:04:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:55.373 11:04:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:55.373 11:04:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:55.373 
11:04:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:55.373 11:04:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:55.373 11:04:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:55.373 11:04:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:55.373 11:04:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:55.373 11:04:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:55.373 11:04:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:55.373 11:04:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:55.373 11:04:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:23:55.373 11:04:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:55.373 11:04:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1901177 00:23:55.373 11:04:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:55.373 11:04:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1901177 /var/tmp/bdevperf.sock 00:23:55.373 11:04:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:55.373 11:04:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1901177 ']' 00:23:55.373 11:04:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:55.373 11:04:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:55.373 11:04:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:55.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:55.373 11:04:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:55.373 11:04:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:55.633 [2024-10-09 11:04:15.396602] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:23:55.633 [2024-10-09 11:04:15.396669] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1901177 ] 00:23:55.633 [2024-10-09 11:04:15.528157] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:23:55.633 [2024-10-09 11:04:15.550750] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:55.634 [2024-10-09 11:04:15.565664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:56.204 11:04:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:56.204 11:04:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:56.204 11:04:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:23:56.464 [2024-10-09 11:04:16.340648] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:23:56.464 [2024-10-09 11:04:16.340675] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:56.464 request: 00:23:56.464 { 00:23:56.464 "name": "key0", 00:23:56.464 "path": "", 00:23:56.464 "method": "keyring_file_add_key", 00:23:56.464 "req_id": 1 00:23:56.464 } 00:23:56.464 Got JSON-RPC error response 00:23:56.464 response: 00:23:56.464 { 00:23:56.464 "code": -1, 00:23:56.464 "message": "Operation not permitted" 00:23:56.464 } 00:23:56.464 11:04:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:56.725 [2024-10-09 11:04:16.516751] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:56.725 [2024-10-09 11:04:16.516777] bdev_nvme.c:6391:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:23:56.725 request: 00:23:56.725 { 00:23:56.725 "name": "TLSTEST", 00:23:56.725 "trtype": "tcp", 00:23:56.725 "traddr": "10.0.0.2", 00:23:56.725 "adrfam": "ipv4", 00:23:56.725 "trsvcid": "4420", 00:23:56.725 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:56.725 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:56.725 "prchk_reftag": false, 00:23:56.725 "prchk_guard": false, 00:23:56.725 "hdgst": false, 00:23:56.725 "ddgst": false, 00:23:56.725 "psk": "key0", 00:23:56.725 "allow_unrecognized_csi": false, 00:23:56.725 "method": "bdev_nvme_attach_controller", 00:23:56.725 "req_id": 1 00:23:56.725 } 00:23:56.725 Got JSON-RPC error response 00:23:56.725 response: 00:23:56.725 { 00:23:56.725 "code": -126, 00:23:56.725 "message": "Required key not available" 00:23:56.725 } 00:23:56.725 11:04:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1901177 00:23:56.725 11:04:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1901177 ']' 00:23:56.725 11:04:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1901177 00:23:56.725 11:04:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:56.725 11:04:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:56.725 11:04:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1901177 00:23:56.725 11:04:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:56.725 11:04:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:56.725 11:04:16 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1901177' 00:23:56.725 killing process with pid 1901177 00:23:56.725 11:04:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1901177 00:23:56.725 Received shutdown signal, test time was about 10.000000 seconds 00:23:56.725 00:23:56.725 Latency(us) 00:23:56.725 [2024-10-09T09:04:16.727Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:56.725 [2024-10-09T09:04:16.727Z] =================================================================================================================== 00:23:56.725 [2024-10-09T09:04:16.727Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:56.725 11:04:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1901177 00:23:56.725 11:04:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:56.725 11:04:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:56.725 11:04:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:56.725 11:04:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:56.725 11:04:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:56.725 11:04:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 1895092 00:23:56.725 11:04:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1895092 ']' 00:23:56.725 11:04:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1895092 00:23:56.726 11:04:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:56.726 11:04:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:56.726 11:04:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1895092 00:23:56.987 11:04:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:56.987 11:04:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:56.987 11:04:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1895092' 00:23:56.987 killing process with pid 1895092 00:23:56.987 11:04:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1895092 00:23:56.987 11:04:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1895092 00:23:56.987 11:04:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:23:56.987 11:04:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:23:56.987 11:04:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # local prefix key digest 00:23:56.987 11:04:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:23:56.987 11:04:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:23:56.987 11:04:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=2 00:23:56.987 11:04:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # 
python - 00:23:56.987 11:04:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:56.987 11:04:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:23:56.987 11:04:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.6zbeuGWMSQ 00:23:56.987 11:04:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:56.987 11:04:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.6zbeuGWMSQ 00:23:56.987 11:04:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:23:56.987 11:04:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:56.987 11:04:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:56.987 11:04:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:56.987 11:04:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=1901469 00:23:56.987 11:04:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1901469 00:23:56.987 11:04:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:56.988 11:04:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1901469 ']' 00:23:56.988 11:04:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:56.988 11:04:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:56.988 11:04:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:56.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:56.988 11:04:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:56.988 11:04:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:56.988 [2024-10-09 11:04:16.984856] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:23:56.988 [2024-10-09 11:04:16.984922] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:57.248 [2024-10-09 11:04:17.123799] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:57.248 [2024-10-09 11:04:17.170097] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:57.248 [2024-10-09 11:04:17.185804] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:57.248 [2024-10-09 11:04:17.185833] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:57.248 [2024-10-09 11:04:17.185839] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:57.248 [2024-10-09 11:04:17.185843] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:57.248 [2024-10-09 11:04:17.185850] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:57.248 [2024-10-09 11:04:17.186361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:57.818 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:57.818 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:57.818 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:57.818 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:57.818 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:58.079 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:58.079 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.6zbeuGWMSQ 00:23:58.079 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.6zbeuGWMSQ 00:23:58.079 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:58.079 [2024-10-09 11:04:17.975158] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:58.079 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:58.339 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:58.339 [2024-10-09 11:04:18.299205] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:58.339 [2024-10-09 11:04:18.299387] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:58.339 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:58.599 malloc0 00:23:58.599 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:58.860 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.6zbeuGWMSQ 00:23:58.860 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:59.121 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6zbeuGWMSQ 00:23:59.121 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:23:59.121 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:59.121 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:59.121 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.6zbeuGWMSQ 00:23:59.121 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:59.121 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:59.121 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1901918 00:23:59.121 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:59.121 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1901918 /var/tmp/bdevperf.sock 00:23:59.121 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1901918 ']' 00:23:59.121 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:59.121 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:59.121 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:59.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:59.121 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:59.121 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:59.121 [2024-10-09 11:04:18.985268] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:23:59.121 [2024-10-09 11:04:18.985336] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1901918 ] 00:23:59.121 [2024-10-09 11:04:19.116740] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:23:59.382 [2024-10-09 11:04:19.139428] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:59.382 [2024-10-09 11:04:19.155637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:59.953 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:59.953 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:59.953 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.6zbeuGWMSQ 00:24:00.213 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:00.213 [2024-10-09 11:04:20.138866] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:00.213 TLSTESTn1 00:24:00.473 11:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:00.473 Running I/O for 10 seconds... 00:24:02.355 5974.00 IOPS, 23.34 MiB/s [2024-10-09T09:04:23.739Z] 6076.50 IOPS, 23.74 MiB/s [2024-10-09T09:04:24.680Z] 5643.00 IOPS, 22.04 MiB/s [2024-10-09T09:04:25.619Z] 5507.75 IOPS, 21.51 MiB/s [2024-10-09T09:04:26.556Z] 5690.20 IOPS, 22.23 MiB/s [2024-10-09T09:04:27.496Z] 5775.83 IOPS, 22.56 MiB/s [2024-10-09T09:04:28.436Z] 5797.86 IOPS, 22.65 MiB/s [2024-10-09T09:04:29.379Z] 5666.38 IOPS, 22.13 MiB/s [2024-10-09T09:04:30.502Z] 5712.11 IOPS, 22.31 MiB/s [2024-10-09T09:04:30.502Z] 5755.60 IOPS, 22.48 MiB/s 00:24:10.500 Latency(us) 00:24:10.500 [2024-10-09T09:04:30.502Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:10.500 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:10.500 Verification LBA range: start 0x0 length 0x2000 00:24:10.500 TLSTESTn1 : 10.05 5739.80 22.42 0.00 0.00 22233.71 5474.11 74885.77 00:24:10.500 [2024-10-09T09:04:30.502Z] =================================================================================================================== 00:24:10.500 [2024-10-09T09:04:30.502Z] Total : 5739.80 22.42 0.00 0.00 22233.71 5474.11 74885.77 00:24:10.500 { 00:24:10.500 "results": [ 00:24:10.500 { 00:24:10.500 "job": "TLSTESTn1", 00:24:10.500 "core_mask": "0x4", 00:24:10.500 "workload": "verify", 00:24:10.500 "status": "finished", 00:24:10.500 "verify_range": { 00:24:10.500 "start": 0, 00:24:10.500 "length": 8192 00:24:10.500 }, 00:24:10.500 "queue_depth": 128, 00:24:10.500 "io_size": 4096, 00:24:10.500 "runtime": 10.049825, 00:24:10.500 "iops": 5739.801439328546, 00:24:10.500 "mibps": 22.42109937237713, 00:24:10.500 "io_failed": 0, 00:24:10.500 "io_timeout": 0, 00:24:10.500 "avg_latency_us": 22233.710419659605, 00:24:10.500 "min_latency_us": 5474.1062479117945, 00:24:10.500 "max_latency_us": 74885.77347143335 00:24:10.500 } 00:24:10.500 ], 00:24:10.500 "core_count": 1 00:24:10.500 } 00:24:10.500 11:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:10.500 11:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1901918 00:24:10.500 11:04:30 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1901918 ']' 00:24:10.500 11:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1901918 00:24:10.500 11:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:10.500 11:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:10.500 11:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1901918 00:24:10.501 11:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:10.501 11:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:24:10.501 11:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1901918' 00:24:10.501 killing process with pid 1901918 00:24:10.501 11:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1901918 00:24:10.501 Received shutdown signal, test time was about 10.000000 seconds 00:24:10.501 00:24:10.501 Latency(us) 00:24:10.501 [2024-10-09T09:04:30.503Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:10.501 [2024-10-09T09:04:30.503Z] =================================================================================================================== 00:24:10.501 [2024-10-09T09:04:30.503Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:10.501 11:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1901918 00:24:10.762 11:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.6zbeuGWMSQ 00:24:10.762 11:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6zbeuGWMSQ 00:24:10.762 11:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:24:10.762 11:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6zbeuGWMSQ 00:24:10.762 11:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:24:10.762 11:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:10.762 11:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:24:10.762 11:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:10.762 11:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6zbeuGWMSQ 00:24:10.762 11:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:10.762 11:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:10.762 11:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:10.762 11:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.6zbeuGWMSQ 00:24:10.762 11:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:10.762 11:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@28 -- # bdevperf_pid=1904039 00:24:10.762 11:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:10.762 11:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1904039 /var/tmp/bdevperf.sock 00:24:10.762 11:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:10.762 11:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1904039 ']' 00:24:10.762 11:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:10.762 11:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:10.762 11:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:10.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:10.762 11:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:10.762 11:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:10.762 [2024-10-09 11:04:30.609958] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:24:10.762 [2024-10-09 11:04:30.610015] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1904039 ] 00:24:10.762 [2024-10-09 11:04:30.739749] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:24:10.762 [2024-10-09 11:04:30.761468] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:11.023 [2024-10-09 11:04:30.777518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:11.594 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:11.594 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:11.594 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.6zbeuGWMSQ 00:24:11.594 [2024-10-09 11:04:31.572763] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.6zbeuGWMSQ': 0100666 00:24:11.594 [2024-10-09 11:04:31.572783] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:24:11.594 request: 00:24:11.594 { 00:24:11.594 "name": "key0", 00:24:11.594 "path": "/tmp/tmp.6zbeuGWMSQ", 00:24:11.594 "method": "keyring_file_add_key", 00:24:11.594 "req_id": 1 00:24:11.594 } 00:24:11.594 Got JSON-RPC error response 00:24:11.594 response: 00:24:11.594 { 00:24:11.594 "code": -1, 00:24:11.594 "message": "Operation not permitted" 00:24:11.594 } 00:24:11.854 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:11.854 [2024-10-09 11:04:31.740855] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:11.854 [2024-10-09 11:04:31.740875] bdev_nvme.c:6391:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:24:11.854 request: 00:24:11.854 { 00:24:11.854 "name": "TLSTEST", 00:24:11.854 "trtype": "tcp", 00:24:11.854 "traddr": "10.0.0.2", 00:24:11.854 "adrfam": "ipv4", 00:24:11.854 "trsvcid": "4420", 00:24:11.854 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:11.854 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:11.854 "prchk_reftag": false, 00:24:11.854 "prchk_guard": false, 00:24:11.854 "hdgst": false, 00:24:11.854 "ddgst": false, 00:24:11.854 "psk": "key0", 00:24:11.854 "allow_unrecognized_csi": false, 00:24:11.854 "method": "bdev_nvme_attach_controller", 00:24:11.854 "req_id": 1 00:24:11.854 } 00:24:11.854 Got JSON-RPC error response 00:24:11.854 response: 00:24:11.854 { 00:24:11.854 "code": -126, 00:24:11.854 "message": "Required key not available" 00:24:11.854 } 00:24:11.854 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1904039 00:24:11.854 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1904039 ']' 00:24:11.854 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1904039 00:24:11.854 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:11.854 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:11.854 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1904039 00:24:11.854 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:11.854 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = 
sudo ']' 00:24:11.854 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1904039' 00:24:11.854 killing process with pid 1904039 00:24:11.854 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1904039 00:24:11.854 Received shutdown signal, test time was about 10.000000 seconds 00:24:11.854 00:24:11.854 Latency(us) 00:24:11.854 [2024-10-09T09:04:31.856Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:11.854 [2024-10-09T09:04:31.856Z] =================================================================================================================== 00:24:11.854 [2024-10-09T09:04:31.857Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:11.855 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1904039 00:24:12.114 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:24:12.115 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:24:12.115 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:12.115 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:12.115 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:12.115 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 1901469 00:24:12.115 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1901469 ']' 00:24:12.115 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1901469 00:24:12.115 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:12.115 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:12.115 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1901469 00:24:12.115 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:12.115 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:12.115 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1901469' 00:24:12.115 killing process with pid 1901469 00:24:12.115 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1901469 00:24:12.115 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1901469 00:24:12.115 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:24:12.115 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:12.115 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:12.115 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:12.115 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=1904307 00:24:12.115 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1904307 00:24:12.115 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:12.115 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1904307 ']' 00:24:12.115 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:12.115 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:12.115 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:12.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:12.115 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:12.115 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:12.376 [2024-10-09 11:04:32.152432] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:24:12.376 [2024-10-09 11:04:32.152493] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:12.376 [2024-10-09 11:04:32.290501] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:24:12.376 [2024-10-09 11:04:32.340030] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:12.376 [2024-10-09 11:04:32.361707] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:12.376 [2024-10-09 11:04:32.361747] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:12.376 [2024-10-09 11:04:32.361753] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:12.376 [2024-10-09 11:04:32.361758] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:12.376 [2024-10-09 11:04:32.361763] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
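The two errors above are this test's intended negative path: SPDK's keyring rejects any PSK file whose mode grants group or other access (the temp file was created 0666, logged as 0100666), and because the key never enters the keyring, the TLS attach then fails with -126 "Required key not available". A minimal sketch of the working precondition, built only from RPCs that appear in this log; $SPDK_DIR is an illustrative stand-in for the checkout path:

    # $SPDK_DIR: illustrative stand-in for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # keyring_file_add_key rejects key files readable by group/other
    chmod 0600 /tmp/tmp.6zbeuGWMSQ
    $SPDK_DIR/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.6zbeuGWMSQ
    # the key is referenced by keyring name, not path, when attaching over TLS
    $SPDK_DIR/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0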
00:24:12.376 [2024-10-09 11:04:32.362371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:12.945 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:12.945 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:12.945 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:12.945 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:12.945 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:13.205 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:13.205 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.6zbeuGWMSQ 00:24:13.205 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:24:13.205 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.6zbeuGWMSQ 00:24:13.205 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:24:13.205 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:13.205 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:24:13.205 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:13.205 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.6zbeuGWMSQ 00:24:13.205 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.6zbeuGWMSQ 00:24:13.205 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:13.205 [2024-10-09 11:04:33.134623] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:13.205 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:13.465 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:13.465 [2024-10-09 11:04:33.458641] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:13.465 [2024-10-09 11:04:33.458835] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:13.725 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:13.725 malloc0 00:24:13.725 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:13.985 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.6zbeuGWMSQ 00:24:13.985 [2024-10-09 
11:04:33.952643] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.6zbeuGWMSQ': 0100666 00:24:13.985 [2024-10-09 11:04:33.952668] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:24:13.985 request: 00:24:13.985 { 00:24:13.985 "name": "key0", 00:24:13.985 "path": "/tmp/tmp.6zbeuGWMSQ", 00:24:13.985 "method": "keyring_file_add_key", 00:24:13.985 "req_id": 1 00:24:13.985 } 00:24:13.985 Got JSON-RPC error response 00:24:13.985 response: 00:24:13.985 { 00:24:13.985 "code": -1, 00:24:13.985 "message": "Operation not permitted" 00:24:13.985 } 00:24:13.985 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:14.245 [2024-10-09 11:04:34.104679] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:24:14.245 [2024-10-09 11:04:34.104705] subsystem.c:1055:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:24:14.245 request: 00:24:14.245 { 00:24:14.245 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:14.245 "host": "nqn.2016-06.io.spdk:host1", 00:24:14.245 "psk": "key0", 00:24:14.245 "method": "nvmf_subsystem_add_host", 00:24:14.245 "req_id": 1 00:24:14.245 } 00:24:14.245 Got JSON-RPC error response 00:24:14.245 response: 00:24:14.245 { 00:24:14.245 "code": -32603, 00:24:14.245 "message": "Internal error" 00:24:14.245 } 00:24:14.245 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:24:14.245 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:14.245 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:14.245 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:14.245 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 1904307 00:24:14.245 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1904307 ']' 00:24:14.245 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1904307 00:24:14.245 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:14.245 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:14.245 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1904307 00:24:14.245 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:14.245 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:14.245 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1904307' 00:24:14.245 killing process with pid 1904307 00:24:14.245 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1904307 00:24:14.245 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1904307 00:24:14.506 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.6zbeuGWMSQ 00:24:14.506 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:24:14.506 11:04:34 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:14.506 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:14.506 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:14.506 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=1904864 00:24:14.506 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1904864 00:24:14.506 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:14.506 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1904864 ']' 00:24:14.506 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:14.506 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:14.506 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:14.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:14.506 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:14.506 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:14.506 [2024-10-09 11:04:34.356188] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:24:14.506 [2024-10-09 11:04:34.356247] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:14.506 [2024-10-09 11:04:34.493596] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:24:14.765 [2024-10-09 11:04:34.540963] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:14.765 [2024-10-09 11:04:34.562927] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:14.765 [2024-10-09 11:04:34.562968] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:14.765 [2024-10-09 11:04:34.562974] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:14.765 [2024-10-09 11:04:34.562980] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:14.765 [2024-10-09 11:04:34.562985] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
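Note the target-side failure mode in the run above: nvmf_subsystem_add_host surfaces only a generic -32603 "Internal error", because the earlier keyring_file_add_key had already failed and "key0" therefore does not exist when the host entry is created; the root cause remains the 0666 key file, corrected by the chmod 0600 at target/tls.sh@182 just before this restart. Condensed from the RPCs traced in this run (rpc.py abbreviates the full scripts/rpc.py path), the target-side TLS bring-up order is:

    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k requests a TLS listener
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py keyring_file_add_key key0 /tmp/tmp.6zbeuGWMSQ    # only succeeds once the key file is mode 0600
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0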
00:24:14.765 [2024-10-09 11:04:34.563597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:15.335 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:15.335 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:15.335 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:15.335 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:15.335 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:15.335 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:15.335 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.6zbeuGWMSQ 00:24:15.335 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.6zbeuGWMSQ 00:24:15.335 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:15.595 [2024-10-09 11:04:35.358950] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:15.595 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:15.595 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:15.854 [2024-10-09 11:04:35.682983] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:15.854 [2024-10-09 11:04:35.683156] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:15.854 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:15.854 malloc0 00:24:16.113 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:16.113 11:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.6zbeuGWMSQ 00:24:16.373 11:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:16.373 11:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=1905344 00:24:16.373 11:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:16.373 11:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:16.373 11:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 1905344 /var/tmp/bdevperf.sock 00:24:16.373 11:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@831 -- # '[' -z 1905344 ']' 00:24:16.373 11:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:16.373 11:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:16.373 11:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:16.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:16.373 11:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:16.373 11:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:16.632 [2024-10-09 11:04:36.385423] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:24:16.632 [2024-10-09 11:04:36.385487] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1905344 ] 00:24:16.632 [2024-10-09 11:04:36.514946] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:24:16.632 [2024-10-09 11:04:36.537379] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:16.632 [2024-10-09 11:04:36.553399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:17.201 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:17.201 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:17.201 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.6zbeuGWMSQ 00:24:17.461 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:17.721 [2024-10-09 11:04:37.524340] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:17.721 TLSTESTn1 00:24:17.721 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:24:17.982 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:24:17.982 "subsystems": [ 00:24:17.982 { 00:24:17.982 "subsystem": "keyring", 00:24:17.982 "config": [ 00:24:17.982 { 00:24:17.982 "method": "keyring_file_add_key", 00:24:17.982 "params": { 00:24:17.982 "name": "key0", 00:24:17.982 "path": "/tmp/tmp.6zbeuGWMSQ" 00:24:17.982 } 00:24:17.982 } 00:24:17.982 ] 00:24:17.982 }, 00:24:17.982 { 00:24:17.982 "subsystem": "iobuf", 00:24:17.982 "config": [ 00:24:17.982 { 00:24:17.982 "method": "iobuf_set_options", 00:24:17.982 "params": { 00:24:17.982 "small_pool_count": 8192, 00:24:17.982 "large_pool_count": 1024, 00:24:17.982 "small_bufsize": 8192, 00:24:17.982 "large_bufsize": 135168 00:24:17.982 } 00:24:17.982 } 00:24:17.982 ] 00:24:17.982 }, 00:24:17.982 { 00:24:17.982 "subsystem": 
"sock", 00:24:17.982 "config": [ 00:24:17.982 { 00:24:17.982 "method": "sock_set_default_impl", 00:24:17.982 "params": { 00:24:17.982 "impl_name": "posix" 00:24:17.982 } 00:24:17.982 }, 00:24:17.982 { 00:24:17.982 "method": "sock_impl_set_options", 00:24:17.982 "params": { 00:24:17.982 "impl_name": "ssl", 00:24:17.982 "recv_buf_size": 4096, 00:24:17.982 "send_buf_size": 4096, 00:24:17.982 "enable_recv_pipe": true, 00:24:17.982 "enable_quickack": false, 00:24:17.982 "enable_placement_id": 0, 00:24:17.982 "enable_zerocopy_send_server": true, 00:24:17.982 "enable_zerocopy_send_client": false, 00:24:17.982 "zerocopy_threshold": 0, 00:24:17.982 "tls_version": 0, 00:24:17.982 "enable_ktls": false 00:24:17.982 } 00:24:17.982 }, 00:24:17.982 { 00:24:17.982 "method": "sock_impl_set_options", 00:24:17.982 "params": { 00:24:17.982 "impl_name": "posix", 00:24:17.982 "recv_buf_size": 2097152, 00:24:17.982 "send_buf_size": 2097152, 00:24:17.982 "enable_recv_pipe": true, 00:24:17.982 "enable_quickack": false, 00:24:17.982 "enable_placement_id": 0, 00:24:17.982 "enable_zerocopy_send_server": true, 00:24:17.982 "enable_zerocopy_send_client": false, 00:24:17.982 "zerocopy_threshold": 0, 00:24:17.982 "tls_version": 0, 00:24:17.982 "enable_ktls": false 00:24:17.982 } 00:24:17.982 } 00:24:17.982 ] 00:24:17.982 }, 00:24:17.982 { 00:24:17.982 "subsystem": "vmd", 00:24:17.982 "config": [] 00:24:17.982 }, 00:24:17.982 { 00:24:17.982 "subsystem": "accel", 00:24:17.982 "config": [ 00:24:17.982 { 00:24:17.982 "method": "accel_set_options", 00:24:17.982 "params": { 00:24:17.982 "small_cache_size": 128, 00:24:17.982 "large_cache_size": 16, 00:24:17.982 "task_count": 2048, 00:24:17.982 "sequence_count": 2048, 00:24:17.982 "buf_count": 2048 00:24:17.982 } 00:24:17.982 } 00:24:17.982 ] 00:24:17.982 }, 00:24:17.982 { 00:24:17.982 "subsystem": "bdev", 00:24:17.982 "config": [ 00:24:17.982 { 00:24:17.982 "method": "bdev_set_options", 00:24:17.982 "params": { 00:24:17.982 "bdev_io_pool_size": 65535, 00:24:17.982 "bdev_io_cache_size": 256, 00:24:17.982 "bdev_auto_examine": true, 00:24:17.982 "iobuf_small_cache_size": 128, 00:24:17.982 "iobuf_large_cache_size": 16 00:24:17.982 } 00:24:17.982 }, 00:24:17.982 { 00:24:17.982 "method": "bdev_raid_set_options", 00:24:17.982 "params": { 00:24:17.982 "process_window_size_kb": 1024, 00:24:17.982 "process_max_bandwidth_mb_sec": 0 00:24:17.982 } 00:24:17.982 }, 00:24:17.982 { 00:24:17.982 "method": "bdev_iscsi_set_options", 00:24:17.982 "params": { 00:24:17.982 "timeout_sec": 30 00:24:17.982 } 00:24:17.982 }, 00:24:17.982 { 00:24:17.982 "method": "bdev_nvme_set_options", 00:24:17.982 "params": { 00:24:17.982 "action_on_timeout": "none", 00:24:17.982 "timeout_us": 0, 00:24:17.982 "timeout_admin_us": 0, 00:24:17.982 "keep_alive_timeout_ms": 10000, 00:24:17.982 "arbitration_burst": 0, 00:24:17.982 "low_priority_weight": 0, 00:24:17.982 "medium_priority_weight": 0, 00:24:17.982 "high_priority_weight": 0, 00:24:17.982 "nvme_adminq_poll_period_us": 10000, 00:24:17.982 "nvme_ioq_poll_period_us": 0, 00:24:17.982 "io_queue_requests": 0, 00:24:17.982 "delay_cmd_submit": true, 00:24:17.982 "transport_retry_count": 4, 00:24:17.982 "bdev_retry_count": 3, 00:24:17.982 "transport_ack_timeout": 0, 00:24:17.982 "ctrlr_loss_timeout_sec": 0, 00:24:17.982 "reconnect_delay_sec": 0, 00:24:17.982 "fast_io_fail_timeout_sec": 0, 00:24:17.982 "disable_auto_failback": false, 00:24:17.982 "generate_uuids": false, 00:24:17.982 "transport_tos": 0, 00:24:17.982 "nvme_error_stat": false, 00:24:17.982 "rdma_srq_size": 
0, 00:24:17.982 "io_path_stat": false, 00:24:17.982 "allow_accel_sequence": false, 00:24:17.982 "rdma_max_cq_size": 0, 00:24:17.982 "rdma_cm_event_timeout_ms": 0, 00:24:17.982 "dhchap_digests": [ 00:24:17.982 "sha256", 00:24:17.982 "sha384", 00:24:17.982 "sha512" 00:24:17.982 ], 00:24:17.982 "dhchap_dhgroups": [ 00:24:17.982 "null", 00:24:17.982 "ffdhe2048", 00:24:17.982 "ffdhe3072", 00:24:17.982 "ffdhe4096", 00:24:17.982 "ffdhe6144", 00:24:17.982 "ffdhe8192" 00:24:17.982 ] 00:24:17.982 } 00:24:17.982 }, 00:24:17.982 { 00:24:17.982 "method": "bdev_nvme_set_hotplug", 00:24:17.982 "params": { 00:24:17.982 "period_us": 100000, 00:24:17.982 "enable": false 00:24:17.982 } 00:24:17.982 }, 00:24:17.982 { 00:24:17.982 "method": "bdev_malloc_create", 00:24:17.982 "params": { 00:24:17.982 "name": "malloc0", 00:24:17.982 "num_blocks": 8192, 00:24:17.982 "block_size": 4096, 00:24:17.982 "physical_block_size": 4096, 00:24:17.982 "uuid": "530eb30f-6fc3-4ffe-9eec-6b0771176bfe", 00:24:17.982 "optimal_io_boundary": 0, 00:24:17.982 "md_size": 0, 00:24:17.982 "dif_type": 0, 00:24:17.982 "dif_is_head_of_md": false, 00:24:17.982 "dif_pi_format": 0 00:24:17.982 } 00:24:17.982 }, 00:24:17.982 { 00:24:17.982 "method": "bdev_wait_for_examine" 00:24:17.982 } 00:24:17.982 ] 00:24:17.982 }, 00:24:17.982 { 00:24:17.982 "subsystem": "nbd", 00:24:17.982 "config": [] 00:24:17.982 }, 00:24:17.982 { 00:24:17.982 "subsystem": "scheduler", 00:24:17.982 "config": [ 00:24:17.982 { 00:24:17.982 "method": "framework_set_scheduler", 00:24:17.982 "params": { 00:24:17.982 "name": "static" 00:24:17.982 } 00:24:17.982 } 00:24:17.982 ] 00:24:17.982 }, 00:24:17.982 { 00:24:17.982 "subsystem": "nvmf", 00:24:17.982 "config": [ 00:24:17.982 { 00:24:17.982 "method": "nvmf_set_config", 00:24:17.982 "params": { 00:24:17.982 "discovery_filter": "match_any", 00:24:17.982 "admin_cmd_passthru": { 00:24:17.982 "identify_ctrlr": false 00:24:17.983 }, 00:24:17.983 "dhchap_digests": [ 00:24:17.983 "sha256", 00:24:17.983 "sha384", 00:24:17.983 "sha512" 00:24:17.983 ], 00:24:17.983 "dhchap_dhgroups": [ 00:24:17.983 "null", 00:24:17.983 "ffdhe2048", 00:24:17.983 "ffdhe3072", 00:24:17.983 "ffdhe4096", 00:24:17.983 "ffdhe6144", 00:24:17.983 "ffdhe8192" 00:24:17.983 ] 00:24:17.983 } 00:24:17.983 }, 00:24:17.983 { 00:24:17.983 "method": "nvmf_set_max_subsystems", 00:24:17.983 "params": { 00:24:17.983 "max_subsystems": 1024 00:24:17.983 } 00:24:17.983 }, 00:24:17.983 { 00:24:17.983 "method": "nvmf_set_crdt", 00:24:17.983 "params": { 00:24:17.983 "crdt1": 0, 00:24:17.983 "crdt2": 0, 00:24:17.983 "crdt3": 0 00:24:17.983 } 00:24:17.983 }, 00:24:17.983 { 00:24:17.983 "method": "nvmf_create_transport", 00:24:17.983 "params": { 00:24:17.983 "trtype": "TCP", 00:24:17.983 "max_queue_depth": 128, 00:24:17.983 "max_io_qpairs_per_ctrlr": 127, 00:24:17.983 "in_capsule_data_size": 4096, 00:24:17.983 "max_io_size": 131072, 00:24:17.983 "io_unit_size": 131072, 00:24:17.983 "max_aq_depth": 128, 00:24:17.983 "num_shared_buffers": 511, 00:24:17.983 "buf_cache_size": 4294967295, 00:24:17.983 "dif_insert_or_strip": false, 00:24:17.983 "zcopy": false, 00:24:17.983 "c2h_success": false, 00:24:17.983 "sock_priority": 0, 00:24:17.983 "abort_timeout_sec": 1, 00:24:17.983 "ack_timeout": 0, 00:24:17.983 "data_wr_pool_size": 0 00:24:17.983 } 00:24:17.983 }, 00:24:17.983 { 00:24:17.983 "method": "nvmf_create_subsystem", 00:24:17.983 "params": { 00:24:17.983 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:17.983 "allow_any_host": false, 00:24:17.983 "serial_number": "SPDK00000000000001", 
00:24:17.983 "model_number": "SPDK bdev Controller", 00:24:17.983 "max_namespaces": 10, 00:24:17.983 "min_cntlid": 1, 00:24:17.983 "max_cntlid": 65519, 00:24:17.983 "ana_reporting": false 00:24:17.983 } 00:24:17.983 }, 00:24:17.983 { 00:24:17.983 "method": "nvmf_subsystem_add_host", 00:24:17.983 "params": { 00:24:17.983 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:17.983 "host": "nqn.2016-06.io.spdk:host1", 00:24:17.983 "psk": "key0" 00:24:17.983 } 00:24:17.983 }, 00:24:17.983 { 00:24:17.983 "method": "nvmf_subsystem_add_ns", 00:24:17.983 "params": { 00:24:17.983 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:17.983 "namespace": { 00:24:17.983 "nsid": 1, 00:24:17.983 "bdev_name": "malloc0", 00:24:17.983 "nguid": "530EB30F6FC34FFE9EEC6B0771176BFE", 00:24:17.983 "uuid": "530eb30f-6fc3-4ffe-9eec-6b0771176bfe", 00:24:17.983 "no_auto_visible": false 00:24:17.983 } 00:24:17.983 } 00:24:17.983 }, 00:24:17.983 { 00:24:17.983 "method": "nvmf_subsystem_add_listener", 00:24:17.983 "params": { 00:24:17.983 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:17.983 "listen_address": { 00:24:17.983 "trtype": "TCP", 00:24:17.983 "adrfam": "IPv4", 00:24:17.983 "traddr": "10.0.0.2", 00:24:17.983 "trsvcid": "4420" 00:24:17.983 }, 00:24:17.983 "secure_channel": true 00:24:17.983 } 00:24:17.983 } 00:24:17.983 ] 00:24:17.983 } 00:24:17.983 ] 00:24:17.983 }' 00:24:17.983 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:18.244 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:24:18.244 "subsystems": [ 00:24:18.244 { 00:24:18.244 "subsystem": "keyring", 00:24:18.244 "config": [ 00:24:18.244 { 00:24:18.244 "method": "keyring_file_add_key", 00:24:18.244 "params": { 00:24:18.244 "name": "key0", 00:24:18.244 "path": "/tmp/tmp.6zbeuGWMSQ" 00:24:18.244 } 00:24:18.244 } 00:24:18.244 ] 00:24:18.244 }, 00:24:18.244 { 00:24:18.244 "subsystem": "iobuf", 00:24:18.244 "config": [ 00:24:18.244 { 00:24:18.244 "method": "iobuf_set_options", 00:24:18.244 "params": { 00:24:18.244 "small_pool_count": 8192, 00:24:18.244 "large_pool_count": 1024, 00:24:18.244 "small_bufsize": 8192, 00:24:18.244 "large_bufsize": 135168 00:24:18.244 } 00:24:18.244 } 00:24:18.244 ] 00:24:18.244 }, 00:24:18.244 { 00:24:18.244 "subsystem": "sock", 00:24:18.244 "config": [ 00:24:18.244 { 00:24:18.244 "method": "sock_set_default_impl", 00:24:18.244 "params": { 00:24:18.244 "impl_name": "posix" 00:24:18.244 } 00:24:18.244 }, 00:24:18.244 { 00:24:18.244 "method": "sock_impl_set_options", 00:24:18.244 "params": { 00:24:18.244 "impl_name": "ssl", 00:24:18.244 "recv_buf_size": 4096, 00:24:18.244 "send_buf_size": 4096, 00:24:18.244 "enable_recv_pipe": true, 00:24:18.244 "enable_quickack": false, 00:24:18.244 "enable_placement_id": 0, 00:24:18.244 "enable_zerocopy_send_server": true, 00:24:18.244 "enable_zerocopy_send_client": false, 00:24:18.244 "zerocopy_threshold": 0, 00:24:18.244 "tls_version": 0, 00:24:18.244 "enable_ktls": false 00:24:18.244 } 00:24:18.244 }, 00:24:18.244 { 00:24:18.244 "method": "sock_impl_set_options", 00:24:18.244 "params": { 00:24:18.244 "impl_name": "posix", 00:24:18.244 "recv_buf_size": 2097152, 00:24:18.244 "send_buf_size": 2097152, 00:24:18.244 "enable_recv_pipe": true, 00:24:18.244 "enable_quickack": false, 00:24:18.244 "enable_placement_id": 0, 00:24:18.244 "enable_zerocopy_send_server": true, 00:24:18.244 "enable_zerocopy_send_client": false, 00:24:18.244 "zerocopy_threshold": 
0, 00:24:18.244 "tls_version": 0, 00:24:18.244 "enable_ktls": false 00:24:18.244 } 00:24:18.244 } 00:24:18.244 ] 00:24:18.244 }, 00:24:18.244 { 00:24:18.244 "subsystem": "vmd", 00:24:18.244 "config": [] 00:24:18.244 }, 00:24:18.244 { 00:24:18.244 "subsystem": "accel", 00:24:18.244 "config": [ 00:24:18.244 { 00:24:18.244 "method": "accel_set_options", 00:24:18.244 "params": { 00:24:18.244 "small_cache_size": 128, 00:24:18.244 "large_cache_size": 16, 00:24:18.244 "task_count": 2048, 00:24:18.244 "sequence_count": 2048, 00:24:18.244 "buf_count": 2048 00:24:18.244 } 00:24:18.244 } 00:24:18.244 ] 00:24:18.244 }, 00:24:18.244 { 00:24:18.244 "subsystem": "bdev", 00:24:18.244 "config": [ 00:24:18.244 { 00:24:18.244 "method": "bdev_set_options", 00:24:18.244 "params": { 00:24:18.245 "bdev_io_pool_size": 65535, 00:24:18.245 "bdev_io_cache_size": 256, 00:24:18.245 "bdev_auto_examine": true, 00:24:18.245 "iobuf_small_cache_size": 128, 00:24:18.245 "iobuf_large_cache_size": 16 00:24:18.245 } 00:24:18.245 }, 00:24:18.245 { 00:24:18.245 "method": "bdev_raid_set_options", 00:24:18.245 "params": { 00:24:18.245 "process_window_size_kb": 1024, 00:24:18.245 "process_max_bandwidth_mb_sec": 0 00:24:18.245 } 00:24:18.245 }, 00:24:18.245 { 00:24:18.245 "method": "bdev_iscsi_set_options", 00:24:18.245 "params": { 00:24:18.245 "timeout_sec": 30 00:24:18.245 } 00:24:18.245 }, 00:24:18.245 { 00:24:18.245 "method": "bdev_nvme_set_options", 00:24:18.245 "params": { 00:24:18.245 "action_on_timeout": "none", 00:24:18.245 "timeout_us": 0, 00:24:18.245 "timeout_admin_us": 0, 00:24:18.245 "keep_alive_timeout_ms": 10000, 00:24:18.245 "arbitration_burst": 0, 00:24:18.245 "low_priority_weight": 0, 00:24:18.245 "medium_priority_weight": 0, 00:24:18.245 "high_priority_weight": 0, 00:24:18.245 "nvme_adminq_poll_period_us": 10000, 00:24:18.245 "nvme_ioq_poll_period_us": 0, 00:24:18.245 "io_queue_requests": 512, 00:24:18.245 "delay_cmd_submit": true, 00:24:18.245 "transport_retry_count": 4, 00:24:18.245 "bdev_retry_count": 3, 00:24:18.245 "transport_ack_timeout": 0, 00:24:18.245 "ctrlr_loss_timeout_sec": 0, 00:24:18.245 "reconnect_delay_sec": 0, 00:24:18.245 "fast_io_fail_timeout_sec": 0, 00:24:18.245 "disable_auto_failback": false, 00:24:18.245 "generate_uuids": false, 00:24:18.245 "transport_tos": 0, 00:24:18.245 "nvme_error_stat": false, 00:24:18.245 "rdma_srq_size": 0, 00:24:18.245 "io_path_stat": false, 00:24:18.245 "allow_accel_sequence": false, 00:24:18.245 "rdma_max_cq_size": 0, 00:24:18.245 "rdma_cm_event_timeout_ms": 0, 00:24:18.245 "dhchap_digests": [ 00:24:18.245 "sha256", 00:24:18.245 "sha384", 00:24:18.245 "sha512" 00:24:18.245 ], 00:24:18.245 "dhchap_dhgroups": [ 00:24:18.245 "null", 00:24:18.245 "ffdhe2048", 00:24:18.245 "ffdhe3072", 00:24:18.245 "ffdhe4096", 00:24:18.245 "ffdhe6144", 00:24:18.245 "ffdhe8192" 00:24:18.245 ] 00:24:18.245 } 00:24:18.245 }, 00:24:18.245 { 00:24:18.245 "method": "bdev_nvme_attach_controller", 00:24:18.245 "params": { 00:24:18.245 "name": "TLSTEST", 00:24:18.245 "trtype": "TCP", 00:24:18.245 "adrfam": "IPv4", 00:24:18.245 "traddr": "10.0.0.2", 00:24:18.245 "trsvcid": "4420", 00:24:18.245 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:18.245 "prchk_reftag": false, 00:24:18.245 "prchk_guard": false, 00:24:18.245 "ctrlr_loss_timeout_sec": 0, 00:24:18.245 "reconnect_delay_sec": 0, 00:24:18.245 "fast_io_fail_timeout_sec": 0, 00:24:18.245 "psk": "key0", 00:24:18.245 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:18.245 "hdgst": false, 00:24:18.245 "ddgst": false, 00:24:18.245 "multipath": 
"multipath" 00:24:18.245 } 00:24:18.245 }, 00:24:18.245 { 00:24:18.245 "method": "bdev_nvme_set_hotplug", 00:24:18.245 "params": { 00:24:18.245 "period_us": 100000, 00:24:18.245 "enable": false 00:24:18.245 } 00:24:18.245 }, 00:24:18.245 { 00:24:18.245 "method": "bdev_wait_for_examine" 00:24:18.245 } 00:24:18.245 ] 00:24:18.245 }, 00:24:18.245 { 00:24:18.245 "subsystem": "nbd", 00:24:18.245 "config": [] 00:24:18.245 } 00:24:18.245 ] 00:24:18.245 }' 00:24:18.245 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 1905344 00:24:18.245 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1905344 ']' 00:24:18.245 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1905344 00:24:18.245 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:18.245 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:18.245 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1905344 00:24:18.245 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:18.245 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:24:18.245 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1905344' 00:24:18.245 killing process with pid 1905344 00:24:18.245 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1905344 00:24:18.245 Received shutdown signal, test time was about 10.000000 seconds 00:24:18.245 00:24:18.245 Latency(us) 00:24:18.245 [2024-10-09T09:04:38.247Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:18.245 [2024-10-09T09:04:38.247Z] =================================================================================================================== 00:24:18.245 [2024-10-09T09:04:38.247Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:18.245 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1905344 00:24:18.507 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 1904864 00:24:18.507 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1904864 ']' 00:24:18.507 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1904864 00:24:18.507 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:18.507 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:18.507 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1904864 00:24:18.507 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:18.507 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:18.507 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1904864' 00:24:18.507 killing process with pid 1904864 00:24:18.507 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1904864 00:24:18.507 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@974 -- # wait 1904864 00:24:18.507 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:24:18.507 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:18.507 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:18.507 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:18.507 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:24:18.507 "subsystems": [ 00:24:18.507 { 00:24:18.507 "subsystem": "keyring", 00:24:18.507 "config": [ 00:24:18.507 { 00:24:18.507 "method": "keyring_file_add_key", 00:24:18.507 "params": { 00:24:18.507 "name": "key0", 00:24:18.507 "path": "/tmp/tmp.6zbeuGWMSQ" 00:24:18.507 } 00:24:18.507 } 00:24:18.507 ] 00:24:18.507 }, 00:24:18.507 { 00:24:18.507 "subsystem": "iobuf", 00:24:18.507 "config": [ 00:24:18.507 { 00:24:18.507 "method": "iobuf_set_options", 00:24:18.507 "params": { 00:24:18.507 "small_pool_count": 8192, 00:24:18.507 "large_pool_count": 1024, 00:24:18.507 "small_bufsize": 8192, 00:24:18.507 "large_bufsize": 135168 00:24:18.507 } 00:24:18.507 } 00:24:18.507 ] 00:24:18.507 }, 00:24:18.507 { 00:24:18.507 "subsystem": "sock", 00:24:18.507 "config": [ 00:24:18.507 { 00:24:18.507 "method": "sock_set_default_impl", 00:24:18.507 "params": { 00:24:18.507 "impl_name": "posix" 00:24:18.507 } 00:24:18.507 }, 00:24:18.507 { 00:24:18.507 "method": "sock_impl_set_options", 00:24:18.507 "params": { 00:24:18.507 "impl_name": "ssl", 00:24:18.507 "recv_buf_size": 4096, 00:24:18.507 "send_buf_size": 4096, 00:24:18.507 "enable_recv_pipe": true, 00:24:18.507 "enable_quickack": false, 00:24:18.507 "enable_placement_id": 0, 00:24:18.507 "enable_zerocopy_send_server": true, 00:24:18.507 "enable_zerocopy_send_client": false, 00:24:18.507 "zerocopy_threshold": 0, 00:24:18.507 "tls_version": 0, 00:24:18.507 "enable_ktls": false 00:24:18.507 } 00:24:18.507 }, 00:24:18.507 { 00:24:18.507 "method": "sock_impl_set_options", 00:24:18.507 "params": { 00:24:18.507 "impl_name": "posix", 00:24:18.507 "recv_buf_size": 2097152, 00:24:18.507 "send_buf_size": 2097152, 00:24:18.507 "enable_recv_pipe": true, 00:24:18.507 "enable_quickack": false, 00:24:18.507 "enable_placement_id": 0, 00:24:18.507 "enable_zerocopy_send_server": true, 00:24:18.507 "enable_zerocopy_send_client": false, 00:24:18.507 "zerocopy_threshold": 0, 00:24:18.507 "tls_version": 0, 00:24:18.507 "enable_ktls": false 00:24:18.507 } 00:24:18.507 } 00:24:18.507 ] 00:24:18.507 }, 00:24:18.507 { 00:24:18.507 "subsystem": "vmd", 00:24:18.507 "config": [] 00:24:18.507 }, 00:24:18.507 { 00:24:18.507 "subsystem": "accel", 00:24:18.507 "config": [ 00:24:18.507 { 00:24:18.507 "method": "accel_set_options", 00:24:18.507 "params": { 00:24:18.507 "small_cache_size": 128, 00:24:18.507 "large_cache_size": 16, 00:24:18.507 "task_count": 2048, 00:24:18.507 "sequence_count": 2048, 00:24:18.507 "buf_count": 2048 00:24:18.507 } 00:24:18.507 } 00:24:18.507 ] 00:24:18.507 }, 00:24:18.507 { 00:24:18.507 "subsystem": "bdev", 00:24:18.507 "config": [ 00:24:18.507 { 00:24:18.507 "method": "bdev_set_options", 00:24:18.507 "params": { 00:24:18.507 "bdev_io_pool_size": 65535, 00:24:18.507 "bdev_io_cache_size": 256, 00:24:18.507 "bdev_auto_examine": true, 00:24:18.507 "iobuf_small_cache_size": 128, 00:24:18.507 "iobuf_large_cache_size": 16 00:24:18.507 } 00:24:18.507 }, 00:24:18.507 { 00:24:18.507 "method": 
"bdev_raid_set_options", 00:24:18.507 "params": { 00:24:18.507 "process_window_size_kb": 1024, 00:24:18.507 "process_max_bandwidth_mb_sec": 0 00:24:18.507 } 00:24:18.507 }, 00:24:18.507 { 00:24:18.507 "method": "bdev_iscsi_set_options", 00:24:18.507 "params": { 00:24:18.507 "timeout_sec": 30 00:24:18.507 } 00:24:18.507 }, 00:24:18.507 { 00:24:18.507 "method": "bdev_nvme_set_options", 00:24:18.507 "params": { 00:24:18.507 "action_on_timeout": "none", 00:24:18.507 "timeout_us": 0, 00:24:18.507 "timeout_admin_us": 0, 00:24:18.507 "keep_alive_timeout_ms": 10000, 00:24:18.507 "arbitration_burst": 0, 00:24:18.507 "low_priority_weight": 0, 00:24:18.507 "medium_priority_weight": 0, 00:24:18.507 "high_priority_weight": 0, 00:24:18.507 "nvme_adminq_poll_period_us": 10000, 00:24:18.507 "nvme_ioq_poll_period_us": 0, 00:24:18.507 "io_queue_requests": 0, 00:24:18.507 "delay_cmd_submit": true, 00:24:18.507 "transport_retry_count": 4, 00:24:18.507 "bdev_retry_count": 3, 00:24:18.507 "transport_ack_timeout": 0, 00:24:18.507 "ctrlr_loss_timeout_sec": 0, 00:24:18.507 "reconnect_delay_sec": 0, 00:24:18.507 "fast_io_fail_timeout_sec": 0, 00:24:18.507 "disable_auto_failback": false, 00:24:18.507 "generate_uuids": false, 00:24:18.507 "transport_tos": 0, 00:24:18.507 "nvme_error_stat": false, 00:24:18.507 "rdma_srq_size": 0, 00:24:18.507 "io_path_stat": false, 00:24:18.507 "allow_accel_sequence": false, 00:24:18.507 "rdma_max_cq_size": 0, 00:24:18.507 "rdma_cm_event_timeout_ms": 0, 00:24:18.507 "dhchap_digests": [ 00:24:18.507 "sha256", 00:24:18.507 "sha384", 00:24:18.507 "sha512" 00:24:18.507 ], 00:24:18.507 "dhchap_dhgroups": [ 00:24:18.507 "null", 00:24:18.507 "ffdhe2048", 00:24:18.507 "ffdhe3072", 00:24:18.507 "ffdhe4096", 00:24:18.507 "ffdhe6144", 00:24:18.507 "ffdhe8192" 00:24:18.507 ] 00:24:18.507 } 00:24:18.507 }, 00:24:18.507 { 00:24:18.507 "method": "bdev_nvme_set_hotplug", 00:24:18.507 "params": { 00:24:18.507 "period_us": 100000, 00:24:18.507 "enable": false 00:24:18.507 } 00:24:18.507 }, 00:24:18.507 { 00:24:18.507 "method": "bdev_malloc_create", 00:24:18.507 "params": { 00:24:18.507 "name": "malloc0", 00:24:18.507 "num_blocks": 8192, 00:24:18.507 "block_size": 4096, 00:24:18.507 "physical_block_size": 4096, 00:24:18.507 "uuid": "530eb30f-6fc3-4ffe-9eec-6b0771176bfe", 00:24:18.507 "optimal_io_boundary": 0, 00:24:18.507 "md_size": 0, 00:24:18.507 "dif_type": 0, 00:24:18.507 "dif_is_head_of_md": false, 00:24:18.507 "dif_pi_format": 0 00:24:18.507 } 00:24:18.507 }, 00:24:18.507 { 00:24:18.507 "method": "bdev_wait_for_examine" 00:24:18.507 } 00:24:18.507 ] 00:24:18.507 }, 00:24:18.507 { 00:24:18.507 "subsystem": "nbd", 00:24:18.507 "config": [] 00:24:18.507 }, 00:24:18.507 { 00:24:18.507 "subsystem": "scheduler", 00:24:18.507 "config": [ 00:24:18.507 { 00:24:18.507 "method": "framework_set_scheduler", 00:24:18.507 "params": { 00:24:18.507 "name": "static" 00:24:18.507 } 00:24:18.507 } 00:24:18.507 ] 00:24:18.507 }, 00:24:18.507 { 00:24:18.507 "subsystem": "nvmf", 00:24:18.507 "config": [ 00:24:18.507 { 00:24:18.507 "method": "nvmf_set_config", 00:24:18.507 "params": { 00:24:18.507 "discovery_filter": "match_any", 00:24:18.507 "admin_cmd_passthru": { 00:24:18.507 "identify_ctrlr": false 00:24:18.507 }, 00:24:18.507 "dhchap_digests": [ 00:24:18.507 "sha256", 00:24:18.507 "sha384", 00:24:18.507 "sha512" 00:24:18.508 ], 00:24:18.508 "dhchap_dhgroups": [ 00:24:18.508 "null", 00:24:18.508 "ffdhe2048", 00:24:18.508 "ffdhe3072", 00:24:18.508 "ffdhe4096", 00:24:18.508 "ffdhe6144", 00:24:18.508 "ffdhe8192" 
00:24:18.508 ] 00:24:18.508 } 00:24:18.508 }, 00:24:18.508 { 00:24:18.508 "method": "nvmf_set_max_subsystems", 00:24:18.508 "params": { 00:24:18.508 "max_subsystems": 1024 00:24:18.508 } 00:24:18.508 }, 00:24:18.508 { 00:24:18.508 "method": "nvmf_set_crdt", 00:24:18.508 "params": { 00:24:18.508 "crdt1": 0, 00:24:18.508 "crdt2": 0, 00:24:18.508 "crdt3": 0 00:24:18.508 } 00:24:18.508 }, 00:24:18.508 { 00:24:18.508 "method": "nvmf_create_transport", 00:24:18.508 "params": { 00:24:18.508 "trtype": "TCP", 00:24:18.508 "max_queue_depth": 128, 00:24:18.508 "max_io_qpairs_per_ctrlr": 127, 00:24:18.508 "in_capsule_data_size": 4096, 00:24:18.508 "max_io_size": 131072, 00:24:18.508 "io_unit_size": 131072, 00:24:18.508 "max_aq_depth": 128, 00:24:18.508 "num_shared_buffers": 511, 00:24:18.508 "buf_cache_size": 4294967295, 00:24:18.508 "dif_insert_or_strip": false, 00:24:18.508 "zcopy": false, 00:24:18.508 "c2h_success": false, 00:24:18.508 "sock_priority": 0, 00:24:18.508 "abort_timeout_sec": 1, 00:24:18.508 "ack_timeout": 0, 00:24:18.508 "data_wr_pool_size": 0 00:24:18.508 } 00:24:18.508 }, 00:24:18.508 { 00:24:18.508 "method": "nvmf_create_subsystem", 00:24:18.508 "params": { 00:24:18.508 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:18.508 "allow_any_host": false, 00:24:18.508 "serial_number": "SPDK00000000000001", 00:24:18.508 "model_number": "SPDK bdev Controller", 00:24:18.508 "max_namespaces": 10, 00:24:18.508 "min_cntlid": 1, 00:24:18.508 "max_cntlid": 65519, 00:24:18.508 "ana_reporting": false 00:24:18.508 } 00:24:18.508 }, 00:24:18.508 { 00:24:18.508 "method": "nvmf_subsystem_add_host", 00:24:18.508 "params": { 00:24:18.508 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:18.508 "host": "nqn.2016-06.io.spdk:host1", 00:24:18.508 "psk": "key0" 00:24:18.508 } 00:24:18.508 }, 00:24:18.508 { 00:24:18.508 "method": "nvmf_subsystem_add_ns", 00:24:18.508 "params": { 00:24:18.508 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:18.508 "namespace": { 00:24:18.508 "nsid": 1, 00:24:18.508 "bdev_name": "malloc0", 00:24:18.508 "nguid": "530EB30F6FC34FFE9EEC6B0771176BFE", 00:24:18.508 "uuid": "530eb30f-6fc3-4ffe-9eec-6b0771176bfe", 00:24:18.508 "no_auto_visible": false 00:24:18.508 } 00:24:18.508 } 00:24:18.508 }, 00:24:18.508 { 00:24:18.508 "method": "nvmf_subsystem_add_listener", 00:24:18.508 "params": { 00:24:18.508 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:18.508 "listen_address": { 00:24:18.508 "trtype": "TCP", 00:24:18.508 "adrfam": "IPv4", 00:24:18.508 "traddr": "10.0.0.2", 00:24:18.508 "trsvcid": "4420" 00:24:18.508 }, 00:24:18.508 "secure_channel": true 00:24:18.508 } 00:24:18.508 } 00:24:18.508 ] 00:24:18.508 } 00:24:18.508 ] 00:24:18.508 }' 00:24:18.508 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=1905704 00:24:18.508 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1905704 00:24:18.508 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:24:18.508 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1905704 ']' 00:24:18.508 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:18.508 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:18.508 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # 
echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:18.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:18.508 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:18.508 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:18.769 [2024-10-09 11:04:38.525744] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:24:18.769 [2024-10-09 11:04:38.525802] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:18.769 [2024-10-09 11:04:38.662114] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:24:18.769 [2024-10-09 11:04:38.707411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:18.769 [2024-10-09 11:04:38.722998] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:18.769 [2024-10-09 11:04:38.723026] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:18.769 [2024-10-09 11:04:38.723032] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:18.769 [2024-10-09 11:04:38.723037] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:18.769 [2024-10-09 11:04:38.723041] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:18.769 [2024-10-09 11:04:38.723529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:19.030 [2024-10-09 11:04:38.910087] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:19.030 [2024-10-09 11:04:38.942042] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:19.030 [2024-10-09 11:04:38.942245] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:19.601 11:04:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:19.601 11:04:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:19.601 11:04:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:19.601 11:04:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:19.601 11:04:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:19.601 11:04:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:19.601 11:04:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=1905895 00:24:19.601 11:04:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 1905895 /var/tmp/bdevperf.sock 00:24:19.601 11:04:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1905895 ']' 00:24:19.601 11:04:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:19.601 11:04:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 
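From this point the test stops issuing individual RPCs and instead restarts both applications from their saved JSON state: the save_config dumps captured above into $tgtconf and $bdevperfconf are fed back on startup through /dev/fd/62 and /dev/fd/63. A sketch of the pattern, assuming bash process substitution is what produces those descriptors (consistent with the echo '{...}' traces, though the helper internals are not shown in this log; rpc.py, nvmf_tgt, and bdevperf abbreviate the full paths used in the job):

    tgtconf=$(rpc.py save_config)                                  # serialize live target state to JSON
    bdevperfconf=$(rpc.py -s /var/tmp/bdevperf.sock save_config)   # same for the bdevperf app
    # assumed mechanism: <(...) appears to the application as /dev/fd/NN,
    # matching the -c /dev/fd/62 and -c /dev/fd/63 arguments traced in this run
    nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c <(echo "$tgtconf") &
    bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c <(echo "$bdevperfconf") &
    # -z keeps bdevperf idle until the workload is started over its RPC socket:
    bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests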
00:24:19.601 11:04:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:19.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:19.601 11:04:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:24:19.601 11:04:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:19.601 11:04:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:19.601 11:04:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:24:19.601 "subsystems": [ 00:24:19.601 { 00:24:19.601 "subsystem": "keyring", 00:24:19.601 "config": [ 00:24:19.601 { 00:24:19.601 "method": "keyring_file_add_key", 00:24:19.601 "params": { 00:24:19.601 "name": "key0", 00:24:19.601 "path": "/tmp/tmp.6zbeuGWMSQ" 00:24:19.601 } 00:24:19.601 } 00:24:19.601 ] 00:24:19.601 }, 00:24:19.601 { 00:24:19.601 "subsystem": "iobuf", 00:24:19.601 "config": [ 00:24:19.601 { 00:24:19.601 "method": "iobuf_set_options", 00:24:19.601 "params": { 00:24:19.601 "small_pool_count": 8192, 00:24:19.601 "large_pool_count": 1024, 00:24:19.601 "small_bufsize": 8192, 00:24:19.601 "large_bufsize": 135168 00:24:19.601 } 00:24:19.601 } 00:24:19.601 ] 00:24:19.601 }, 00:24:19.601 { 00:24:19.601 "subsystem": "sock", 00:24:19.601 "config": [ 00:24:19.601 { 00:24:19.601 "method": "sock_set_default_impl", 00:24:19.601 "params": { 00:24:19.601 "impl_name": "posix" 00:24:19.601 } 00:24:19.601 }, 00:24:19.601 { 00:24:19.601 "method": "sock_impl_set_options", 00:24:19.601 "params": { 00:24:19.601 "impl_name": "ssl", 00:24:19.601 "recv_buf_size": 4096, 00:24:19.601 "send_buf_size": 4096, 00:24:19.601 "enable_recv_pipe": true, 00:24:19.601 "enable_quickack": false, 00:24:19.601 "enable_placement_id": 0, 00:24:19.601 "enable_zerocopy_send_server": true, 00:24:19.601 "enable_zerocopy_send_client": false, 00:24:19.601 "zerocopy_threshold": 0, 00:24:19.601 "tls_version": 0, 00:24:19.601 "enable_ktls": false 00:24:19.601 } 00:24:19.601 }, 00:24:19.601 { 00:24:19.601 "method": "sock_impl_set_options", 00:24:19.601 "params": { 00:24:19.601 "impl_name": "posix", 00:24:19.601 "recv_buf_size": 2097152, 00:24:19.601 "send_buf_size": 2097152, 00:24:19.601 "enable_recv_pipe": true, 00:24:19.601 "enable_quickack": false, 00:24:19.601 "enable_placement_id": 0, 00:24:19.601 "enable_zerocopy_send_server": true, 00:24:19.601 "enable_zerocopy_send_client": false, 00:24:19.601 "zerocopy_threshold": 0, 00:24:19.601 "tls_version": 0, 00:24:19.601 "enable_ktls": false 00:24:19.601 } 00:24:19.601 } 00:24:19.601 ] 00:24:19.601 }, 00:24:19.601 { 00:24:19.601 "subsystem": "vmd", 00:24:19.601 "config": [] 00:24:19.601 }, 00:24:19.601 { 00:24:19.601 "subsystem": "accel", 00:24:19.601 "config": [ 00:24:19.601 { 00:24:19.601 "method": "accel_set_options", 00:24:19.601 "params": { 00:24:19.601 "small_cache_size": 128, 00:24:19.601 "large_cache_size": 16, 00:24:19.601 "task_count": 2048, 00:24:19.601 "sequence_count": 2048, 00:24:19.601 "buf_count": 2048 00:24:19.601 } 00:24:19.601 } 00:24:19.601 ] 00:24:19.601 }, 00:24:19.601 { 00:24:19.601 "subsystem": "bdev", 00:24:19.601 "config": [ 00:24:19.601 { 00:24:19.601 "method": "bdev_set_options", 00:24:19.601 "params": { 00:24:19.601 
"bdev_io_pool_size": 65535, 00:24:19.601 "bdev_io_cache_size": 256, 00:24:19.601 "bdev_auto_examine": true, 00:24:19.601 "iobuf_small_cache_size": 128, 00:24:19.601 "iobuf_large_cache_size": 16 00:24:19.601 } 00:24:19.601 }, 00:24:19.601 { 00:24:19.601 "method": "bdev_raid_set_options", 00:24:19.601 "params": { 00:24:19.601 "process_window_size_kb": 1024, 00:24:19.601 "process_max_bandwidth_mb_sec": 0 00:24:19.601 } 00:24:19.601 }, 00:24:19.601 { 00:24:19.601 "method": "bdev_iscsi_set_options", 00:24:19.601 "params": { 00:24:19.601 "timeout_sec": 30 00:24:19.601 } 00:24:19.601 }, 00:24:19.601 { 00:24:19.601 "method": "bdev_nvme_set_options", 00:24:19.601 "params": { 00:24:19.602 "action_on_timeout": "none", 00:24:19.602 "timeout_us": 0, 00:24:19.602 "timeout_admin_us": 0, 00:24:19.602 "keep_alive_timeout_ms": 10000, 00:24:19.602 "arbitration_burst": 0, 00:24:19.602 "low_priority_weight": 0, 00:24:19.602 "medium_priority_weight": 0, 00:24:19.602 "high_priority_weight": 0, 00:24:19.602 "nvme_adminq_poll_period_us": 10000, 00:24:19.602 "nvme_ioq_poll_period_us": 0, 00:24:19.602 "io_queue_requests": 512, 00:24:19.602 "delay_cmd_submit": true, 00:24:19.602 "transport_retry_count": 4, 00:24:19.602 "bdev_retry_count": 3, 00:24:19.602 "transport_ack_timeout": 0, 00:24:19.602 "ctrlr_loss_timeout_sec": 0, 00:24:19.602 "reconnect_delay_sec": 0, 00:24:19.602 "fast_io_fail_timeout_sec": 0, 00:24:19.602 "disable_auto_failback": false, 00:24:19.602 "generate_uuids": false, 00:24:19.602 "transport_tos": 0, 00:24:19.602 "nvme_error_stat": false, 00:24:19.602 "rdma_srq_size": 0, 00:24:19.602 "io_path_stat": false, 00:24:19.602 "allow_accel_sequence": false, 00:24:19.602 "rdma_max_cq_size": 0, 00:24:19.602 "rdma_cm_event_timeout_ms": 0, 00:24:19.602 "dhchap_digests": [ 00:24:19.602 "sha256", 00:24:19.602 "sha384", 00:24:19.602 "sha512" 00:24:19.602 ], 00:24:19.602 "dhchap_dhgroups": [ 00:24:19.602 "null", 00:24:19.602 "ffdhe2048", 00:24:19.602 "ffdhe3072", 00:24:19.602 "ffdhe4096", 00:24:19.602 "ffdhe6144", 00:24:19.602 "ffdhe8192" 00:24:19.602 ] 00:24:19.602 } 00:24:19.602 }, 00:24:19.602 { 00:24:19.602 "method": "bdev_nvme_attach_controller", 00:24:19.602 "params": { 00:24:19.602 "name": "TLSTEST", 00:24:19.602 "trtype": "TCP", 00:24:19.602 "adrfam": "IPv4", 00:24:19.602 "traddr": "10.0.0.2", 00:24:19.602 "trsvcid": "4420", 00:24:19.602 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:19.602 "prchk_reftag": false, 00:24:19.602 "prchk_guard": false, 00:24:19.602 "ctrlr_loss_timeout_sec": 0, 00:24:19.602 "reconnect_delay_sec": 0, 00:24:19.602 "fast_io_fail_timeout_sec": 0, 00:24:19.602 "psk": "key0", 00:24:19.602 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:19.602 "hdgst": false, 00:24:19.602 "ddgst": false, 00:24:19.602 "multipath": "multipath" 00:24:19.602 } 00:24:19.602 }, 00:24:19.602 { 00:24:19.602 "method": "bdev_nvme_set_hotplug", 00:24:19.602 "params": { 00:24:19.602 "period_us": 100000, 00:24:19.602 "enable": false 00:24:19.602 } 00:24:19.602 }, 00:24:19.602 { 00:24:19.602 "method": "bdev_wait_for_examine" 00:24:19.602 } 00:24:19.602 ] 00:24:19.602 }, 00:24:19.602 { 00:24:19.602 "subsystem": "nbd", 00:24:19.602 "config": [] 00:24:19.602 } 00:24:19.602 ] 00:24:19.602 }' 00:24:19.602 [2024-10-09 11:04:39.390287] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 
00:24:19.602 [2024-10-09 11:04:39.390341] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1905895 ] 00:24:19.602 [2024-10-09 11:04:39.520269] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:24:19.602 [2024-10-09 11:04:39.541985] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:19.602 [2024-10-09 11:04:39.558224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:19.862 [2024-10-09 11:04:39.686750] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:20.435 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:20.435 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:20.435 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:20.435 Running I/O for 10 seconds... 00:24:22.316 5879.00 IOPS, 22.96 MiB/s [2024-10-09T09:04:43.261Z] 5616.00 IOPS, 21.94 MiB/s [2024-10-09T09:04:44.655Z] 5559.00 IOPS, 21.71 MiB/s [2024-10-09T09:04:45.595Z] 5539.25 IOPS, 21.64 MiB/s [2024-10-09T09:04:46.535Z] 5707.00 IOPS, 22.29 MiB/s [2024-10-09T09:04:47.475Z] 5677.17 IOPS, 22.18 MiB/s [2024-10-09T09:04:48.418Z] 5753.00 IOPS, 22.47 MiB/s [2024-10-09T09:04:49.358Z] 5668.25 IOPS, 22.14 MiB/s [2024-10-09T09:04:50.302Z] 5654.78 IOPS, 22.09 MiB/s [2024-10-09T09:04:50.302Z] 5617.40 IOPS, 21.94 MiB/s 00:24:30.300 Latency(us) 00:24:30.300 [2024-10-09T09:04:50.302Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:30.300 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:30.300 Verification LBA range: start 0x0 length 0x2000 00:24:30.300 TLSTESTn1 : 10.02 5620.54 21.96 0.00 0.00 22738.94 4926.70 27917.94 00:24:30.300 [2024-10-09T09:04:50.302Z] =================================================================================================================== 00:24:30.300 [2024-10-09T09:04:50.302Z] Total : 5620.54 21.96 0.00 0.00 22738.94 4926.70 27917.94 00:24:30.300 { 00:24:30.300 "results": [ 00:24:30.300 { 00:24:30.300 "job": "TLSTESTn1", 00:24:30.300 "core_mask": "0x4", 00:24:30.300 "workload": "verify", 00:24:30.300 "status": "finished", 00:24:30.300 "verify_range": { 00:24:30.300 "start": 0, 00:24:30.300 "length": 8192 00:24:30.300 }, 00:24:30.300 "queue_depth": 128, 00:24:30.300 "io_size": 4096, 00:24:30.300 "runtime": 10.017008, 00:24:30.300 "iops": 5620.540584573757, 00:24:30.300 "mibps": 21.955236658491238, 00:24:30.300 "io_failed": 0, 00:24:30.300 "io_timeout": 0, 00:24:30.300 "avg_latency_us": 22738.940123237295, 00:24:30.300 "min_latency_us": 4926.695623120615, 00:24:30.300 "max_latency_us": 27917.94186435015 00:24:30.300 } 00:24:30.300 ], 00:24:30.300 "core_count": 1 00:24:30.300 } 00:24:30.300 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:30.300 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 1905895 00:24:30.300 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1905895 
']' 00:24:30.300 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1905895 00:24:30.300 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:30.300 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:30.300 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1905895 00:24:30.562 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:30.562 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:24:30.562 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1905895' 00:24:30.562 killing process with pid 1905895 00:24:30.562 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1905895 00:24:30.562 Received shutdown signal, test time was about 10.000000 seconds 00:24:30.562 00:24:30.562 Latency(us) 00:24:30.562 [2024-10-09T09:04:50.564Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:30.562 [2024-10-09T09:04:50.564Z] =================================================================================================================== 00:24:30.562 [2024-10-09T09:04:50.564Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:30.562 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1905895 00:24:30.562 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 1905704 00:24:30.562 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1905704 ']' 00:24:30.562 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1905704 00:24:30.562 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:30.562 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:30.562 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1905704 00:24:30.562 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:30.562 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:30.562 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1905704' 00:24:30.562 killing process with pid 1905704 00:24:30.562 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1905704 00:24:30.562 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1905704 00:24:30.823 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:24:30.823 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:30.823 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:30.823 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:30.823 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=1908089 00:24:30.823 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1908089 00:24:30.823 11:04:50 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:30.823 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1908089 ']' 00:24:30.824 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:30.824 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:30.824 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:30.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:30.824 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:30.824 11:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:30.824 [2024-10-09 11:04:50.679668] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:24:30.824 [2024-10-09 11:04:50.679731] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:30.824 [2024-10-09 11:04:50.815645] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:24:31.084 [2024-10-09 11:04:50.845891] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:31.084 [2024-10-09 11:04:50.861791] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:31.084 [2024-10-09 11:04:50.861818] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:31.084 [2024-10-09 11:04:50.861826] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:31.084 [2024-10-09 11:04:50.861833] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:31.084 [2024-10-09 11:04:50.861838] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
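nvmfappstart, as traced above, runs the target inside the cvl_0_0_ns_spdk network namespace and then blocks in waitforlisten until the UNIX-domain RPC socket answers. A minimal sketch of that launch-and-poll loop, assuming SPDK's stock rpc.py and its rpc_get_methods RPC (the traced helper caps the poll at max_retries=100):

    # Launch the target in the test namespace; -i 0 sets the shm id,
    # -e 0xFFFF enables all tracepoint groups, as in the trace above.
    sudo ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    nvmfpid=$!

    # Poll the RPC socket until the app responds (or we give up).
    rpc_addr=/var/tmp/spdk.sock
    for ((i = 0; i < 100; i++)); do
        ./scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &> /dev/null && break
        sleep 0.5
    done

The same pattern repeats for every nvmf_tgt and bdevperf instance in this run; only the socket path changes.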
00:24:31.084 [2024-10-09 11:04:50.862413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:31.656 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:31.656 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:31.656 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:31.656 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:31.656 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:31.656 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:31.656 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.6zbeuGWMSQ 00:24:31.656 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.6zbeuGWMSQ 00:24:31.656 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:31.917 [2024-10-09 11:04:51.674156] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:31.917 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:31.917 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:32.177 [2024-10-09 11:04:52.038219] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:32.177 [2024-10-09 11:04:52.038432] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:32.177 11:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:32.437 malloc0 00:24:32.437 11:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:32.437 11:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.6zbeuGWMSQ 00:24:32.697 11:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:32.958 11:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:32.958 11:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=1908457 00:24:32.958 11:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:32.958 11:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 1908457 /var/tmp/bdevperf.sock 00:24:32.958 11:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@831 -- # '[' -z 1908457 ']' 00:24:32.958 11:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:32.958 11:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:32.958 11:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:32.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:32.958 11:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:32.958 11:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:32.958 [2024-10-09 11:04:52.822440] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:24:32.958 [2024-10-09 11:04:52.822495] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1908457 ] 00:24:32.958 [2024-10-09 11:04:52.952573] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:24:33.218 [2024-10-09 11:04:52.999247] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:33.218 [2024-10-09 11:04:53.015768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:33.790 11:04:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:33.790 11:04:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:33.790 11:04:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.6zbeuGWMSQ 00:24:33.790 11:04:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:34.051 [2024-10-09 11:04:53.935882] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:34.051 nvme0n1 00:24:34.051 11:04:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:34.311 Running I/O for 1 seconds... 
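The RPC sequence just traced is the whole TLS test fixture in miniature: the target registers the PSK file under keyring name key0, opens its listener with -k (hence the "TLS support is considered experimental" notices), and ties the host NQN to that key; bdevperf then registers the same key against its own RPC socket and attaches with --psk key0. Collected into one sketch, with the paths and NQNs taken verbatim from the trace:

    # Target side (rpc.py defaults to /var/tmp/spdk.sock).
    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py keyring_file_add_key key0 /tmp/tmp.6zbeuGWMSQ
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

    # Initiator side: same key, then a TLS-protected attach over the bdevperf socket.
    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.6zbeuGWMSQ
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1

The one-second verify run whose results follow drives I/O through exactly this path.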
00:24:35.252 4072.00 IOPS, 15.91 MiB/s 00:24:35.252 Latency(us) 00:24:35.252 [2024-10-09T09:04:55.254Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:35.252 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:35.252 Verification LBA range: start 0x0 length 0x2000 00:24:35.252 nvme0n1 : 1.05 4003.41 15.64 0.00 0.00 31253.80 8046.94 46639.39 00:24:35.252 [2024-10-09T09:04:55.254Z] =================================================================================================================== 00:24:35.252 [2024-10-09T09:04:55.254Z] Total : 4003.41 15.64 0.00 0.00 31253.80 8046.94 46639.39 00:24:35.252 { 00:24:35.252 "results": [ 00:24:35.252 { 00:24:35.252 "job": "nvme0n1", 00:24:35.252 "core_mask": "0x2", 00:24:35.252 "workload": "verify", 00:24:35.252 "status": "finished", 00:24:35.252 "verify_range": { 00:24:35.252 "start": 0, 00:24:35.252 "length": 8192 00:24:35.252 }, 00:24:35.252 "queue_depth": 128, 00:24:35.252 "io_size": 4096, 00:24:35.252 "runtime": 1.049355, 00:24:35.252 "iops": 4003.4116195186566, 00:24:35.252 "mibps": 15.638326638744752, 00:24:35.252 "io_failed": 0, 00:24:35.252 "io_timeout": 0, 00:24:35.252 "avg_latency_us": 31253.804356479486, 00:24:35.252 "min_latency_us": 8046.936184430338, 00:24:35.252 "max_latency_us": 46639.385232208486 00:24:35.252 } 00:24:35.252 ], 00:24:35.252 "core_count": 1 00:24:35.252 } 00:24:35.252 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 1908457 00:24:35.252 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1908457 ']' 00:24:35.252 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1908457 00:24:35.252 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:35.252 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:35.252 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1908457 00:24:35.252 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:35.252 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:35.252 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1908457' 00:24:35.252 killing process with pid 1908457 00:24:35.252 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1908457 00:24:35.252 Received shutdown signal, test time was about 1.000000 seconds 00:24:35.252 00:24:35.252 Latency(us) 00:24:35.252 [2024-10-09T09:04:55.254Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:35.252 [2024-10-09T09:04:55.254Z] =================================================================================================================== 00:24:35.252 [2024-10-09T09:04:55.254Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:35.252 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1908457 00:24:35.512 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 1908089 00:24:35.512 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1908089 ']' 00:24:35.512 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1908089 00:24:35.512 11:04:55 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:35.512 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:35.512 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1908089 00:24:35.512 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:35.512 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:35.512 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1908089' 00:24:35.512 killing process with pid 1908089 00:24:35.512 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1908089 00:24:35.512 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1908089 00:24:35.512 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:24:35.512 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:35.512 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:35.512 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:35.773 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=1909129 00:24:35.773 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1909129 00:24:35.773 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:35.773 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1909129 ']' 00:24:35.773 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:35.773 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:35.773 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:35.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:35.773 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:35.773 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:35.773 [2024-10-09 11:04:55.569453] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:24:35.773 [2024-10-09 11:04:55.569512] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:35.773 [2024-10-09 11:04:55.705521] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:24:35.773 [2024-10-09 11:04:55.737120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:35.773 [2024-10-09 11:04:55.752471] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:24:35.773 [2024-10-09 11:04:55.752501] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:35.773 [2024-10-09 11:04:55.752508] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:35.773 [2024-10-09 11:04:55.752515] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:35.773 [2024-10-09 11:04:55.752520] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:35.773 [2024-10-09 11:04:55.753051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:36.713 11:04:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:36.713 11:04:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:36.713 11:04:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:36.713 11:04:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:36.713 11:04:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:36.713 11:04:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:36.713 11:04:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:24:36.714 11:04:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.714 11:04:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:36.714 [2024-10-09 11:04:56.400629] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:36.714 malloc0 00:24:36.714 [2024-10-09 11:04:56.427312] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:36.714 [2024-10-09 11:04:56.427527] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:36.714 11:04:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.714 11:04:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=1909186 00:24:36.714 11:04:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 1909186 /var/tmp/bdevperf.sock 00:24:36.714 11:04:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:36.714 11:04:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1909186 ']' 00:24:36.714 11:04:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:36.714 11:04:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:36.714 11:04:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:36.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
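Note the -z flag on the bdevperf invocation above: it starts the app idle, listening on its RPC socket instead of running a workload at once, so the test can feed in keys and controllers first. The driving pattern, with the parameters from this run:

    # Start bdevperf idle: core mask 0x2, queue depth 128, 4 KiB I/Os,
    # verify workload, 1 s duration, RPC socket at /var/tmp/bdevperf.sock.
    ./build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4k -w verify -t 1 &

    # ... register key0 and attach the controller over the socket ...

    # Only now trigger the configured workload and collect the JSON summary.
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests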
00:24:36.714 11:04:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:36.714 11:04:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:36.714 [2024-10-09 11:04:56.507508] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:24:36.714 [2024-10-09 11:04:56.507558] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1909186 ] 00:24:36.714 [2024-10-09 11:04:56.637747] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:24:36.714 [2024-10-09 11:04:56.683407] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:36.714 [2024-10-09 11:04:56.699808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:37.656 11:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:37.656 11:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:37.656 11:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.6zbeuGWMSQ 00:24:37.656 11:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:37.656 [2024-10-09 11:04:57.627949] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:37.916 nvme0n1 00:24:37.916 11:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:37.916 Running I/O for 1 seconds... 
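In the result blocks that perform_tests prints (one follows below), the MiB/s figure is derived rather than measured separately: it is iops x io_size / 2^20. The earlier run bears this out: 4003.41 IOPS x 4096 B is about 15.64 MiB/s, matching its reported "mibps" of 15.638. A one-liner to recompute it, assuming the result JSON has been captured to results.json and jq is available (neither is part of the test itself):

    # mibps = iops * io_size in bytes, expressed in MiB.
    jq '.results[0] | .iops * .io_size / (1024 * 1024)' results.json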
00:24:38.858 5856.00 IOPS, 22.88 MiB/s 00:24:38.858 Latency(us) 00:24:38.858 [2024-10-09T09:04:58.860Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:38.858 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:38.858 Verification LBA range: start 0x0 length 0x2000 00:24:38.858 nvme0n1 : 1.02 5874.79 22.95 0.00 0.00 21631.08 6897.37 26823.12 00:24:38.858 [2024-10-09T09:04:58.860Z] =================================================================================================================== 00:24:38.858 [2024-10-09T09:04:58.860Z] Total : 5874.79 22.95 0.00 0.00 21631.08 6897.37 26823.12 00:24:38.858 { 00:24:38.858 "results": [ 00:24:38.858 { 00:24:38.858 "job": "nvme0n1", 00:24:38.858 "core_mask": "0x2", 00:24:38.858 "workload": "verify", 00:24:38.858 "status": "finished", 00:24:38.858 "verify_range": { 00:24:38.858 "start": 0, 00:24:38.858 "length": 8192 00:24:38.858 }, 00:24:38.858 "queue_depth": 128, 00:24:38.858 "io_size": 4096, 00:24:38.858 "runtime": 1.01859, 00:24:38.858 "iops": 5874.787696718012, 00:24:38.858 "mibps": 22.948389440304734, 00:24:38.858 "io_failed": 0, 00:24:38.858 "io_timeout": 0, 00:24:38.858 "avg_latency_us": 21631.078577286396, 00:24:38.858 "min_latency_us": 6897.37387236886, 00:24:38.858 "max_latency_us": 26823.12061476779 00:24:38.858 } 00:24:38.858 ], 00:24:38.858 "core_count": 1 00:24:38.858 } 00:24:38.858 11:04:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:24:38.858 11:04:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.858 11:04:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:39.118 11:04:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.118 11:04:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:24:39.118 "subsystems": [ 00:24:39.118 { 00:24:39.118 "subsystem": "keyring", 00:24:39.118 "config": [ 00:24:39.118 { 00:24:39.118 "method": "keyring_file_add_key", 00:24:39.118 "params": { 00:24:39.118 "name": "key0", 00:24:39.118 "path": "/tmp/tmp.6zbeuGWMSQ" 00:24:39.118 } 00:24:39.118 } 00:24:39.118 ] 00:24:39.118 }, 00:24:39.118 { 00:24:39.118 "subsystem": "iobuf", 00:24:39.118 "config": [ 00:24:39.118 { 00:24:39.118 "method": "iobuf_set_options", 00:24:39.118 "params": { 00:24:39.118 "small_pool_count": 8192, 00:24:39.118 "large_pool_count": 1024, 00:24:39.118 "small_bufsize": 8192, 00:24:39.118 "large_bufsize": 135168 00:24:39.118 } 00:24:39.118 } 00:24:39.118 ] 00:24:39.118 }, 00:24:39.118 { 00:24:39.118 "subsystem": "sock", 00:24:39.118 "config": [ 00:24:39.118 { 00:24:39.118 "method": "sock_set_default_impl", 00:24:39.118 "params": { 00:24:39.118 "impl_name": "posix" 00:24:39.118 } 00:24:39.118 }, 00:24:39.118 { 00:24:39.118 "method": "sock_impl_set_options", 00:24:39.118 "params": { 00:24:39.118 "impl_name": "ssl", 00:24:39.118 "recv_buf_size": 4096, 00:24:39.118 "send_buf_size": 4096, 00:24:39.118 "enable_recv_pipe": true, 00:24:39.118 "enable_quickack": false, 00:24:39.118 "enable_placement_id": 0, 00:24:39.118 "enable_zerocopy_send_server": true, 00:24:39.118 "enable_zerocopy_send_client": false, 00:24:39.118 "zerocopy_threshold": 0, 00:24:39.118 "tls_version": 0, 00:24:39.118 "enable_ktls": false 00:24:39.118 } 00:24:39.118 }, 00:24:39.118 { 00:24:39.118 "method": "sock_impl_set_options", 00:24:39.118 "params": { 00:24:39.119 "impl_name": "posix", 00:24:39.119 "recv_buf_size": 
2097152, 00:24:39.119 "send_buf_size": 2097152, 00:24:39.119 "enable_recv_pipe": true, 00:24:39.119 "enable_quickack": false, 00:24:39.119 "enable_placement_id": 0, 00:24:39.119 "enable_zerocopy_send_server": true, 00:24:39.119 "enable_zerocopy_send_client": false, 00:24:39.119 "zerocopy_threshold": 0, 00:24:39.119 "tls_version": 0, 00:24:39.119 "enable_ktls": false 00:24:39.119 } 00:24:39.119 } 00:24:39.119 ] 00:24:39.119 }, 00:24:39.119 { 00:24:39.119 "subsystem": "vmd", 00:24:39.119 "config": [] 00:24:39.119 }, 00:24:39.119 { 00:24:39.119 "subsystem": "accel", 00:24:39.119 "config": [ 00:24:39.119 { 00:24:39.119 "method": "accel_set_options", 00:24:39.119 "params": { 00:24:39.119 "small_cache_size": 128, 00:24:39.119 "large_cache_size": 16, 00:24:39.119 "task_count": 2048, 00:24:39.119 "sequence_count": 2048, 00:24:39.119 "buf_count": 2048 00:24:39.119 } 00:24:39.119 } 00:24:39.119 ] 00:24:39.119 }, 00:24:39.119 { 00:24:39.119 "subsystem": "bdev", 00:24:39.119 "config": [ 00:24:39.119 { 00:24:39.119 "method": "bdev_set_options", 00:24:39.119 "params": { 00:24:39.119 "bdev_io_pool_size": 65535, 00:24:39.119 "bdev_io_cache_size": 256, 00:24:39.119 "bdev_auto_examine": true, 00:24:39.119 "iobuf_small_cache_size": 128, 00:24:39.119 "iobuf_large_cache_size": 16 00:24:39.119 } 00:24:39.119 }, 00:24:39.119 { 00:24:39.119 "method": "bdev_raid_set_options", 00:24:39.119 "params": { 00:24:39.119 "process_window_size_kb": 1024, 00:24:39.119 "process_max_bandwidth_mb_sec": 0 00:24:39.119 } 00:24:39.119 }, 00:24:39.119 { 00:24:39.119 "method": "bdev_iscsi_set_options", 00:24:39.119 "params": { 00:24:39.119 "timeout_sec": 30 00:24:39.119 } 00:24:39.119 }, 00:24:39.119 { 00:24:39.119 "method": "bdev_nvme_set_options", 00:24:39.119 "params": { 00:24:39.119 "action_on_timeout": "none", 00:24:39.119 "timeout_us": 0, 00:24:39.119 "timeout_admin_us": 0, 00:24:39.119 "keep_alive_timeout_ms": 10000, 00:24:39.119 "arbitration_burst": 0, 00:24:39.119 "low_priority_weight": 0, 00:24:39.119 "medium_priority_weight": 0, 00:24:39.119 "high_priority_weight": 0, 00:24:39.119 "nvme_adminq_poll_period_us": 10000, 00:24:39.119 "nvme_ioq_poll_period_us": 0, 00:24:39.119 "io_queue_requests": 0, 00:24:39.119 "delay_cmd_submit": true, 00:24:39.119 "transport_retry_count": 4, 00:24:39.119 "bdev_retry_count": 3, 00:24:39.119 "transport_ack_timeout": 0, 00:24:39.119 "ctrlr_loss_timeout_sec": 0, 00:24:39.119 "reconnect_delay_sec": 0, 00:24:39.119 "fast_io_fail_timeout_sec": 0, 00:24:39.119 "disable_auto_failback": false, 00:24:39.119 "generate_uuids": false, 00:24:39.119 "transport_tos": 0, 00:24:39.119 "nvme_error_stat": false, 00:24:39.119 "rdma_srq_size": 0, 00:24:39.119 "io_path_stat": false, 00:24:39.119 "allow_accel_sequence": false, 00:24:39.119 "rdma_max_cq_size": 0, 00:24:39.119 "rdma_cm_event_timeout_ms": 0, 00:24:39.119 "dhchap_digests": [ 00:24:39.119 "sha256", 00:24:39.119 "sha384", 00:24:39.119 "sha512" 00:24:39.119 ], 00:24:39.119 "dhchap_dhgroups": [ 00:24:39.119 "null", 00:24:39.119 "ffdhe2048", 00:24:39.119 "ffdhe3072", 00:24:39.119 "ffdhe4096", 00:24:39.119 "ffdhe6144", 00:24:39.119 "ffdhe8192" 00:24:39.119 ] 00:24:39.119 } 00:24:39.119 }, 00:24:39.119 { 00:24:39.119 "method": "bdev_nvme_set_hotplug", 00:24:39.119 "params": { 00:24:39.119 "period_us": 100000, 00:24:39.119 "enable": false 00:24:39.119 } 00:24:39.119 }, 00:24:39.119 { 00:24:39.119 "method": "bdev_malloc_create", 00:24:39.119 "params": { 00:24:39.119 "name": "malloc0", 00:24:39.119 "num_blocks": 8192, 00:24:39.119 "block_size": 4096, 
00:24:39.119 "physical_block_size": 4096, 00:24:39.119 "uuid": "1362c272-213b-4b03-837e-75b2bad86ecc", 00:24:39.119 "optimal_io_boundary": 0, 00:24:39.119 "md_size": 0, 00:24:39.119 "dif_type": 0, 00:24:39.119 "dif_is_head_of_md": false, 00:24:39.119 "dif_pi_format": 0 00:24:39.119 } 00:24:39.119 }, 00:24:39.119 { 00:24:39.119 "method": "bdev_wait_for_examine" 00:24:39.119 } 00:24:39.119 ] 00:24:39.119 }, 00:24:39.119 { 00:24:39.119 "subsystem": "nbd", 00:24:39.119 "config": [] 00:24:39.119 }, 00:24:39.119 { 00:24:39.119 "subsystem": "scheduler", 00:24:39.119 "config": [ 00:24:39.119 { 00:24:39.119 "method": "framework_set_scheduler", 00:24:39.119 "params": { 00:24:39.119 "name": "static" 00:24:39.119 } 00:24:39.119 } 00:24:39.119 ] 00:24:39.119 }, 00:24:39.119 { 00:24:39.119 "subsystem": "nvmf", 00:24:39.119 "config": [ 00:24:39.119 { 00:24:39.119 "method": "nvmf_set_config", 00:24:39.119 "params": { 00:24:39.119 "discovery_filter": "match_any", 00:24:39.119 "admin_cmd_passthru": { 00:24:39.119 "identify_ctrlr": false 00:24:39.119 }, 00:24:39.119 "dhchap_digests": [ 00:24:39.119 "sha256", 00:24:39.119 "sha384", 00:24:39.119 "sha512" 00:24:39.119 ], 00:24:39.119 "dhchap_dhgroups": [ 00:24:39.119 "null", 00:24:39.119 "ffdhe2048", 00:24:39.119 "ffdhe3072", 00:24:39.119 "ffdhe4096", 00:24:39.119 "ffdhe6144", 00:24:39.119 "ffdhe8192" 00:24:39.119 ] 00:24:39.119 } 00:24:39.119 }, 00:24:39.119 { 00:24:39.119 "method": "nvmf_set_max_subsystems", 00:24:39.119 "params": { 00:24:39.119 "max_subsystems": 1024 00:24:39.119 } 00:24:39.119 }, 00:24:39.119 { 00:24:39.119 "method": "nvmf_set_crdt", 00:24:39.119 "params": { 00:24:39.119 "crdt1": 0, 00:24:39.119 "crdt2": 0, 00:24:39.119 "crdt3": 0 00:24:39.119 } 00:24:39.119 }, 00:24:39.119 { 00:24:39.119 "method": "nvmf_create_transport", 00:24:39.119 "params": { 00:24:39.119 "trtype": "TCP", 00:24:39.119 "max_queue_depth": 128, 00:24:39.119 "max_io_qpairs_per_ctrlr": 127, 00:24:39.119 "in_capsule_data_size": 4096, 00:24:39.119 "max_io_size": 131072, 00:24:39.119 "io_unit_size": 131072, 00:24:39.119 "max_aq_depth": 128, 00:24:39.119 "num_shared_buffers": 511, 00:24:39.119 "buf_cache_size": 4294967295, 00:24:39.119 "dif_insert_or_strip": false, 00:24:39.119 "zcopy": false, 00:24:39.119 "c2h_success": false, 00:24:39.119 "sock_priority": 0, 00:24:39.119 "abort_timeout_sec": 1, 00:24:39.119 "ack_timeout": 0, 00:24:39.119 "data_wr_pool_size": 0 00:24:39.119 } 00:24:39.119 }, 00:24:39.119 { 00:24:39.119 "method": "nvmf_create_subsystem", 00:24:39.119 "params": { 00:24:39.119 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:39.119 "allow_any_host": false, 00:24:39.119 "serial_number": "00000000000000000000", 00:24:39.119 "model_number": "SPDK bdev Controller", 00:24:39.119 "max_namespaces": 32, 00:24:39.119 "min_cntlid": 1, 00:24:39.119 "max_cntlid": 65519, 00:24:39.119 "ana_reporting": false 00:24:39.119 } 00:24:39.119 }, 00:24:39.119 { 00:24:39.119 "method": "nvmf_subsystem_add_host", 00:24:39.119 "params": { 00:24:39.119 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:39.119 "host": "nqn.2016-06.io.spdk:host1", 00:24:39.119 "psk": "key0" 00:24:39.119 } 00:24:39.119 }, 00:24:39.119 { 00:24:39.119 "method": "nvmf_subsystem_add_ns", 00:24:39.119 "params": { 00:24:39.119 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:39.119 "namespace": { 00:24:39.119 "nsid": 1, 00:24:39.119 "bdev_name": "malloc0", 00:24:39.119 "nguid": "1362C272213B4B03837E75B2BAD86ECC", 00:24:39.119 "uuid": "1362c272-213b-4b03-837e-75b2bad86ecc", 00:24:39.119 "no_auto_visible": false 00:24:39.119 } 
00:24:39.119 } 00:24:39.119 }, 00:24:39.119 { 00:24:39.119 "method": "nvmf_subsystem_add_listener", 00:24:39.119 "params": { 00:24:39.119 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:39.119 "listen_address": { 00:24:39.119 "trtype": "TCP", 00:24:39.119 "adrfam": "IPv4", 00:24:39.119 "traddr": "10.0.0.2", 00:24:39.119 "trsvcid": "4420" 00:24:39.119 }, 00:24:39.119 "secure_channel": false, 00:24:39.119 "sock_impl": "ssl" 00:24:39.119 } 00:24:39.119 } 00:24:39.119 ] 00:24:39.119 } 00:24:39.119 ] 00:24:39.119 }' 00:24:39.119 11:04:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:39.380 11:04:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:24:39.380 "subsystems": [ 00:24:39.380 { 00:24:39.380 "subsystem": "keyring", 00:24:39.380 "config": [ 00:24:39.380 { 00:24:39.380 "method": "keyring_file_add_key", 00:24:39.380 "params": { 00:24:39.380 "name": "key0", 00:24:39.380 "path": "/tmp/tmp.6zbeuGWMSQ" 00:24:39.380 } 00:24:39.380 } 00:24:39.380 ] 00:24:39.380 }, 00:24:39.380 { 00:24:39.380 "subsystem": "iobuf", 00:24:39.380 "config": [ 00:24:39.380 { 00:24:39.380 "method": "iobuf_set_options", 00:24:39.380 "params": { 00:24:39.380 "small_pool_count": 8192, 00:24:39.380 "large_pool_count": 1024, 00:24:39.380 "small_bufsize": 8192, 00:24:39.380 "large_bufsize": 135168 00:24:39.380 } 00:24:39.380 } 00:24:39.380 ] 00:24:39.381 }, 00:24:39.381 { 00:24:39.381 "subsystem": "sock", 00:24:39.381 "config": [ 00:24:39.381 { 00:24:39.381 "method": "sock_set_default_impl", 00:24:39.381 "params": { 00:24:39.381 "impl_name": "posix" 00:24:39.381 } 00:24:39.381 }, 00:24:39.381 { 00:24:39.381 "method": "sock_impl_set_options", 00:24:39.381 "params": { 00:24:39.381 "impl_name": "ssl", 00:24:39.381 "recv_buf_size": 4096, 00:24:39.381 "send_buf_size": 4096, 00:24:39.381 "enable_recv_pipe": true, 00:24:39.381 "enable_quickack": false, 00:24:39.381 "enable_placement_id": 0, 00:24:39.381 "enable_zerocopy_send_server": true, 00:24:39.381 "enable_zerocopy_send_client": false, 00:24:39.381 "zerocopy_threshold": 0, 00:24:39.381 "tls_version": 0, 00:24:39.381 "enable_ktls": false 00:24:39.381 } 00:24:39.381 }, 00:24:39.381 { 00:24:39.381 "method": "sock_impl_set_options", 00:24:39.381 "params": { 00:24:39.381 "impl_name": "posix", 00:24:39.381 "recv_buf_size": 2097152, 00:24:39.381 "send_buf_size": 2097152, 00:24:39.381 "enable_recv_pipe": true, 00:24:39.381 "enable_quickack": false, 00:24:39.381 "enable_placement_id": 0, 00:24:39.381 "enable_zerocopy_send_server": true, 00:24:39.381 "enable_zerocopy_send_client": false, 00:24:39.381 "zerocopy_threshold": 0, 00:24:39.381 "tls_version": 0, 00:24:39.381 "enable_ktls": false 00:24:39.381 } 00:24:39.381 } 00:24:39.381 ] 00:24:39.381 }, 00:24:39.381 { 00:24:39.381 "subsystem": "vmd", 00:24:39.381 "config": [] 00:24:39.381 }, 00:24:39.381 { 00:24:39.381 "subsystem": "accel", 00:24:39.381 "config": [ 00:24:39.381 { 00:24:39.381 "method": "accel_set_options", 00:24:39.381 "params": { 00:24:39.381 "small_cache_size": 128, 00:24:39.381 "large_cache_size": 16, 00:24:39.381 "task_count": 2048, 00:24:39.381 "sequence_count": 2048, 00:24:39.381 "buf_count": 2048 00:24:39.381 } 00:24:39.381 } 00:24:39.381 ] 00:24:39.381 }, 00:24:39.381 { 00:24:39.381 "subsystem": "bdev", 00:24:39.381 "config": [ 00:24:39.381 { 00:24:39.381 "method": "bdev_set_options", 00:24:39.381 "params": { 00:24:39.381 "bdev_io_pool_size": 65535, 00:24:39.381 
"bdev_io_cache_size": 256, 00:24:39.381 "bdev_auto_examine": true, 00:24:39.381 "iobuf_small_cache_size": 128, 00:24:39.381 "iobuf_large_cache_size": 16 00:24:39.381 } 00:24:39.381 }, 00:24:39.381 { 00:24:39.381 "method": "bdev_raid_set_options", 00:24:39.381 "params": { 00:24:39.381 "process_window_size_kb": 1024, 00:24:39.381 "process_max_bandwidth_mb_sec": 0 00:24:39.381 } 00:24:39.381 }, 00:24:39.381 { 00:24:39.381 "method": "bdev_iscsi_set_options", 00:24:39.381 "params": { 00:24:39.381 "timeout_sec": 30 00:24:39.381 } 00:24:39.381 }, 00:24:39.381 { 00:24:39.381 "method": "bdev_nvme_set_options", 00:24:39.381 "params": { 00:24:39.381 "action_on_timeout": "none", 00:24:39.381 "timeout_us": 0, 00:24:39.381 "timeout_admin_us": 0, 00:24:39.381 "keep_alive_timeout_ms": 10000, 00:24:39.381 "arbitration_burst": 0, 00:24:39.381 "low_priority_weight": 0, 00:24:39.381 "medium_priority_weight": 0, 00:24:39.381 "high_priority_weight": 0, 00:24:39.381 "nvme_adminq_poll_period_us": 10000, 00:24:39.381 "nvme_ioq_poll_period_us": 0, 00:24:39.381 "io_queue_requests": 512, 00:24:39.381 "delay_cmd_submit": true, 00:24:39.381 "transport_retry_count": 4, 00:24:39.381 "bdev_retry_count": 3, 00:24:39.381 "transport_ack_timeout": 0, 00:24:39.381 "ctrlr_loss_timeout_sec": 0, 00:24:39.381 "reconnect_delay_sec": 0, 00:24:39.381 "fast_io_fail_timeout_sec": 0, 00:24:39.381 "disable_auto_failback": false, 00:24:39.381 "generate_uuids": false, 00:24:39.381 "transport_tos": 0, 00:24:39.381 "nvme_error_stat": false, 00:24:39.381 "rdma_srq_size": 0, 00:24:39.381 "io_path_stat": false, 00:24:39.381 "allow_accel_sequence": false, 00:24:39.381 "rdma_max_cq_size": 0, 00:24:39.381 "rdma_cm_event_timeout_ms": 0, 00:24:39.381 "dhchap_digests": [ 00:24:39.381 "sha256", 00:24:39.381 "sha384", 00:24:39.381 "sha512" 00:24:39.381 ], 00:24:39.381 "dhchap_dhgroups": [ 00:24:39.381 "null", 00:24:39.381 "ffdhe2048", 00:24:39.381 "ffdhe3072", 00:24:39.381 "ffdhe4096", 00:24:39.381 "ffdhe6144", 00:24:39.381 "ffdhe8192" 00:24:39.381 ] 00:24:39.381 } 00:24:39.381 }, 00:24:39.381 { 00:24:39.381 "method": "bdev_nvme_attach_controller", 00:24:39.381 "params": { 00:24:39.381 "name": "nvme0", 00:24:39.381 "trtype": "TCP", 00:24:39.381 "adrfam": "IPv4", 00:24:39.381 "traddr": "10.0.0.2", 00:24:39.381 "trsvcid": "4420", 00:24:39.381 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:39.381 "prchk_reftag": false, 00:24:39.381 "prchk_guard": false, 00:24:39.381 "ctrlr_loss_timeout_sec": 0, 00:24:39.381 "reconnect_delay_sec": 0, 00:24:39.381 "fast_io_fail_timeout_sec": 0, 00:24:39.381 "psk": "key0", 00:24:39.381 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:39.381 "hdgst": false, 00:24:39.381 "ddgst": false, 00:24:39.381 "multipath": "multipath" 00:24:39.381 } 00:24:39.381 }, 00:24:39.381 { 00:24:39.381 "method": "bdev_nvme_set_hotplug", 00:24:39.381 "params": { 00:24:39.381 "period_us": 100000, 00:24:39.381 "enable": false 00:24:39.381 } 00:24:39.381 }, 00:24:39.381 { 00:24:39.381 "method": "bdev_enable_histogram", 00:24:39.381 "params": { 00:24:39.381 "name": "nvme0n1", 00:24:39.381 "enable": true 00:24:39.381 } 00:24:39.381 }, 00:24:39.381 { 00:24:39.381 "method": "bdev_wait_for_examine" 00:24:39.381 } 00:24:39.381 ] 00:24:39.381 }, 00:24:39.381 { 00:24:39.381 "subsystem": "nbd", 00:24:39.381 "config": [] 00:24:39.381 } 00:24:39.381 ] 00:24:39.381 }' 00:24:39.381 11:04:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 1909186 00:24:39.381 11:04:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # 
'[' -z 1909186 ']' 00:24:39.381 11:04:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1909186 00:24:39.381 11:04:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:39.381 11:04:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:39.381 11:04:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1909186 00:24:39.381 11:04:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:39.381 11:04:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:39.381 11:04:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1909186' 00:24:39.381 killing process with pid 1909186 00:24:39.381 11:04:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1909186 00:24:39.381 Received shutdown signal, test time was about 1.000000 seconds 00:24:39.381 00:24:39.381 Latency(us) 00:24:39.381 [2024-10-09T09:04:59.383Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:39.381 [2024-10-09T09:04:59.383Z] =================================================================================================================== 00:24:39.381 [2024-10-09T09:04:59.383Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:39.381 11:04:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1909186 00:24:39.381 11:04:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 1909129 00:24:39.381 11:04:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1909129 ']' 00:24:39.382 11:04:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1909129 00:24:39.382 11:04:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:39.642 11:04:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:39.642 11:04:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1909129 00:24:39.642 11:04:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:39.642 11:04:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:39.642 11:04:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1909129' 00:24:39.642 killing process with pid 1909129 00:24:39.642 11:04:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1909129 00:24:39.642 11:04:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1909129 00:24:39.642 11:04:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:24:39.642 11:04:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:39.642 11:04:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:39.642 11:04:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:39.642 11:04:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:24:39.642 "subsystems": [ 00:24:39.642 { 00:24:39.642 "subsystem": "keyring", 00:24:39.642 "config": [ 00:24:39.642 { 
00:24:39.642 "method": "keyring_file_add_key", 00:24:39.642 "params": { 00:24:39.642 "name": "key0", 00:24:39.642 "path": "/tmp/tmp.6zbeuGWMSQ" 00:24:39.642 } 00:24:39.642 } 00:24:39.642 ] 00:24:39.642 }, 00:24:39.642 { 00:24:39.642 "subsystem": "iobuf", 00:24:39.642 "config": [ 00:24:39.642 { 00:24:39.642 "method": "iobuf_set_options", 00:24:39.642 "params": { 00:24:39.642 "small_pool_count": 8192, 00:24:39.642 "large_pool_count": 1024, 00:24:39.642 "small_bufsize": 8192, 00:24:39.642 "large_bufsize": 135168 00:24:39.642 } 00:24:39.642 } 00:24:39.642 ] 00:24:39.642 }, 00:24:39.642 { 00:24:39.642 "subsystem": "sock", 00:24:39.642 "config": [ 00:24:39.642 { 00:24:39.642 "method": "sock_set_default_impl", 00:24:39.642 "params": { 00:24:39.642 "impl_name": "posix" 00:24:39.642 } 00:24:39.642 }, 00:24:39.642 { 00:24:39.642 "method": "sock_impl_set_options", 00:24:39.642 "params": { 00:24:39.642 "impl_name": "ssl", 00:24:39.642 "recv_buf_size": 4096, 00:24:39.642 "send_buf_size": 4096, 00:24:39.642 "enable_recv_pipe": true, 00:24:39.642 "enable_quickack": false, 00:24:39.642 "enable_placement_id": 0, 00:24:39.642 "enable_zerocopy_send_server": true, 00:24:39.642 "enable_zerocopy_send_client": false, 00:24:39.642 "zerocopy_threshold": 0, 00:24:39.642 "tls_version": 0, 00:24:39.642 "enable_ktls": false 00:24:39.642 } 00:24:39.642 }, 00:24:39.642 { 00:24:39.642 "method": "sock_impl_set_options", 00:24:39.642 "params": { 00:24:39.642 "impl_name": "posix", 00:24:39.642 "recv_buf_size": 2097152, 00:24:39.642 "send_buf_size": 2097152, 00:24:39.642 "enable_recv_pipe": true, 00:24:39.642 "enable_quickack": false, 00:24:39.642 "enable_placement_id": 0, 00:24:39.642 "enable_zerocopy_send_server": true, 00:24:39.642 "enable_zerocopy_send_client": false, 00:24:39.642 "zerocopy_threshold": 0, 00:24:39.642 "tls_version": 0, 00:24:39.642 "enable_ktls": false 00:24:39.642 } 00:24:39.642 } 00:24:39.642 ] 00:24:39.642 }, 00:24:39.642 { 00:24:39.642 "subsystem": "vmd", 00:24:39.642 "config": [] 00:24:39.642 }, 00:24:39.642 { 00:24:39.642 "subsystem": "accel", 00:24:39.642 "config": [ 00:24:39.642 { 00:24:39.642 "method": "accel_set_options", 00:24:39.642 "params": { 00:24:39.642 "small_cache_size": 128, 00:24:39.642 "large_cache_size": 16, 00:24:39.642 "task_count": 2048, 00:24:39.642 "sequence_count": 2048, 00:24:39.642 "buf_count": 2048 00:24:39.642 } 00:24:39.642 } 00:24:39.642 ] 00:24:39.642 }, 00:24:39.642 { 00:24:39.642 "subsystem": "bdev", 00:24:39.642 "config": [ 00:24:39.642 { 00:24:39.642 "method": "bdev_set_options", 00:24:39.642 "params": { 00:24:39.642 "bdev_io_pool_size": 65535, 00:24:39.642 "bdev_io_cache_size": 256, 00:24:39.642 "bdev_auto_examine": true, 00:24:39.642 "iobuf_small_cache_size": 128, 00:24:39.642 "iobuf_large_cache_size": 16 00:24:39.642 } 00:24:39.642 }, 00:24:39.642 { 00:24:39.642 "method": "bdev_raid_set_options", 00:24:39.642 "params": { 00:24:39.642 "process_window_size_kb": 1024, 00:24:39.642 "process_max_bandwidth_mb_sec": 0 00:24:39.642 } 00:24:39.642 }, 00:24:39.642 { 00:24:39.642 "method": "bdev_iscsi_set_options", 00:24:39.642 "params": { 00:24:39.642 "timeout_sec": 30 00:24:39.642 } 00:24:39.642 }, 00:24:39.642 { 00:24:39.642 "method": "bdev_nvme_set_options", 00:24:39.642 "params": { 00:24:39.642 "action_on_timeout": "none", 00:24:39.643 "timeout_us": 0, 00:24:39.643 "timeout_admin_us": 0, 00:24:39.643 "keep_alive_timeout_ms": 10000, 00:24:39.643 "arbitration_burst": 0, 00:24:39.643 "low_priority_weight": 0, 00:24:39.643 "medium_priority_weight": 0, 00:24:39.643 
"high_priority_weight": 0, 00:24:39.643 "nvme_adminq_poll_period_us": 10000, 00:24:39.643 "nvme_ioq_poll_period_us": 0, 00:24:39.643 "io_queue_requests": 0, 00:24:39.643 "delay_cmd_submit": true, 00:24:39.643 "transport_retry_count": 4, 00:24:39.643 "bdev_retry_count": 3, 00:24:39.643 "transport_ack_timeout": 0, 00:24:39.643 "ctrlr_loss_timeout_sec": 0, 00:24:39.643 "reconnect_delay_sec": 0, 00:24:39.643 "fast_io_fail_timeout_sec": 0, 00:24:39.643 "disable_auto_failback": false, 00:24:39.643 "generate_uuids": false, 00:24:39.643 "transport_tos": 0, 00:24:39.643 "nvme_error_stat": false, 00:24:39.643 "rdma_srq_size": 0, 00:24:39.643 "io_path_stat": false, 00:24:39.643 "allow_accel_sequence": false, 00:24:39.643 "rdma_max_cq_size": 0, 00:24:39.643 "rdma_cm_event_timeout_ms": 0, 00:24:39.643 "dhchap_digests": [ 00:24:39.643 "sha256", 00:24:39.643 "sha384", 00:24:39.643 "sha512" 00:24:39.643 ], 00:24:39.643 "dhchap_dhgroups": [ 00:24:39.643 "null", 00:24:39.643 "ffdhe2048", 00:24:39.643 "ffdhe3072", 00:24:39.643 "ffdhe4096", 00:24:39.643 "ffdhe6144", 00:24:39.643 "ffdhe8192" 00:24:39.643 ] 00:24:39.643 } 00:24:39.643 }, 00:24:39.643 { 00:24:39.643 "method": "bdev_nvme_set_hotplug", 00:24:39.643 "params": { 00:24:39.643 "period_us": 100000, 00:24:39.643 "enable": false 00:24:39.643 } 00:24:39.643 }, 00:24:39.643 { 00:24:39.643 "method": "bdev_malloc_create", 00:24:39.643 "params": { 00:24:39.643 "name": "malloc0", 00:24:39.643 "num_blocks": 8192, 00:24:39.643 "block_size": 4096, 00:24:39.643 "physical_block_size": 4096, 00:24:39.643 "uuid": "1362c272-213b-4b03-837e-75b2bad86ecc", 00:24:39.643 "optimal_io_boundary": 0, 00:24:39.643 "md_size": 0, 00:24:39.643 "dif_type": 0, 00:24:39.643 "dif_is_head_of_md": false, 00:24:39.643 "dif_pi_format": 0 00:24:39.643 } 00:24:39.643 }, 00:24:39.643 { 00:24:39.643 "method": "bdev_wait_for_examine" 00:24:39.643 } 00:24:39.643 ] 00:24:39.643 }, 00:24:39.643 { 00:24:39.643 "subsystem": "nbd", 00:24:39.643 "config": [] 00:24:39.643 }, 00:24:39.643 { 00:24:39.643 "subsystem": "scheduler", 00:24:39.643 "config": [ 00:24:39.643 { 00:24:39.643 "method": "framework_set_scheduler", 00:24:39.643 "params": { 00:24:39.643 "name": "static" 00:24:39.643 } 00:24:39.643 } 00:24:39.643 ] 00:24:39.643 }, 00:24:39.643 { 00:24:39.643 "subsystem": "nvmf", 00:24:39.643 "config": [ 00:24:39.643 { 00:24:39.643 "method": "nvmf_set_config", 00:24:39.643 "params": { 00:24:39.643 "discovery_filter": "match_any", 00:24:39.643 "admin_cmd_passthru": { 00:24:39.643 "identify_ctrlr": false 00:24:39.643 }, 00:24:39.643 "dhchap_digests": [ 00:24:39.643 "sha256", 00:24:39.643 "sha384", 00:24:39.643 "sha512" 00:24:39.643 ], 00:24:39.643 "dhchap_dhgroups": [ 00:24:39.643 "null", 00:24:39.643 "ffdhe2048", 00:24:39.643 "ffdhe3072", 00:24:39.643 "ffdhe4096", 00:24:39.643 "ffdhe6144", 00:24:39.643 "ffdhe8192" 00:24:39.643 ] 00:24:39.643 } 00:24:39.643 }, 00:24:39.643 { 00:24:39.643 "method": "nvmf_set_max_subsystems", 00:24:39.643 "params": { 00:24:39.643 "max_subsystems": 1024 00:24:39.643 } 00:24:39.643 }, 00:24:39.643 { 00:24:39.643 "method": "nvmf_set_crdt", 00:24:39.643 "params": { 00:24:39.643 "crdt1": 0, 00:24:39.643 "crdt2": 0, 00:24:39.643 "crdt3": 0 00:24:39.643 } 00:24:39.643 }, 00:24:39.643 { 00:24:39.643 "method": "nvmf_create_transport", 00:24:39.643 "params": { 00:24:39.643 "trtype": "TCP", 00:24:39.643 "max_queue_depth": 128, 00:24:39.643 "max_io_qpairs_per_ctrlr": 127, 00:24:39.643 "in_capsule_data_size": 4096, 00:24:39.643 "max_io_size": 131072, 00:24:39.643 "io_unit_size": 131072, 
00:24:39.643 "max_aq_depth": 128, 00:24:39.643 "num_shared_buffers": 511, 00:24:39.643 "buf_cache_size": 4294967295, 00:24:39.643 "dif_insert_or_strip": false, 00:24:39.643 "zcopy": false, 00:24:39.643 "c2h_success": false, 00:24:39.643 "sock_priority": 0, 00:24:39.643 "abort_timeout_sec": 1, 00:24:39.643 "ack_timeout": 0, 00:24:39.643 "data_wr_pool_size": 0 00:24:39.643 } 00:24:39.643 }, 00:24:39.643 { 00:24:39.643 "method": "nvmf_create_subsystem", 00:24:39.643 "params": { 00:24:39.643 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:39.643 "allow_any_host": false, 00:24:39.643 "serial_number": "00000000000000000000", 00:24:39.643 "model_number": "SPDK bdev Controller", 00:24:39.643 "max_namespaces": 32, 00:24:39.643 "min_cntlid": 1, 00:24:39.643 "max_cntlid": 65519, 00:24:39.643 "ana_reporting": false 00:24:39.643 } 00:24:39.643 }, 00:24:39.643 { 00:24:39.643 "method": "nvmf_subsystem_add_host", 00:24:39.643 "params": { 00:24:39.643 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:39.643 "host": "nqn.2016-06.io.spdk:host1", 00:24:39.643 "psk": "key0" 00:24:39.643 } 00:24:39.643 }, 00:24:39.643 { 00:24:39.643 "method": "nvmf_subsystem_add_ns", 00:24:39.643 "params": { 00:24:39.643 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:39.643 "namespace": { 00:24:39.643 "nsid": 1, 00:24:39.643 "bdev_name": "malloc0", 00:24:39.643 "nguid": "1362C272213B4B03837E75B2BAD86ECC", 00:24:39.643 "uuid": "1362c272-213b-4b03-837e-75b2bad86ecc", 00:24:39.643 "no_auto_visible": false 00:24:39.643 } 00:24:39.643 } 00:24:39.643 }, 00:24:39.643 { 00:24:39.643 "method": "nvmf_subsystem_add_listener", 00:24:39.643 "params": { 00:24:39.643 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:39.643 "listen_address": { 00:24:39.643 "trtype": "TCP", 00:24:39.643 "adrfam": "IPv4", 00:24:39.643 "traddr": "10.0.0.2", 00:24:39.643 "trsvcid": "4420" 00:24:39.643 }, 00:24:39.643 "secure_channel": false, 00:24:39.643 "sock_impl": "ssl" 00:24:39.643 } 00:24:39.643 } 00:24:39.643 ] 00:24:39.643 } 00:24:39.643 ] 00:24:39.643 }' 00:24:39.643 11:04:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=1909843 00:24:39.643 11:04:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1909843 00:24:39.643 11:04:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:24:39.643 11:04:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1909843 ']' 00:24:39.643 11:04:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:39.643 11:04:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:39.643 11:04:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:39.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:39.643 11:04:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:39.643 11:04:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:39.643 [2024-10-09 11:04:59.618154] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 
00:24:39.643 [2024-10-09 11:04:59.618211] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:39.903 [2024-10-09 11:04:59.754980] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:24:39.903 [2024-10-09 11:04:59.786765] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:39.903 [2024-10-09 11:04:59.807821] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:39.903 [2024-10-09 11:04:59.807864] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:39.903 [2024-10-09 11:04:59.807872] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:39.903 [2024-10-09 11:04:59.807879] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:39.903 [2024-10-09 11:04:59.807885] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:39.903 [2024-10-09 11:04:59.808628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:40.163 [2024-10-09 11:05:00.000524] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:40.163 [2024-10-09 11:05:00.032457] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:40.163 [2024-10-09 11:05:00.032688] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:40.736 11:05:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:40.736 11:05:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:40.736 11:05:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:40.736 11:05:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:40.736 11:05:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:40.736 11:05:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:40.736 11:05:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=1910143 00:24:40.736 11:05:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 1910143 /var/tmp/bdevperf.sock 00:24:40.736 11:05:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1910143 ']' 00:24:40.736 11:05:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:40.736 11:05:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:40.736 11:05:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:40.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
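The 'Waiting for process to start up...' message above is the harness's waitforlisten helper polling the new app's UNIX-domain RPC socket until it answers. A rough bash equivalent (wait_for_rpc is a hypothetical name; the real helper lives in autotest_common.sh and also checks that the pid is still alive):

# Poll an SPDK app's RPC socket until rpc.py gets an answer, or time out.
wait_for_rpc() {
  local sock=$1
  for _ in $(seq 1 100); do
    ./scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1 && return 0
    sleep 0.1
  done
  return 1  # app never started listening
}
wait_for_rpc /var/tmp/bdevperf.sock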
00:24:40.736 11:05:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:24:40.736 11:05:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:40.736 11:05:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:40.736 11:05:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:24:40.736 "subsystems": [ 00:24:40.736 { 00:24:40.736 "subsystem": "keyring", 00:24:40.736 "config": [ 00:24:40.736 { 00:24:40.736 "method": "keyring_file_add_key", 00:24:40.736 "params": { 00:24:40.736 "name": "key0", 00:24:40.736 "path": "/tmp/tmp.6zbeuGWMSQ" 00:24:40.736 } 00:24:40.736 } 00:24:40.736 ] 00:24:40.736 }, 00:24:40.736 { 00:24:40.736 "subsystem": "iobuf", 00:24:40.736 "config": [ 00:24:40.736 { 00:24:40.736 "method": "iobuf_set_options", 00:24:40.736 "params": { 00:24:40.736 "small_pool_count": 8192, 00:24:40.736 "large_pool_count": 1024, 00:24:40.736 "small_bufsize": 8192, 00:24:40.736 "large_bufsize": 135168 00:24:40.736 } 00:24:40.736 } 00:24:40.736 ] 00:24:40.736 }, 00:24:40.736 { 00:24:40.736 "subsystem": "sock", 00:24:40.736 "config": [ 00:24:40.736 { 00:24:40.736 "method": "sock_set_default_impl", 00:24:40.736 "params": { 00:24:40.736 "impl_name": "posix" 00:24:40.736 } 00:24:40.736 }, 00:24:40.736 { 00:24:40.736 "method": "sock_impl_set_options", 00:24:40.736 "params": { 00:24:40.736 "impl_name": "ssl", 00:24:40.736 "recv_buf_size": 4096, 00:24:40.736 "send_buf_size": 4096, 00:24:40.736 "enable_recv_pipe": true, 00:24:40.736 "enable_quickack": false, 00:24:40.736 "enable_placement_id": 0, 00:24:40.736 "enable_zerocopy_send_server": true, 00:24:40.736 "enable_zerocopy_send_client": false, 00:24:40.736 "zerocopy_threshold": 0, 00:24:40.736 "tls_version": 0, 00:24:40.736 "enable_ktls": false 00:24:40.736 } 00:24:40.736 }, 00:24:40.736 { 00:24:40.736 "method": "sock_impl_set_options", 00:24:40.736 "params": { 00:24:40.736 "impl_name": "posix", 00:24:40.736 "recv_buf_size": 2097152, 00:24:40.736 "send_buf_size": 2097152, 00:24:40.736 "enable_recv_pipe": true, 00:24:40.736 "enable_quickack": false, 00:24:40.736 "enable_placement_id": 0, 00:24:40.736 "enable_zerocopy_send_server": true, 00:24:40.736 "enable_zerocopy_send_client": false, 00:24:40.736 "zerocopy_threshold": 0, 00:24:40.736 "tls_version": 0, 00:24:40.736 "enable_ktls": false 00:24:40.736 } 00:24:40.736 } 00:24:40.736 ] 00:24:40.736 }, 00:24:40.736 { 00:24:40.736 "subsystem": "vmd", 00:24:40.736 "config": [] 00:24:40.736 }, 00:24:40.736 { 00:24:40.736 "subsystem": "accel", 00:24:40.736 "config": [ 00:24:40.736 { 00:24:40.736 "method": "accel_set_options", 00:24:40.736 "params": { 00:24:40.736 "small_cache_size": 128, 00:24:40.736 "large_cache_size": 16, 00:24:40.736 "task_count": 2048, 00:24:40.736 "sequence_count": 2048, 00:24:40.736 "buf_count": 2048 00:24:40.736 } 00:24:40.736 } 00:24:40.736 ] 00:24:40.736 }, 00:24:40.736 { 00:24:40.736 "subsystem": "bdev", 00:24:40.736 "config": [ 00:24:40.736 { 00:24:40.736 "method": "bdev_set_options", 00:24:40.736 "params": { 00:24:40.736 "bdev_io_pool_size": 65535, 00:24:40.736 "bdev_io_cache_size": 256, 00:24:40.736 "bdev_auto_examine": true, 00:24:40.736 "iobuf_small_cache_size": 128, 00:24:40.736 "iobuf_large_cache_size": 16 00:24:40.736 } 00:24:40.736 }, 00:24:40.736 { 00:24:40.736 "method": "bdev_raid_set_options", 00:24:40.736 
"params": { 00:24:40.736 "process_window_size_kb": 1024, 00:24:40.736 "process_max_bandwidth_mb_sec": 0 00:24:40.736 } 00:24:40.736 }, 00:24:40.736 { 00:24:40.736 "method": "bdev_iscsi_set_options", 00:24:40.736 "params": { 00:24:40.736 "timeout_sec": 30 00:24:40.736 } 00:24:40.736 }, 00:24:40.736 { 00:24:40.736 "method": "bdev_nvme_set_options", 00:24:40.736 "params": { 00:24:40.736 "action_on_timeout": "none", 00:24:40.736 "timeout_us": 0, 00:24:40.736 "timeout_admin_us": 0, 00:24:40.736 "keep_alive_timeout_ms": 10000, 00:24:40.736 "arbitration_burst": 0, 00:24:40.736 "low_priority_weight": 0, 00:24:40.736 "medium_priority_weight": 0, 00:24:40.736 "high_priority_weight": 0, 00:24:40.736 "nvme_adminq_poll_period_us": 10000, 00:24:40.736 "nvme_ioq_poll_period_us": 0, 00:24:40.736 "io_queue_requests": 512, 00:24:40.736 "delay_cmd_submit": true, 00:24:40.736 "transport_retry_count": 4, 00:24:40.736 "bdev_retry_count": 3, 00:24:40.736 "transport_ack_timeout": 0, 00:24:40.736 "ctrlr_loss_timeout_sec": 0, 00:24:40.736 "reconnect_delay_sec": 0, 00:24:40.736 "fast_io_fail_timeout_sec": 0, 00:24:40.736 "disable_auto_failback": false, 00:24:40.736 "generate_uuids": false, 00:24:40.736 "transport_tos": 0, 00:24:40.736 "nvme_error_stat": false, 00:24:40.736 "rdma_srq_size": 0, 00:24:40.736 "io_path_stat": false, 00:24:40.736 "allow_accel_sequence": false, 00:24:40.736 "rdma_max_cq_size": 0, 00:24:40.736 "rdma_cm_event_timeout_ms": 0, 00:24:40.736 "dhchap_digests": [ 00:24:40.736 "sha256", 00:24:40.736 "sha384", 00:24:40.736 "sha512" 00:24:40.736 ], 00:24:40.736 "dhchap_dhgroups": [ 00:24:40.736 "null", 00:24:40.736 "ffdhe2048", 00:24:40.736 "ffdhe3072", 00:24:40.736 "ffdhe4096", 00:24:40.736 "ffdhe6144", 00:24:40.736 "ffdhe8192" 00:24:40.736 ] 00:24:40.736 } 00:24:40.736 }, 00:24:40.736 { 00:24:40.736 "method": "bdev_nvme_attach_controller", 00:24:40.736 "params": { 00:24:40.736 "name": "nvme0", 00:24:40.736 "trtype": "TCP", 00:24:40.736 "adrfam": "IPv4", 00:24:40.736 "traddr": "10.0.0.2", 00:24:40.736 "trsvcid": "4420", 00:24:40.736 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:40.736 "prchk_reftag": false, 00:24:40.736 "prchk_guard": false, 00:24:40.736 "ctrlr_loss_timeout_sec": 0, 00:24:40.736 "reconnect_delay_sec": 0, 00:24:40.736 "fast_io_fail_timeout_sec": 0, 00:24:40.736 "psk": "key0", 00:24:40.736 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:40.736 "hdgst": false, 00:24:40.736 "ddgst": false, 00:24:40.736 "multipath": "multipath" 00:24:40.736 } 00:24:40.736 }, 00:24:40.736 { 00:24:40.736 "method": "bdev_nvme_set_hotplug", 00:24:40.736 "params": { 00:24:40.736 "period_us": 100000, 00:24:40.736 "enable": false 00:24:40.736 } 00:24:40.736 }, 00:24:40.736 { 00:24:40.736 "method": "bdev_enable_histogram", 00:24:40.736 "params": { 00:24:40.736 "name": "nvme0n1", 00:24:40.736 "enable": true 00:24:40.736 } 00:24:40.736 }, 00:24:40.736 { 00:24:40.736 "method": "bdev_wait_for_examine" 00:24:40.736 } 00:24:40.736 ] 00:24:40.736 }, 00:24:40.736 { 00:24:40.736 "subsystem": "nbd", 00:24:40.736 "config": [] 00:24:40.736 } 00:24:40.736 ] 00:24:40.736 }' 00:24:40.737 [2024-10-09 11:05:00.530444] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 
00:24:40.737 [2024-10-09 11:05:00.530502] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1910143 ] 00:24:40.737 [2024-10-09 11:05:00.660872] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:24:40.737 [2024-10-09 11:05:00.708424] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:40.737 [2024-10-09 11:05:00.724729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:40.997 [2024-10-09 11:05:00.853922] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:41.566 11:05:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:41.566 11:05:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:41.566 11:05:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:41.566 11:05:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:24:41.566 11:05:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:41.566 11:05:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:41.566 Running I/O for 1 seconds... 00:24:42.951 3784.00 IOPS, 14.78 MiB/s 00:24:42.951 Latency(us) 00:24:42.951 [2024-10-09T09:05:02.953Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:42.951 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:42.951 Verification LBA range: start 0x0 length 0x2000 00:24:42.951 nvme0n1 : 1.02 3847.89 15.03 0.00 0.00 32991.44 6240.48 91527.06 00:24:42.951 [2024-10-09T09:05:02.953Z] =================================================================================================================== 00:24:42.951 [2024-10-09T09:05:02.953Z] Total : 3847.89 15.03 0.00 0.00 32991.44 6240.48 91527.06 00:24:42.951 { 00:24:42.951 "results": [ 00:24:42.951 { 00:24:42.951 "job": "nvme0n1", 00:24:42.951 "core_mask": "0x2", 00:24:42.951 "workload": "verify", 00:24:42.951 "status": "finished", 00:24:42.951 "verify_range": { 00:24:42.951 "start": 0, 00:24:42.951 "length": 8192 00:24:42.951 }, 00:24:42.951 "queue_depth": 128, 00:24:42.951 "io_size": 4096, 00:24:42.951 "runtime": 1.016921, 00:24:42.951 "iops": 3847.88985575084, 00:24:42.951 "mibps": 15.030819749026719, 00:24:42.951 "io_failed": 0, 00:24:42.951 "io_timeout": 0, 00:24:42.951 "avg_latency_us": 32991.43670523837, 00:24:42.951 "min_latency_us": 6240.481122619445, 00:24:42.951 "max_latency_us": 91527.0564650852 00:24:42.951 } 00:24:42.951 ], 00:24:42.951 "core_count": 1 00:24:42.951 } 00:24:42.951 11:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:24:42.951 11:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:24:42.951 11:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:24:42.951 11:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:24:42.951 11:05:02 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:24:42.951 11:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:24:42.951 11:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:42.951 11:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:24:42.951 11:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:24:42.951 11:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:24:42.951 11:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:42.951 nvmf_trace.0 00:24:42.951 11:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:24:42.951 11:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 1910143 00:24:42.951 11:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1910143 ']' 00:24:42.951 11:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1910143 00:24:42.951 11:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:42.951 11:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:42.951 11:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1910143 00:24:42.951 11:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:42.951 11:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:42.951 11:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1910143' 00:24:42.951 killing process with pid 1910143 00:24:42.951 11:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1910143 00:24:42.951 Received shutdown signal, test time was about 1.000000 seconds 00:24:42.951 00:24:42.951 Latency(us) 00:24:42.951 [2024-10-09T09:05:02.953Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:42.951 [2024-10-09T09:05:02.953Z] =================================================================================================================== 00:24:42.951 [2024-10-09T09:05:02.953Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:42.951 11:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1910143 00:24:42.951 11:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:24:42.951 11:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@514 -- # nvmfcleanup 00:24:42.951 11:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:24:42.951 11:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:42.951 11:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:24:42.951 11:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:42.951 11:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:42.951 rmmod nvme_tcp 00:24:42.951 rmmod 
nvme_fabrics 00:24:42.951 rmmod nvme_keyring 00:24:42.951 11:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:42.951 11:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:24:42.951 11:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:24:42.951 11:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@515 -- # '[' -n 1909843 ']' 00:24:42.951 11:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # killprocess 1909843 00:24:42.951 11:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1909843 ']' 00:24:42.952 11:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1909843 00:24:42.952 11:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:42.952 11:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:42.952 11:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1909843 00:24:43.212 11:05:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:43.212 11:05:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:43.212 11:05:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1909843' 00:24:43.212 killing process with pid 1909843 00:24:43.212 11:05:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1909843 00:24:43.212 11:05:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1909843 00:24:43.212 11:05:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:24:43.212 11:05:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:24:43.212 11:05:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:24:43.212 11:05:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:24:43.212 11:05:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # iptables-save 00:24:43.212 11:05:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:24:43.212 11:05:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # iptables-restore 00:24:43.212 11:05:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:43.212 11:05:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:43.212 11:05:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:43.212 11:05:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:43.212 11:05:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:45.757 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:45.757 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.g8EWQ5vxcm /tmp/tmp.Z6EraXrKbV /tmp/tmp.6zbeuGWMSQ 00:24:45.757 00:24:45.757 real 1m28.191s 00:24:45.757 user 2m17.927s 00:24:45.757 sys 0m26.779s 00:24:45.757 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 
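The teardown traced above is the standard nvmftestfini sequence for a TCP run: kill bdevperf, kill the target, unload the kernel NVMe modules, strip the iptables rules the fixture added, and drop the target's network namespace. Condensed into a sketch (pid variables illustrative; the real steps hide behind killprocess and _remove_spdk_ns):

# Stop initiator and target, then unwind host state the fixture created.
kill -9 "$bdevperf_pid" && wait "$bdevperf_pid" 2>/dev/null
kill "$nvmfpid" && wait "$nvmfpid" 2>/dev/null
modprobe -v -r nvme-tcp nvme-fabrics nvme-keyring      # also drops nvme_keyring, as echoed above
iptables-save | grep -v SPDK_NVMF | iptables-restore   # remove rules tagged SPDK_NVMF
ip netns delete cvl_0_0_ns_spdk                        # what _remove_spdk_ns amounts to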
00:24:45.757 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:45.757 ************************************ 00:24:45.757 END TEST nvmf_tls 00:24:45.757 ************************************ 00:24:45.757 11:05:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:45.757 11:05:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:45.757 11:05:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:45.757 11:05:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:45.757 ************************************ 00:24:45.757 START TEST nvmf_fips 00:24:45.757 ************************************ 00:24:45.757 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:45.757 * Looking for test storage... 00:24:45.757 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:24:45.757 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:45.757 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lcov --version 00:24:45.757 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:45.757 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:45.757 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:45.757 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:45.757 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:45.757 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:24:45.757 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:24:45.757 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:24:45.757 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:24:45.757 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:24:45.757 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:24:45.757 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:24:45.757 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:45.757 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:24:45.757 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:24:45.757 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:45.757 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:45.757 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:24:45.757 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:24:45.757 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:45.757 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:24:45.757 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:24:45.757 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:24:45.757 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:24:45.757 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:45.757 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:24:45.757 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:24:45.757 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:45.758 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:45.758 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:24:45.758 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:45.758 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:45.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:45.758 --rc genhtml_branch_coverage=1 00:24:45.758 --rc genhtml_function_coverage=1 00:24:45.758 --rc genhtml_legend=1 00:24:45.758 --rc geninfo_all_blocks=1 00:24:45.758 --rc geninfo_unexecuted_blocks=1 00:24:45.758 00:24:45.758 ' 00:24:45.758 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:45.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:45.758 --rc genhtml_branch_coverage=1 00:24:45.758 --rc genhtml_function_coverage=1 00:24:45.758 --rc genhtml_legend=1 00:24:45.758 --rc geninfo_all_blocks=1 00:24:45.758 --rc geninfo_unexecuted_blocks=1 00:24:45.758 00:24:45.758 ' 00:24:45.758 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:45.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:45.758 --rc genhtml_branch_coverage=1 00:24:45.758 --rc genhtml_function_coverage=1 00:24:45.758 --rc genhtml_legend=1 00:24:45.758 --rc geninfo_all_blocks=1 00:24:45.758 --rc geninfo_unexecuted_blocks=1 00:24:45.758 00:24:45.758 ' 00:24:45.758 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:45.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:45.758 --rc genhtml_branch_coverage=1 00:24:45.758 --rc genhtml_function_coverage=1 00:24:45.758 --rc genhtml_legend=1 00:24:45.758 --rc geninfo_all_blocks=1 00:24:45.758 --rc geninfo_unexecuted_blocks=1 00:24:45.758 00:24:45.758 ' 00:24:45.758 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:45.758 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:24:45.758 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:24:45.758 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:45.758 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:45.758 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:45.758 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:45.758 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:45.758 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:45.758 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:45.758 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:45.758 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:45.758 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:45.758 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:45.758 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:45.758 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:45.758 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:45.758 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:45.758 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:45.758 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:24:45.758 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:45.758 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:45.758 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:45.758 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.758 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.758 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.758 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:24:45.758 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.758 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:24:45.758 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:45.758 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:45.758 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:45.758 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:45.758 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:45.758 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:45.758 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:45.758 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:45.758 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:45.758 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:45.758 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:45.758 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:24:45.758 11:05:05 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:24:45.758 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:24:45.758 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:24:45.758 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:24:45.758 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:24:45.758 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:45.758 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:45.758 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:24:45.758 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:24:45.758 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:24:45.758 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:24:45.758 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:24:45.758 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:24:45.758 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:24:45.758 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:45.758 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:24:45.758 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:24:45.758 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:45.758 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:45.758 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:24:45.758 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:24:45.758 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:45.758 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:24:45.758 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:24:45.758 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:24:45.758 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:24:45.758 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:45.758 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:24:45.758 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:24:45.758 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:45.758 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:45.758 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:24:45.758 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:45.758 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:24:45.758 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:24:45.758 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:45.758 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:24:45.758 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:24:45.759 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:24:45.759 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:24:45.759 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:45.759 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:24:45.759 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:24:45.759 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:45.759 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:24:45.759 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:24:45.759 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:24:45.759 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:24:45.759 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:24:45.759 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:24:45.759 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:24:45.759 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:24:45.759 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:24:45.759 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:24:45.759 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:24:45.759 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:24:45.759 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:24:45.759 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:24:45.759 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:24:45.759 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:24:45.759 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:24:45.759 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:24:45.759 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:24:45.759 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:24:45.759 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:24:45.759 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:24:45.759 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:24:45.759 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:24:45.759 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:24:45.759 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:45.759 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:24:45.759 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:45.759 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:24:45.759 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:45.759 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:24:45.759 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:24:45.759 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:24:45.759 Error setting digest 00:24:45.759 4012F167617F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:24:45.759 4012F167617F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:24:45.759 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:24:45.759 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:45.759 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:45.759 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:45.759 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:24:45.759 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:24:45.759 
11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:45.759 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # prepare_net_devs 00:24:45.759 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@436 -- # local -g is_hw=no 00:24:45.759 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # remove_spdk_ns 00:24:45.759 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:45.759 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:45.759 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:45.759 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:24:45.759 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:24:45.759 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:24:45.759 11:05:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:53.905 11:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:53.905 11:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:24:53.905 11:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:53.905 11:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:53.905 11:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:53.905 11:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:53.905 11:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:53.905 11:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:24:53.905 11:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:53.905 11:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:24:53.905 11:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:24:53.905 11:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:24:53.905 11:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:24:53.905 11:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:24:53.905 11:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:24:53.905 11:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:53.905 11:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:53.905 11:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:53.905 11:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:53.905 11:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:53.905 11:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:53.905 11:05:12 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:53.905 11:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:53.905 11:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:53.905 11:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:53.905 11:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:53.905 11:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:53.905 11:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:53.905 11:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:53.905 11:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:53.905 11:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:53.905 11:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:53.905 11:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:53.905 11:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:53.905 11:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:53.905 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:53.905 11:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:53.905 11:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:53.905 11:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:53.905 11:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:53.905 11:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:53.905 11:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:53.905 11:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:53.905 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:53.905 11:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:53.905 11:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:53.905 11:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:53.905 11:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:53.905 11:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:53.905 11:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:53.905 11:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:53.905 11:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:53.905 11:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:53.905 11:05:12 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:53.905 11:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:53.905 11:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:53.905 11:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:53.905 11:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:53.905 11:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:53.906 11:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:53.906 Found net devices under 0000:31:00.0: cvl_0_0 00:24:53.906 11:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:53.906 11:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:53.906 11:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:53.906 11:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:53.906 11:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:53.906 11:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:53.906 11:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:53.906 11:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:53.906 11:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:53.906 Found net devices under 0000:31:00.1: cvl_0_1 00:24:53.906 11:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:53.906 11:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:24:53.906 11:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # is_hw=yes 00:24:53.906 11:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:24:53.906 11:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:24:53.906 11:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:24:53.906 11:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:53.906 11:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:53.906 11:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:53.906 11:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:53.906 11:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:53.906 11:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:53.906 11:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:53.906 11:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:53.906 11:05:12 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:53.906 11:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:53.906 11:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:53.906 11:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:53.906 11:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:53.906 11:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:53.906 11:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:53.906 11:05:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:53.906 11:05:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:53.906 11:05:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:53.906 11:05:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:53.906 11:05:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:53.906 11:05:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:53.906 11:05:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:53.906 11:05:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:53.906 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:53.906 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.550 ms 00:24:53.906 00:24:53.906 --- 10.0.0.2 ping statistics --- 00:24:53.906 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:53.906 rtt min/avg/max/mdev = 0.550/0.550/0.550/0.000 ms 00:24:53.906 11:05:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:53.906 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:53.906 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.266 ms 00:24:53.906 00:24:53.906 --- 10.0.0.1 ping statistics --- 00:24:53.906 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:53.906 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:24:53.906 11:05:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:53.906 11:05:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # return 0 00:24:53.906 11:05:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:24:53.906 11:05:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:53.906 11:05:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:24:53.906 11:05:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:24:53.906 11:05:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:53.906 11:05:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:24:53.906 11:05:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:24:53.906 11:05:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:24:53.906 11:05:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:53.906 11:05:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:53.906 11:05:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:53.906 11:05:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # nvmfpid=1914955 00:24:53.906 11:05:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # waitforlisten 1914955 00:24:53.906 11:05:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:53.906 11:05:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 1914955 ']' 00:24:53.906 11:05:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:53.906 11:05:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:53.906 11:05:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:53.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:53.906 11:05:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:53.906 11:05:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:53.906 [2024-10-09 11:05:13.307236] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:24:53.906 [2024-10-09 11:05:13.307290] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:53.906 [2024-10-09 11:05:13.444248] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:24:53.906 [2024-10-09 11:05:13.492857] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:53.906 [2024-10-09 11:05:13.510971] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:53.906 [2024-10-09 11:05:13.511005] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:53.906 [2024-10-09 11:05:13.511013] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:53.906 [2024-10-09 11:05:13.511019] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:53.906 [2024-10-09 11:05:13.511025] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:53.906 [2024-10-09 11:05:13.511604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:54.167 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:54.167 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:24:54.167 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:54.168 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:54.168 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:54.168 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:54.168 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:24:54.168 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:54.168 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:24:54.168 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.AIG 00:24:54.168 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:54.168 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.AIG 00:24:54.168 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.AIG 00:24:54.168 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.AIG 00:24:54.168 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:54.429 [2024-10-09 11:05:14.318145] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:54.429 [2024-10-09 11:05:14.334114] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:54.429 [2024-10-09 11:05:14.334440] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:54.429 malloc0 00:24:54.429 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:54.429 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=1915090 00:24:54.430 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 1915090 /var/tmp/bdevperf.sock 00:24:54.430 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:54.430 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 1915090 ']' 00:24:54.430 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:54.430 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:54.430 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:54.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:54.430 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:54.430 11:05:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:54.691 [2024-10-09 11:05:14.475180] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:24:54.691 [2024-10-09 11:05:14.475256] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1915090 ] 00:24:54.691 [2024-10-09 11:05:14.610032] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:24:54.691 [2024-10-09 11:05:14.635328] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:54.691 [2024-10-09 11:05:14.656773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:55.262 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:55.262 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:24:55.262 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.AIG 00:24:55.523 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:55.783 [2024-10-09 11:05:15.584748] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:55.783 TLSTESTn1 00:24:55.783 11:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:55.783 Running I/O for 10 seconds... 
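Note: while the 10-second verify run proceeds, the TLS wiring it exercises (set up in the steps traced above) condenses to a short sequence. A minimal sketch using the PSK, socket path, and NQNs from this run; SPDK_DIR stands in for the workspace checkout and is the only name not taken verbatim from the trace:

  key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
  key_path=$(mktemp -t spdk-psk.XXX)        # /tmp/spdk-psk.AIG in this run
  echo -n "$key" > "$key_path"
  chmod 0600 "$key_path"                    # same restrictive mode the harness sets
  # bdevperf gets its own RPC socket; the key and the TLS controller are fed to it over RPC
  "$SPDK_DIR"/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  "$SPDK_DIR"/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$key_path"
  "$SPDK_DIR"/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
  "$SPDK_DIR"/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests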
00:24:58.108 6028.00 IOPS, 23.55 MiB/s [2024-10-09T09:05:19.053Z] 5999.50 IOPS, 23.44 MiB/s [2024-10-09T09:05:19.995Z] 5816.67 IOPS, 22.72 MiB/s [2024-10-09T09:05:20.935Z] 5632.50 IOPS, 22.00 MiB/s [2024-10-09T09:05:21.876Z] 5620.80 IOPS, 21.96 MiB/s [2024-10-09T09:05:22.818Z] 5721.50 IOPS, 22.35 MiB/s [2024-10-09T09:05:24.200Z] 5618.86 IOPS, 21.95 MiB/s [2024-10-09T09:05:24.770Z] 5593.00 IOPS, 21.85 MiB/s [2024-10-09T09:05:26.160Z] 5658.11 IOPS, 22.10 MiB/s [2024-10-09T09:05:26.160Z] 5716.60 IOPS, 22.33 MiB/s 00:25:06.158 Latency(us) 00:25:06.158 [2024-10-09T09:05:26.160Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:06.158 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:06.158 Verification LBA range: start 0x0 length 0x2000 00:25:06.158 TLSTESTn1 : 10.04 5704.03 22.28 0.00 0.00 22381.12 4735.10 43354.92 00:25:06.158 [2024-10-09T09:05:26.160Z] =================================================================================================================== 00:25:06.158 [2024-10-09T09:05:26.160Z] Total : 5704.03 22.28 0.00 0.00 22381.12 4735.10 43354.92 00:25:06.158 { 00:25:06.158 "results": [ 00:25:06.158 { 00:25:06.158 "job": "TLSTESTn1", 00:25:06.158 "core_mask": "0x4", 00:25:06.158 "workload": "verify", 00:25:06.158 "status": "finished", 00:25:06.158 "verify_range": { 00:25:06.158 "start": 0, 00:25:06.158 "length": 8192 00:25:06.158 }, 00:25:06.158 "queue_depth": 128, 00:25:06.158 "io_size": 4096, 00:25:06.158 "runtime": 10.044481, 00:25:06.158 "iops": 5704.027913438235, 00:25:06.158 "mibps": 22.281359036868107, 00:25:06.158 "io_failed": 0, 00:25:06.158 "io_timeout": 0, 00:25:06.158 "avg_latency_us": 22381.116378985134, 00:25:06.158 "min_latency_us": 4735.101904443702, 00:25:06.158 "max_latency_us": 43354.92148346141 00:25:06.158 } 00:25:06.158 ], 00:25:06.158 "core_count": 1 00:25:06.158 } 00:25:06.158 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:25:06.158 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:25:06.158 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:25:06.158 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:25:06.158 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:25:06.158 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:25:06.158 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:25:06.158 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:25:06.158 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:25:06.158 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:25:06.158 nvmf_trace.0 00:25:06.159 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:25:06.159 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1915090 00:25:06.159 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 1915090 ']' 00:25:06.159 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@954 -- # kill -0 1915090 00:25:06.159 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:25:06.159 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:06.159 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1915090 00:25:06.159 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:25:06.159 11:05:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:25:06.159 11:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1915090' 00:25:06.159 killing process with pid 1915090 00:25:06.159 11:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 1915090 00:25:06.159 Received shutdown signal, test time was about 10.000000 seconds 00:25:06.159 00:25:06.159 Latency(us) 00:25:06.159 [2024-10-09T09:05:26.161Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:06.159 [2024-10-09T09:05:26.161Z] =================================================================================================================== 00:25:06.159 [2024-10-09T09:05:26.161Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:06.159 11:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 1915090 00:25:06.159 11:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:25:06.159 11:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@514 -- # nvmfcleanup 00:25:06.159 11:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:25:06.159 11:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:06.159 11:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:25:06.159 11:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:06.159 11:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:06.159 rmmod nvme_tcp 00:25:06.159 rmmod nvme_fabrics 00:25:06.159 rmmod nvme_keyring 00:25:06.159 11:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:06.159 11:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:25:06.159 11:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:25:06.159 11:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@515 -- # '[' -n 1914955 ']' 00:25:06.159 11:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # killprocess 1914955 00:25:06.159 11:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 1914955 ']' 00:25:06.159 11:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 1914955 00:25:06.159 11:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:25:06.418 11:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:06.418 11:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1914955 00:25:06.418 11:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:06.418 11:05:26 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:06.418 11:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1914955' 00:25:06.418 killing process with pid 1914955 00:25:06.418 11:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 1914955 00:25:06.418 11:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 1914955 00:25:06.418 11:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:25:06.418 11:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:25:06.418 11:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:25:06.418 11:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:25:06.418 11:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # iptables-save 00:25:06.419 11:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:25:06.419 11:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # iptables-restore 00:25:06.419 11:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:06.419 11:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:06.419 11:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:06.419 11:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:06.419 11:05:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:08.960 11:05:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:08.960 11:05:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.AIG 00:25:08.960 00:25:08.960 real 0m23.127s 00:25:08.960 user 0m24.680s 00:25:08.960 sys 0m9.458s 00:25:08.960 11:05:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:08.960 11:05:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:08.960 ************************************ 00:25:08.960 END TEST nvmf_fips 00:25:08.960 ************************************ 00:25:08.960 11:05:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:25:08.960 11:05:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:08.960 11:05:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:08.960 11:05:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:08.960 ************************************ 00:25:08.960 START TEST nvmf_control_msg_list 00:25:08.960 ************************************ 00:25:08.960 11:05:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:25:08.960 * Looking for test storage... 
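Note: the teardown that closed nvmf_fips above is the harness's standard cleanup, condensed below. The ip netns delete spelling is an assumption (the trace only shows the _remove_spdk_ns wrapper); everything else mirrors the commands traced above:

  sync
  modprobe -v -r nvme-tcp        # pulls out nvme_tcp, nvme_fabrics and nvme_keyring, per the rmmod lines
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid"                # killprocess 1914955 in this run
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # strip only the rules the test tagged
  ip netns delete cvl_0_0_ns_spdk   # assumed equivalent of the _remove_spdk_ns wrapper
  ip -4 addr flush cvl_0_1
  rm -f /tmp/spdk-psk.AIG        # the PSK file must not outlive the run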
00:25:08.960 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:08.960 11:05:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:08.960 11:05:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lcov --version 00:25:08.961 11:05:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:08.961 11:05:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:08.961 11:05:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:08.961 11:05:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:08.961 11:05:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:08.961 11:05:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:25:08.961 11:05:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:25:08.961 11:05:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:25:08.961 11:05:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:25:08.961 11:05:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:25:08.961 11:05:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:25:08.961 11:05:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:25:08.961 11:05:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:08.961 11:05:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:25:08.961 11:05:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:25:08.961 11:05:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:08.961 11:05:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:08.961 11:05:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:25:08.961 11:05:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:25:08.961 11:05:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:08.961 11:05:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:25:08.961 11:05:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:25:08.961 11:05:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:25:08.961 11:05:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:25:08.961 11:05:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:08.961 11:05:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:25:08.961 11:05:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:25:08.961 11:05:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:08.961 11:05:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:08.961 11:05:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:25:08.961 11:05:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:08.961 11:05:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:08.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:08.961 --rc genhtml_branch_coverage=1 00:25:08.961 --rc genhtml_function_coverage=1 00:25:08.961 --rc genhtml_legend=1 00:25:08.961 --rc geninfo_all_blocks=1 00:25:08.961 --rc geninfo_unexecuted_blocks=1 00:25:08.961 00:25:08.961 ' 00:25:08.961 11:05:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:08.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:08.961 --rc genhtml_branch_coverage=1 00:25:08.961 --rc genhtml_function_coverage=1 00:25:08.961 --rc genhtml_legend=1 00:25:08.961 --rc geninfo_all_blocks=1 00:25:08.961 --rc geninfo_unexecuted_blocks=1 00:25:08.961 00:25:08.961 ' 00:25:08.961 11:05:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:08.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:08.961 --rc genhtml_branch_coverage=1 00:25:08.961 --rc genhtml_function_coverage=1 00:25:08.961 --rc genhtml_legend=1 00:25:08.961 --rc geninfo_all_blocks=1 00:25:08.961 --rc geninfo_unexecuted_blocks=1 00:25:08.961 00:25:08.961 ' 00:25:08.961 11:05:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:08.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:08.961 --rc genhtml_branch_coverage=1 00:25:08.961 --rc genhtml_function_coverage=1 00:25:08.961 --rc genhtml_legend=1 00:25:08.961 --rc geninfo_all_blocks=1 00:25:08.961 --rc geninfo_unexecuted_blocks=1 00:25:08.961 00:25:08.961 ' 00:25:08.961 11:05:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:08.961 11:05:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:25:08.961 11:05:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:08.961 11:05:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:08.961 11:05:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:08.961 11:05:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:08.961 11:05:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:08.961 11:05:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:08.961 11:05:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:08.961 11:05:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:08.961 11:05:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:08.961 11:05:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:08.961 11:05:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:08.961 11:05:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:08.961 11:05:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:08.961 11:05:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:08.961 11:05:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:08.961 11:05:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:08.961 11:05:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:08.961 11:05:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:25:08.961 11:05:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:08.961 11:05:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:08.961 11:05:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:08.961 11:05:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.961 11:05:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.961 11:05:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.961 11:05:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:25:08.961 11:05:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.961 11:05:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:25:08.961 11:05:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:08.961 11:05:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:08.961 11:05:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:08.961 11:05:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:08.961 11:05:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:08.961 11:05:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:08.961 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:08.961 11:05:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:08.961 11:05:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:08.961 11:05:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:08.961 11:05:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:25:08.961 11:05:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:25:08.961 11:05:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:08.961 11:05:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # prepare_net_devs 00:25:08.961 11:05:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@436 -- # local -g is_hw=no 00:25:08.961 11:05:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # remove_spdk_ns 00:25:08.962 11:05:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:08.962 11:05:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:08.962 11:05:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:08.962 11:05:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:25:08.962 11:05:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:25:08.962 11:05:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:25:08.962 11:05:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:17.098 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:17.098 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:25:17.098 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:17.098 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:17.098 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:17.098 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:17.098 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:17.098 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:25:17.098 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:17.098 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:25:17.098 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:25:17.098 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:25:17.098 11:05:35 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:25:17.098 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:25:17.098 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:25:17.098 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:17.098 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:17.098 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:17.098 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:17.098 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:17.098 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:17.098 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:17.098 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:17.098 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:17.098 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:17.098 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:17.098 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:17.098 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:17.098 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:17.098 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:17.098 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:17.098 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:17.098 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:17.098 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:17.098 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:17.098 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:17.098 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:17.098 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:17.098 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:17.098 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:17.098 11:05:35 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:17.098 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:17.098 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:17.098 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:17.098 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:17.098 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:17.098 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:17.098 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:17.098 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:17.098 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:17.098 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:17.098 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:17.098 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:17.098 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:17.098 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:17.098 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:17.098 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:17.098 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:17.098 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:17.098 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:17.098 Found net devices under 0000:31:00.0: cvl_0_0 00:25:17.098 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:17.098 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:17.098 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:17.098 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:17.098 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:17.098 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:17.098 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:17.098 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:17.098 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:17.098 Found net devices under 0000:31:00.1: cvl_0_1 00:25:17.098 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:17.098 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:25:17.098 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # is_hw=yes 00:25:17.098 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:25:17.098 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:25:17.098 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:25:17.098 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:17.098 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:17.098 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:17.098 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:17.098 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:17.098 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:17.098 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:17.098 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:17.098 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:17.098 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:17.099 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:17.099 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:17.099 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:17.099 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:17.099 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:17.099 11:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:17.099 11:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:17.099 11:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:17.099 11:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:17.099 11:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:17.099 11:05:36 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:17.099 11:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:17.099 11:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:17.099 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:17.099 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.698 ms 00:25:17.099 00:25:17.099 --- 10.0.0.2 ping statistics --- 00:25:17.099 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:17.099 rtt min/avg/max/mdev = 0.698/0.698/0.698/0.000 ms 00:25:17.099 11:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:17.099 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:17.099 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.325 ms 00:25:17.099 00:25:17.099 --- 10.0.0.1 ping statistics --- 00:25:17.099 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:17.099 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:25:17.099 11:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:17.099 11:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@448 -- # return 0 00:25:17.099 11:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:25:17.099 11:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:17.099 11:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:25:17.099 11:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:25:17.099 11:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:17.099 11:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:25:17.099 11:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:25:17.099 11:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:25:17.099 11:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:25:17.099 11:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:17.099 11:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:17.099 11:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # nvmfpid=1921725 00:25:17.099 11:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # waitforlisten 1921725 00:25:17.099 11:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:17.099 11:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@831 -- # '[' -z 1921725 ']' 00:25:17.099 11:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:17.099 11:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:17.099 11:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:17.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:17.099 11:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:17.099 11:05:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:17.099 [2024-10-09 11:05:36.298413] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:25:17.099 [2024-10-09 11:05:36.298489] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:17.099 [2024-10-09 11:05:36.440038] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:25:17.099 [2024-10-09 11:05:36.472767] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:17.099 [2024-10-09 11:05:36.494248] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:17.099 [2024-10-09 11:05:36.494290] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:17.099 [2024-10-09 11:05:36.494298] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:17.099 [2024-10-09 11:05:36.494304] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:17.099 [2024-10-09 11:05:36.494310] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
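Note: nvmfappstart above boils down to launching the target inside the freshly wired namespace and blocking until its RPC socket answers. A minimal sketch; the poll loop is an approximation of the harness's waitforlisten, and SPDK_DIR is a placeholder for the checkout path:

  ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR"/build/bin/nvmf_tgt -i 0 -e 0xFFFF &
  nvmfpid=$!
  until "$SPDK_DIR"/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" || exit 1    # give up if the target died during startup
      sleep 0.5
  done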
00:25:17.099 [2024-10-09 11:05:36.495002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:17.099 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:17.099 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # return 0 00:25:17.099 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:25:17.099 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:17.099 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:17.359 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:17.359 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:25:17.359 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:25:17.359 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:25:17.359 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.359 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:17.359 [2024-10-09 11:05:37.142453] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:17.359 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.359 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:25:17.359 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.359 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:17.359 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.359 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:25:17.359 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.359 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:17.359 Malloc0 00:25:17.359 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.359 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:25:17.359 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.359 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:17.359 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.359 11:05:37 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:17.359 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.359 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:17.359 [2024-10-09 11:05:37.193213] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:17.359 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.359 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=1921794 00:25:17.359 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:17.359 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=1921796 00:25:17.359 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:17.359 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=1921797 00:25:17.359 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 1921794 00:25:17.359 11:05:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:17.619 [2024-10-09 11:05:37.363412] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:17.619 [2024-10-09 11:05:37.373582] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:17.619 [2024-10-09 11:05:37.373924] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:18.558 Initializing NVMe Controllers 00:25:18.558 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:18.558 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:25:18.559 Initialization complete. Launching workers. 
00:25:18.559 ======================================================== 00:25:18.559 Latency(us) 00:25:18.559 Device Information : IOPS MiB/s Average min max 00:25:18.559 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 25.00 0.10 40997.63 40832.43 41188.45 00:25:18.559 ======================================================== 00:25:18.559 Total : 25.00 0.10 40997.63 40832.43 41188.45 00:25:18.559 00:25:18.559 Initializing NVMe Controllers 00:25:18.559 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:18.559 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:25:18.559 Initialization complete. Launching workers. 00:25:18.559 ======================================================== 00:25:18.559 Latency(us) 00:25:18.559 Device Information : IOPS MiB/s Average min max 00:25:18.559 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 2525.00 9.86 395.97 132.99 666.40 00:25:18.559 ======================================================== 00:25:18.559 Total : 2525.00 9.86 395.97 132.99 666.40 00:25:18.559 00:25:18.559 11:05:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 1921796 00:25:18.559 11:05:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 1921797 00:25:18.559 Initializing NVMe Controllers 00:25:18.559 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:18.559 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:25:18.559 Initialization complete. Launching workers. 00:25:18.559 ======================================================== 00:25:18.559 Latency(us) 00:25:18.559 Device Information : IOPS MiB/s Average min max 00:25:18.559 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 1556.00 6.08 642.55 310.00 820.67 00:25:18.559 ======================================================== 00:25:18.559 Total : 1556.00 6.08 642.55 310.00 820.67 00:25:18.559 00:25:18.559 11:05:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:25:18.559 11:05:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:25:18.559 11:05:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@514 -- # nvmfcleanup 00:25:18.559 11:05:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:25:18.559 11:05:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:18.559 11:05:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:25:18.559 11:05:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:18.559 11:05:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:18.559 rmmod nvme_tcp 00:25:18.559 rmmod nvme_fabrics 00:25:18.559 rmmod nvme_keyring 00:25:18.559 11:05:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:18.559 11:05:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:25:18.559 11:05:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:25:18.559 11:05:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@515 -- # '[' 
-n 1921725 ']' 00:25:18.559 11:05:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # killprocess 1921725 00:25:18.559 11:05:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@950 -- # '[' -z 1921725 ']' 00:25:18.559 11:05:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # kill -0 1921725 00:25:18.559 11:05:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # uname 00:25:18.559 11:05:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:18.820 11:05:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1921725 00:25:18.820 11:05:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:18.820 11:05:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:18.820 11:05:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1921725' 00:25:18.820 killing process with pid 1921725 00:25:18.820 11:05:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@969 -- # kill 1921725 00:25:18.820 11:05:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@974 -- # wait 1921725 00:25:18.820 11:05:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:25:18.820 11:05:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:25:18.820 11:05:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:25:18.820 11:05:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:25:18.820 11:05:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # iptables-save 00:25:18.820 11:05:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:25:18.820 11:05:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # iptables-restore 00:25:18.820 11:05:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:18.820 11:05:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:18.820 11:05:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:18.820 11:05:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:18.820 11:05:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:21.363 11:05:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:21.363 00:25:21.363 real 0m12.292s 00:25:21.363 user 0m7.652s 00:25:21.363 sys 0m6.450s 00:25:21.363 11:05:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:21.363 11:05:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:21.363 ************************************ 00:25:21.364 END TEST nvmf_control_msg_list 00:25:21.364 ************************************ 
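The control_msg_list run above boils down to a short RPC sequence. A minimal sketch, reconstructed only from the rpc_cmd calls captured in the log, assuming a running nvmf_tgt reachable over /var/tmp/spdk.sock and assuming the harness's rpc_cmd wrapper maps to SPDK's scripts/rpc.py:

  # Sketch of the control_msg_list target setup (reconstructed from the log).
  # Assumes nvmf_tgt is already running and listening on /var/tmp/spdk.sock.
  rpc.py nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
  rpc.py nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a
  rpc.py bdev_malloc_create -b Malloc0 32 512
  rpc.py nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

  # Three concurrent single-queue readers on separate cores (masks 0x2, 0x4, 0x8),
  # contending for the single control message slot configured above.
  for mask in 0x2 0x4 0x8; do
      spdk_nvme_perf -c "$mask" -q 1 -o 4096 -w randread -t 1 \
          -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
  done
  wait

With --control-msg-num 1 the three perf workers share one control message slot, which is consistent with the widely spread per-core average latencies reported in the tables above.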
00:25:21.364 11:05:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:25:21.364 11:05:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:21.364 11:05:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:21.364 11:05:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:21.364 ************************************ 00:25:21.364 START TEST nvmf_wait_for_buf 00:25:21.364 ************************************ 00:25:21.364 11:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:25:21.364 * Looking for test storage... 00:25:21.364 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:21.364 11:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:21.364 11:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lcov --version 00:25:21.364 11:05:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:21.364 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:21.364 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:21.364 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:21.364 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:21.364 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:25:21.364 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:25:21.364 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:25:21.364 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:25:21.364 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:25:21.364 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:25:21.364 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:25:21.364 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:21.364 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:25:21.364 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:25:21.364 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:21.364 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:21.364 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:25:21.364 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:25:21.364 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:21.364 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:25:21.364 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:25:21.364 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:25:21.364 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:25:21.364 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:21.364 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:25:21.364 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:25:21.364 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:21.364 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:21.364 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:25:21.364 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:21.364 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:21.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:21.364 --rc genhtml_branch_coverage=1 00:25:21.364 --rc genhtml_function_coverage=1 00:25:21.364 --rc genhtml_legend=1 00:25:21.364 --rc geninfo_all_blocks=1 00:25:21.364 --rc geninfo_unexecuted_blocks=1 00:25:21.364 00:25:21.364 ' 00:25:21.364 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:21.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:21.364 --rc genhtml_branch_coverage=1 00:25:21.364 --rc genhtml_function_coverage=1 00:25:21.364 --rc genhtml_legend=1 00:25:21.364 --rc geninfo_all_blocks=1 00:25:21.364 --rc geninfo_unexecuted_blocks=1 00:25:21.364 00:25:21.364 ' 00:25:21.364 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:21.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:21.364 --rc genhtml_branch_coverage=1 00:25:21.364 --rc genhtml_function_coverage=1 00:25:21.364 --rc genhtml_legend=1 00:25:21.364 --rc geninfo_all_blocks=1 00:25:21.364 --rc geninfo_unexecuted_blocks=1 00:25:21.364 00:25:21.364 ' 00:25:21.364 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:21.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:21.364 --rc genhtml_branch_coverage=1 00:25:21.364 --rc genhtml_function_coverage=1 00:25:21.364 --rc genhtml_legend=1 00:25:21.364 --rc geninfo_all_blocks=1 00:25:21.364 --rc geninfo_unexecuted_blocks=1 00:25:21.364 00:25:21.364 ' 00:25:21.364 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:21.364 11:05:41 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:25:21.364 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:21.364 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:21.364 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:21.364 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:21.364 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:21.364 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:21.364 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:21.364 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:21.364 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:21.364 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:21.364 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:21.364 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:21.364 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:21.364 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:21.364 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:21.364 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:21.364 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:21.364 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:25:21.364 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:21.364 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:21.364 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:21.364 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.364 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.364 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.364 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:25:21.364 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.364 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:25:21.364 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:21.364 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:21.364 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:21.364 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:21.364 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:21.364 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:21.365 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:21.365 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:21.365 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:21.365 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:21.365 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:25:21.365 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@467 -- # 
'[' -z tcp ']' 00:25:21.365 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:21.365 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # prepare_net_devs 00:25:21.365 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:25:21.365 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:25:21.365 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:21.365 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:21.365 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:21.365 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:25:21.365 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:25:21.365 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:25:21.365 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:29.500 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:29.500 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:25:29.500 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:29.500 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:29.500 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:29.500 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:29.500 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:29.500 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:25:29.500 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:29.500 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:25:29.500 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:25:29.500 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:25:29.500 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:25:29.500 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:25:29.500 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:25:29.500 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:29.500 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:29.500 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:29.500 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:29.500 
11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:29.500 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:29.500 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:29.500 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:29.500 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:29.500 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:29.500 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:29.500 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:29.500 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:29.500 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:29.500 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:29.500 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:29.500 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:29.500 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:29.500 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:29.500 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:29.500 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:29.500 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:29.500 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:29.500 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:29.500 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:29.500 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:29.500 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:29.500 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:29.500 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:29.500 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:29.500 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:29.500 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:29.500 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:29.500 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:25:29.500 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:29.500 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:29.500 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:29.500 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:29.500 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:29.500 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:29.500 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:29.500 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:29.500 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:29.500 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:29.500 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:29.500 Found net devices under 0000:31:00.0: cvl_0_0 00:25:29.500 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:29.500 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:29.500 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:29.500 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:29.500 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:29.500 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:29.500 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:29.500 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:29.500 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:29.500 Found net devices under 0000:31:00.1: cvl_0_1 00:25:29.500 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:29.500 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:25:29.500 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # is_hw=yes 00:25:29.500 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:25:29.500 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:25:29.500 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:25:29.500 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:29.500 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:29.500 11:05:48 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:29.500 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:29.500 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:29.500 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:29.500 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:29.500 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:29.500 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:29.500 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:29.500 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:29.500 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:29.500 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:29.501 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:29.501 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:29.501 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:29.501 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:29.501 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:29.501 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:29.501 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:29.501 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:29.501 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:29.501 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:29.501 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:29.501 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.626 ms 00:25:29.501 00:25:29.501 --- 10.0.0.2 ping statistics --- 00:25:29.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:29.501 rtt min/avg/max/mdev = 0.626/0.626/0.626/0.000 ms 00:25:29.501 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:29.501 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:29.501 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.287 ms 00:25:29.501 00:25:29.501 --- 10.0.0.1 ping statistics --- 00:25:29.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:29.501 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:25:29.501 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:29.501 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@448 -- # return 0 00:25:29.501 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:25:29.501 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:29.501 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:25:29.501 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:25:29.501 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:29.501 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:25:29.501 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:25:29.501 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:25:29.501 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:25:29.501 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:29.501 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:29.501 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # nvmfpid=1926480 00:25:29.501 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # waitforlisten 1926480 00:25:29.501 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:29.501 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@831 -- # '[' -z 1926480 ']' 00:25:29.501 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:29.501 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:29.501 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:29.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:29.501 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:29.501 11:05:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:29.501 [2024-10-09 11:05:48.715070] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 
00:25:29.501 [2024-10-09 11:05:48.715118] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:29.501 [2024-10-09 11:05:48.855019] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:25:29.501 [2024-10-09 11:05:48.886303] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:29.501 [2024-10-09 11:05:48.902688] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:29.501 [2024-10-09 11:05:48.902720] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:29.501 [2024-10-09 11:05:48.902727] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:29.501 [2024-10-09 11:05:48.902734] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:29.501 [2024-10-09 11:05:48.902740] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:29.501 [2024-10-09 11:05:48.903320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:29.761 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:29.761 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # return 0 00:25:29.761 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:25:29.761 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:29.761 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:29.761 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:29.761 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:25:29.761 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:25:29.761 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:25:29.761 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.761 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:29.761 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.761 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:25:29.761 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.761 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:29.761 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.761 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:25:29.761 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.761 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:29.761 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.761 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:25:29.761 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.761 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:29.761 Malloc0 00:25:29.761 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.761 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:25:29.761 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.761 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:29.761 [2024-10-09 11:05:49.713827] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:29.761 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.761 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:25:29.761 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.761 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:29.761 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.761 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:25:29.761 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.761 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:29.761 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.761 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:29.761 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.761 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:29.761 [2024-10-09 11:05:49.749956] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:29.761 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.761 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:30.021 
[2024-10-09 11:05:49.932550] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:31.448 Initializing NVMe Controllers 00:25:31.448 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:31.448 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:25:31.448 Initialization complete. Launching workers. 00:25:31.448 ======================================================== 00:25:31.448 Latency(us) 00:25:31.448 Device Information : IOPS MiB/s Average min max 00:25:31.448 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 129.00 16.12 32339.98 8020.20 64001.59 00:25:31.448 ======================================================== 00:25:31.448 Total : 129.00 16.12 32339.98 8020.20 64001.59 00:25:31.448 00:25:31.448 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:25:31.448 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:25:31.448 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.448 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:31.448 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.448 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2038 00:25:31.448 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2038 -eq 0 ]] 00:25:31.448 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:25:31.448 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:25:31.448 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@514 -- # nvmfcleanup 00:25:31.448 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:25:31.448 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:31.448 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:25:31.448 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:31.448 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:31.448 rmmod nvme_tcp 00:25:31.448 rmmod nvme_fabrics 00:25:31.448 rmmod nvme_keyring 00:25:31.448 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:31.448 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:25:31.448 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:25:31.448 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@515 -- # '[' -n 1926480 ']' 00:25:31.448 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # killprocess 1926480 00:25:31.448 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@950 -- # '[' -z 1926480 ']' 
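The pass/fail signal for wait_for_buf is the small-pool retry counter read just above (retry_count=2038): the iobuf small pool is deliberately undersized before framework initialization so that 128 KiB reads at queue depth 4 can exhaust it, and the harness appears to treat a zero retry count as a failure, i.e. it expects the wait-for-buffer path to have been exercised. A minimal sketch of that sequence, reconstructed from the logged rpc_cmd calls under the same scripts/rpc.py assumption as above:

  # Sketch of the wait_for_buf flow (reconstructed from the log above).
  # Assumes nvmf_tgt was started with --wait-for-rpc, so pool sizing can be
  # applied before framework initialization.
  rpc.py accel_set_options --small-cache-size 0 --large-cache-size 0
  rpc.py iobuf_set_options --small-pool-count 154 --small_bufsize=8192
  rpc.py framework_start_init
  rpc.py bdev_malloc_create -b Malloc0 32 512
  rpc.py nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24
  rpc.py nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
  # A nonzero retry count means buffer allocations had to wait for the pool:
  rpc.py iobuf_get_stats | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry'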
00:25:31.448 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # kill -0 1926480 00:25:31.448 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # uname 00:25:31.448 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:31.448 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1926480 00:25:31.728 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:31.728 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:31.728 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1926480' 00:25:31.728 killing process with pid 1926480 00:25:31.728 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@969 -- # kill 1926480 00:25:31.728 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@974 -- # wait 1926480 00:25:31.728 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:25:31.728 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:25:31.728 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:25:31.728 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:25:31.728 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # iptables-save 00:25:31.728 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:25:31.728 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # iptables-restore 00:25:31.728 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:31.728 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:31.728 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:31.728 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:31.728 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:33.773 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:33.773 00:25:33.773 real 0m12.760s 00:25:33.773 user 0m5.123s 00:25:33.773 sys 0m6.141s 00:25:33.773 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:33.773 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:33.773 ************************************ 00:25:33.773 END TEST nvmf_wait_for_buf 00:25:33.773 ************************************ 00:25:33.773 11:05:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:25:33.773 11:05:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:25:33.773 11:05:53 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:33.773 11:05:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:33.773 11:05:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:33.773 ************************************ 00:25:33.773 START TEST nvmf_fuzz 00:25:33.773 ************************************ 00:25:33.773 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:25:34.034 * Looking for test storage... 00:25:34.034 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:34.034 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:34.034 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:34.034 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1691 -- # lcov --version 00:25:34.034 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:34.034 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:34.034 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:34.034 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:34.034 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:25:34.034 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:25:34.034 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:25:34.034 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:25:34.034 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:25:34.034 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:25:34.034 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:25:34.034 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:34.034 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:25:34.034 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:25:34.034 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:34.034 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:34.034 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:25:34.034 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:25:34.034 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:34.034 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:25:34.034 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:25:34.034 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:25:34.034 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:25:34.034 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:34.034 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:25:34.034 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:25:34.034 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:34.034 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:34.034 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:25:34.034 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:34.034 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:34.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:34.034 --rc genhtml_branch_coverage=1 00:25:34.034 --rc genhtml_function_coverage=1 00:25:34.034 --rc genhtml_legend=1 00:25:34.034 --rc geninfo_all_blocks=1 00:25:34.034 --rc geninfo_unexecuted_blocks=1 00:25:34.035 00:25:34.035 ' 00:25:34.035 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:34.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:34.035 --rc genhtml_branch_coverage=1 00:25:34.035 --rc genhtml_function_coverage=1 00:25:34.035 --rc genhtml_legend=1 00:25:34.035 --rc geninfo_all_blocks=1 00:25:34.035 --rc geninfo_unexecuted_blocks=1 00:25:34.035 00:25:34.035 ' 00:25:34.035 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:34.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:34.035 --rc genhtml_branch_coverage=1 00:25:34.035 --rc genhtml_function_coverage=1 00:25:34.035 --rc genhtml_legend=1 00:25:34.035 --rc geninfo_all_blocks=1 00:25:34.035 --rc geninfo_unexecuted_blocks=1 00:25:34.035 00:25:34.035 ' 00:25:34.035 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:34.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:34.035 --rc genhtml_branch_coverage=1 00:25:34.035 --rc genhtml_function_coverage=1 00:25:34.035 --rc genhtml_legend=1 00:25:34.035 --rc geninfo_all_blocks=1 00:25:34.035 --rc geninfo_unexecuted_blocks=1 00:25:34.035 00:25:34.035 ' 00:25:34.035 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:34.035 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:25:34.035 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:25:34.035 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:34.035 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:34.035 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:34.035 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:34.035 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:34.035 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:34.035 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:34.035 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:34.035 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:34.035 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:34.035 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:34.035 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:34.035 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:34.035 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:34.035 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:34.035 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:34.035 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:25:34.035 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:34.035 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:34.035 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:34.035 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:34.035 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:34.035 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:34.035 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:25:34.035 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:34.035 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:25:34.035 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:34.035 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:34.035 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:34.035 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:34.035 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:34.035 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:34.035 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:34.035 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:34.035 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:34.035 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:34.035 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:25:34.035 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:25:34.035 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@472 -- # trap nvmftestfini 
SIGINT SIGTERM EXIT 00:25:34.035 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # prepare_net_devs 00:25:34.035 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@436 -- # local -g is_hw=no 00:25:34.035 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # remove_spdk_ns 00:25:34.035 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:34.035 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:34.035 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:34.035 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:25:34.035 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:25:34.035 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@309 -- # xtrace_disable 00:25:34.035 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:42.175 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:42.175 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # pci_devs=() 00:25:42.175 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:42.175 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:42.175 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:42.175 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:42.175 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:42.175 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # net_devs=() 00:25:42.175 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:42.175 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # e810=() 00:25:42.175 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # local -ga e810 00:25:42.175 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # x722=() 00:25:42.175 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # local -ga x722 00:25:42.175 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # mlx=() 00:25:42.175 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # local -ga mlx 00:25:42.175 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:42.175 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:42.175 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:42.175 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:42.175 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:42.175 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:42.175 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:42.175 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:42.175 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:42.175 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:42.175 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:42.175 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:42.175 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:42.175 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:42.175 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:42.175 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:42.175 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:42.175 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:42.175 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:42.175 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:42.175 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:42.175 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:42.175 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:42.175 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:42.175 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:42.175 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:42.175 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:42.175 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:42.175 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:42.175 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:42.175 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:42.175 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:42.175 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:42.175 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:42.175 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:42.175 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:42.175 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:42.175 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:42.175 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:42.175 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:42.175 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:42.175 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:42.175 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:42.175 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:42.175 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:42.175 Found net devices under 0000:31:00.0: cvl_0_0 00:25:42.175 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:42.175 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:42.175 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:42.175 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:42.175 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:42.175 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:42.175 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:42.175 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:42.175 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:42.175 Found net devices under 0000:31:00.1: cvl_0_1 00:25:42.175 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:42.175 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:25:42.175 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # is_hw=yes 00:25:42.175 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:25:42.175 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:25:42.175 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:25:42.175 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:42.175 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:42.175 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:42.175 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:42.175 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:42.175 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:42.175 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:42.175 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:42.175 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:25:42.175 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:42.175 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:42.176 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:42.176 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:42.176 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:42.176 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:42.176 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:42.176 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:42.176 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:42.176 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:42.176 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:42.176 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:42.176 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:42.176 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:42.176 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:42.176 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.674 ms 00:25:42.176 00:25:42.176 --- 10.0.0.2 ping statistics --- 00:25:42.176 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:42.176 rtt min/avg/max/mdev = 0.674/0.674/0.674/0.000 ms 00:25:42.176 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:42.176 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:42.176 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.346 ms 00:25:42.176 00:25:42.176 --- 10.0.0.1 ping statistics --- 00:25:42.176 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:42.176 rtt min/avg/max/mdev = 0.346/0.346/0.346/0.000 ms 00:25:42.176 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:42.176 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@448 -- # return 0 00:25:42.176 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:25:42.176 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:42.176 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:25:42.176 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:25:42.176 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:42.176 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:25:42.176 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:25:42.176 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=1931325 00:25:42.176 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:25:42.176 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:25:42.176 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 1931325 00:25:42.176 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@831 -- # '[' -z 1931325 ']' 00:25:42.176 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:42.176 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:42.176 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:42.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
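The nvmftestinit sequence traced above reduces to a short recipe: map each supported PCI function to its kernel net device via sysfs, move one port of the NIC pair into a private network namespace to act as the NVMe/TCP target, leave the peer port in the root namespace as the initiator, open TCP port 4420, and ping both directions. A minimal sketch of that recipe, assuming the cvl_0_0/cvl_0_1 names and 10.0.0.0/24 addressing from this run (the real nvmf/common.sh additionally handles RDMA and virtual devices, and tags the iptables rule with a longer comment string):

    # PCI function -> net device, as gather_supported_nvmf_pci_devs does it.
    pci=0000:31:00.0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    netdev=${pci_net_devs[0]##*/}          # e.g. cvl_0_0

    # Target port gets its own namespace; initiator port stays in the root ns.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Admit NVMe/TCP traffic; the SPDK_NVMF comment lets teardown strip the rule
    # later with: iptables-save | grep -v SPDK_NVMF | iptables-restore
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment SPDK_NVMF

    # Verify reachability in both directions before starting the target app.
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1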
00:25:42.176 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:42.176 11:06:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:42.746 11:06:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:42.746 11:06:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # return 0 00:25:42.746 11:06:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:42.746 11:06:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.746 11:06:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:42.746 11:06:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.746 11:06:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:25:42.746 11:06:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.746 11:06:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:42.746 Malloc0 00:25:42.746 11:06:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.746 11:06:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:42.746 11:06:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.746 11:06:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:42.746 11:06:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.746 11:06:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:42.746 11:06:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.746 11:06:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:42.746 11:06:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.746 11:06:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:42.746 11:06:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.746 11:06:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:42.746 11:06:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.746 11:06:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:25:42.746 11:06:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:26:14.881 Fuzzing completed. 
Shutting down the fuzz application 00:26:14.881 00:26:14.881 Dumping successful admin opcodes: 00:26:14.881 8, 9, 10, 24, 00:26:14.881 Dumping successful io opcodes: 00:26:14.881 0, 9, 00:26:14.881 NS: 0x2000008eff00 I/O qp, Total commands completed: 901975, total successful commands: 5253, random_seed: 478577152 00:26:14.881 NS: 0x2000008eff00 admin qp, Total commands completed: 113786, total successful commands: 929, random_seed: 817479744 00:26:14.881 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:26:14.881 Fuzzing completed. Shutting down the fuzz application 00:26:14.881 00:26:14.881 Dumping successful admin opcodes: 00:26:14.881 24, 00:26:14.881 Dumping successful io opcodes: 00:26:14.881 00:26:14.881 NS: 0x2000008eff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 2404650369 00:26:14.881 NS: 0x2000008eff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 2404722569 00:26:14.881 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:14.881 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.881 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:14.881 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.881 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:26:14.881 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:26:14.881 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@514 -- # nvmfcleanup 00:26:14.881 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync 00:26:14.881 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:14.881 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e 00:26:14.881 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:14.881 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:14.881 rmmod nvme_tcp 00:26:14.881 rmmod nvme_fabrics 00:26:14.881 rmmod nvme_keyring 00:26:14.881 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:14.881 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e 00:26:14.881 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0 00:26:14.881 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@515 -- # '[' -n 1931325 ']' 00:26:14.881 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # killprocess 1931325 00:26:14.881 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@950 -- # '[' -z 1931325 ']' 00:26:14.881 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # kill -0 1931325 00:26:14.881 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@955 -- # uname 00:26:14.881 11:06:34 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:14.881 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1931325 00:26:14.881 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:14.881 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:14.881 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1931325' 00:26:14.881 killing process with pid 1931325 00:26:14.881 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@969 -- # kill 1931325 00:26:14.881 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@974 -- # wait 1931325 00:26:14.881 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:26:14.881 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:26:14.881 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:26:14.881 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # iptr 00:26:14.881 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@789 -- # iptables-save 00:26:14.881 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:26:14.881 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@789 -- # iptables-restore 00:26:14.881 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:14.881 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:14.881 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:14.881 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:14.881 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:16.791 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:16.791 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:26:16.791 00:26:16.791 real 0m42.837s 00:26:16.791 user 0m56.029s 00:26:16.791 sys 0m15.705s 00:26:16.791 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:16.791 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:16.791 ************************************ 00:26:16.791 END TEST nvmf_fuzz 00:26:16.791 ************************************ 00:26:16.791 11:06:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:26:16.791 11:06:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:16.791 11:06:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:16.791 11:06:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:16.791 
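Before the next stage begins, the fuzz stage that just ended is worth unpacking: it drives the example nvme_fuzz app twice against the nqn.2016-06.io.spdk:cnode1 subsystem, first generating random commands for a fixed time with a fixed seed, then replaying a canned JSON command list, and finally tears the target down with the usual killprocess idiom. Stripped of the workspace paths, the traced commands have roughly this shape ($SPDK_DIR stands in for the checkout path; individual flag meanings are inferred from the trace, not from documentation):

    FUZZ=$SPDK_DIR/test/app/fuzz/nvme_fuzz/nvme_fuzz
    TRID='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'

    # Pass 1: 30 s of randomly generated admin/IO commands, seeded for reproducibility.
    $FUZZ -m 0x2 -t 30 -S 123456 -F "$TRID" -N -a

    # Pass 2: replay the canned command list shipped with the example app.
    $FUZZ -m 0x2 -F "$TRID" -j $SPDK_DIR/test/app/fuzz/nvme_fuzz/example.json -a

    # Teardown idiom seen above: confirm the pid is alive and really is the SPDK
    # reactor (not sudo) before killing it, then reap the process.
    kill -0 "$nvmfpid"
    [ "$(ps --no-headers -o comm= "$nvmfpid")" != sudo ] && kill "$nvmfpid"
    wait "$nvmfpid"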
************************************ 00:26:16.791 START TEST nvmf_multiconnection 00:26:16.791 ************************************ 00:26:16.791 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:26:16.791 * Looking for test storage... 00:26:16.791 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:16.791 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:16.791 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1691 -- # lcov --version 00:26:16.791 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:17.052 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:17.052 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:17.052 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:17.052 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:17.052 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:26:17.052 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:26:17.052 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:26:17.052 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:26:17.052 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:26:17.052 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:26:17.052 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:26:17.052 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:17.052 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:26:17.052 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:26:17.052 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:17.052 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:17.052 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:26:17.052 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:26:17.052 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:17.052 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:26:17.052 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:26:17.052 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:26:17.052 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:26:17.052 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:17.052 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:26:17.052 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:26:17.053 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:17.053 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:17.053 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:26:17.053 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:17.053 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:17.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:17.053 --rc genhtml_branch_coverage=1 00:26:17.053 --rc genhtml_function_coverage=1 00:26:17.053 --rc genhtml_legend=1 00:26:17.053 --rc geninfo_all_blocks=1 00:26:17.053 --rc geninfo_unexecuted_blocks=1 00:26:17.053 00:26:17.053 ' 00:26:17.053 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:17.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:17.053 --rc genhtml_branch_coverage=1 00:26:17.053 --rc genhtml_function_coverage=1 00:26:17.053 --rc genhtml_legend=1 00:26:17.053 --rc geninfo_all_blocks=1 00:26:17.053 --rc geninfo_unexecuted_blocks=1 00:26:17.053 00:26:17.053 ' 00:26:17.053 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:17.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:17.053 --rc genhtml_branch_coverage=1 00:26:17.053 --rc genhtml_function_coverage=1 00:26:17.053 --rc genhtml_legend=1 00:26:17.053 --rc geninfo_all_blocks=1 00:26:17.053 --rc geninfo_unexecuted_blocks=1 00:26:17.053 00:26:17.053 ' 00:26:17.053 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:17.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:17.053 --rc genhtml_branch_coverage=1 00:26:17.053 --rc genhtml_function_coverage=1 00:26:17.053 --rc genhtml_legend=1 00:26:17.053 --rc geninfo_all_blocks=1 00:26:17.053 --rc geninfo_unexecuted_blocks=1 00:26:17.053 00:26:17.053 ' 00:26:17.053 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:17.053 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:26:17.053 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:17.053 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:17.053 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:17.053 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:17.053 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:17.053 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:17.053 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:17.053 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:17.053 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:17.053 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:17.053 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:17.053 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:17.053 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:17.053 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:17.053 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:17.053 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:17.053 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:17.053 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:26:17.053 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:17.053 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:17.053 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:17.053 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:17.053 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:17.053 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:17.053 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:26:17.053 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:17.053 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:26:17.053 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:17.053 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:17.053 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:17.053 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:17.053 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:17.053 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:17.053 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:17.053 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:17.053 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:17.053 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:17.053 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:17.053 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:17.053 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:26:17.053 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:26:17.053 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:26:17.053 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:17.053 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # prepare_net_devs 00:26:17.053 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@436 -- # local -g is_hw=no 00:26:17.053 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # remove_spdk_ns 00:26:17.053 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:17.053 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:17.053 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:17.053 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:26:17.053 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:26:17.053 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@309 -- # xtrace_disable 00:26:17.053 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:25.190 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:25.190 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # pci_devs=() 00:26:25.190 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:25.190 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:25.190 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:25.190 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:25.190 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:25.190 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # net_devs=() 00:26:25.190 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:25.190 11:06:43 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # e810=() 00:26:25.190 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # local -ga e810 00:26:25.190 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # x722=() 00:26:25.190 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # local -ga x722 00:26:25.190 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # mlx=() 00:26:25.190 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # local -ga mlx 00:26:25.190 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:25.190 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:25.190 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:25.190 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:25.190 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:25.190 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:25.190 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:25.190 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:25.190 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:25.190 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:25.190 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:25.190 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:25.190 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:25.191 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:25.191 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:25.191 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:25.191 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:25.191 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:25.191 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:25.191 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:25.191 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:25.191 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:25.191 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:26:25.191 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:25.191 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:25.191 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:25.191 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:25.191 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:25.191 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:25.191 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:25.191 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:25.191 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:25.191 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:25.191 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:25.191 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:25.191 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:25.191 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:25.191 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:25.191 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:25.191 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:25.191 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:25.191 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:25.191 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:25.191 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:25.191 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:25.191 Found net devices under 0000:31:00.0: cvl_0_0 00:26:25.191 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:25.191 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:25.191 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:25.191 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:25.191 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:25.191 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:25.191 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection 
-- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:25.191 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:25.191 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:25.191 Found net devices under 0000:31:00.1: cvl_0_1 00:26:25.191 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:25.191 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:26:25.191 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # is_hw=yes 00:26:25.191 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:26:25.191 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:26:25.191 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:26:25.191 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:25.191 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:25.191 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:25.191 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:25.191 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:25.191 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:25.191 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:25.191 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:25.191 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:25.191 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:25.191 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:25.191 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:25.191 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:25.191 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:25.191 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:25.191 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:25.191 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:25.191 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:25.191 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set 
cvl_0_0 up 00:26:25.191 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:25.191 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:25.191 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:25.191 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:25.191 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:25.191 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.695 ms 00:26:25.191 00:26:25.191 --- 10.0.0.2 ping statistics --- 00:26:25.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:25.191 rtt min/avg/max/mdev = 0.695/0.695/0.695/0.000 ms 00:26:25.191 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:25.191 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:25.191 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.271 ms 00:26:25.191 00:26:25.191 --- 10.0.0.1 ping statistics --- 00:26:25.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:25.191 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:26:25.191 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:25.191 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@448 -- # return 0 00:26:25.191 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:26:25.191 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:25.191 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:26:25.191 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:26:25.191 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:25.191 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:26:25.191 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:26:25.191 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:26:25.191 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:26:25.191 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:25.191 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:25.191 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # nvmfpid=1942217 00:26:25.191 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # waitforlisten 1942217 00:26:25.191 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:25.191 11:06:44 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@831 -- # '[' -z 1942217 ']' 00:26:25.191 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:25.191 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:25.192 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:25.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:25.192 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:25.192 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:25.192 [2024-10-09 11:06:44.409250] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:26:25.192 [2024-10-09 11:06:44.409314] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:25.192 [2024-10-09 11:06:44.550747] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:25.192 [2024-10-09 11:06:44.582132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:25.192 [2024-10-09 11:06:44.602255] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:25.192 [2024-10-09 11:06:44.602288] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:25.192 [2024-10-09 11:06:44.602297] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:25.192 [2024-10-09 11:06:44.602303] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:25.192 [2024-10-09 11:06:44.602309] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
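Up to this point nvmf/common.sh has walked the detected PCI devices (Intel 0x8086/0x159b, the e810 "ice" ports at 0000:31:00.0 and 0000:31:00.1), mapped each one to its net device via /sys/bus/pci/devices/$pci/net, and collected cvl_0_0 and cvl_0_1. nvmf_tcp_init then splits those two ports across a network namespace so that initiator and target traffic crosses a real link rather than loopback. A minimal sketch of that setup, condensed from the commands logged above (interface names, addresses, and the nvmf_tgt invocation are exactly this run's; the iptables comment tag is omitted for brevity):

# target port goes into a private namespace; initiator port stays in the root namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator IP
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# open the NVMe/TCP listener port toward the initiator, then verify both directions
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
# start the target inside the namespace: instance 0, tracepoint mask 0xFFFF, cores 0-3
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF

The EAL banner above and the four reactor_run notices below confirm the target came up on cores 0-3 (-m 0xF) and is listening on /var/tmp/spdk.sock.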
00:26:25.192 [2024-10-09 11:06:44.603856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:25.192 [2024-10-09 11:06:44.603971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:25.192 [2024-10-09 11:06:44.604125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:25.192 [2024-10-09 11:06:44.604125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:25.452 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:25.452 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # return 0 00:26:25.452 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:26:25.452 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:25.452 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:25.452 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:25.452 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:25.452 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.452 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:25.452 [2024-10-09 11:06:45.269884] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:25.452 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.452 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:26:25.452 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:25.452 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:25.452 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.452 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:25.452 Malloc1 00:26:25.452 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.452 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:26:25.452 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.452 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:25.452 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.452 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:25.452 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.452 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
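With the reactors running and the transport created (nvmf_create_transport -t tcp -o -u 8192, as issued at multiconnection.sh@19 above; "-t tcp -o" is the harness's NVMF_TRANSPORT_OPTS default and -u 8192 sets the transport's io_unit_size), the script stamps out one malloc-backed subsystem per iteration of seq 1 11. The listener registration for cnode1 follows immediately below, and the same four-step unit then repeats for Malloc2/cnode2 through Malloc11/cnode11. The per-iteration pattern, where rpc_cmd is the harness wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock:

for i in $(seq 1 11); do
    rpc_cmd bdev_malloc_create 64 512 -b Malloc$i    # 64 MB RAM-backed bdev, 512-byte blocks
    # -a: allow any host to connect, -s: serial number reported to the host
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
done

The serial number SPDK$i is what the host side later greps for in lsblk output to confirm that each namespace actually attached.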
00:26:25.452 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.452 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:25.452 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.452 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:25.452 [2024-10-09 11:06:45.346460] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:25.452 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.452 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:25.452 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:26:25.452 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.452 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:25.452 Malloc2 00:26:25.452 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.452 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:26:25.452 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.452 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:25.452 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.452 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:26:25.452 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.452 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:25.452 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.452 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:26:25.452 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.452 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:25.452 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.452 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:25.452 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:26:25.452 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.452 11:06:45 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:25.452 Malloc3 00:26:25.452 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.452 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:26:25.452 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.452 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:25.452 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.452 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:26:25.452 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.452 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:25.452 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.452 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:26:25.452 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.452 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:25.712 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.712 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:25.712 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:26:25.712 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.712 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:25.712 Malloc4 00:26:25.712 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.712 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:26:25.712 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.712 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:25.712 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.712 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:26:25.712 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.712 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:25.712 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.712 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:26:25.713 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.713 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:25.713 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.713 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:25.713 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:26:25.713 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.713 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:25.713 Malloc5 00:26:25.713 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.713 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:26:25.713 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.713 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:25.713 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.713 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:26:25.713 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.713 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:25.713 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.713 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:26:25.713 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.713 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:25.713 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.713 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:25.713 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:26:25.713 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.713 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:25.713 Malloc6 00:26:25.713 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:26:25.713 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:26:25.713 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.713 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:25.713 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.713 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:26:25.713 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.713 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:25.713 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.713 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:26:25.713 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.713 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:25.713 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.713 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:25.713 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:26:25.713 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.713 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:25.713 Malloc7 00:26:25.713 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.713 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:26:25.713 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.713 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:25.713 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.713 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:26:25.713 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.713 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:25.713 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.713 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 
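Subsystems cnode8 through cnode11 are created the same way below. The script then switches to the host side of the link and, again in seq 1 11 order, connects the kernel NVMe/TCP initiator to each subsystem and waits for its namespace to surface. Condensed from the loop further down (the hostnqn/hostid are this host's identity; the until-loop is a simplified stand-in for waitforserial in autotest_common.sh, which as the xtrace shows rechecks lsblk at 2-second intervals, up to 16 tries):

for i in $(seq 1 11); do
    nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
                 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 \
                 -t tcp -n nqn.2016-06.io.spdk:cnode$i -a 10.0.0.2 -s 4420
    # block until a device whose serial is SPDK$i shows up
    until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDK$i)" -ge 1 ]; do sleep 2; done
done

In the log each connect lands roughly two seconds after the previous one, i.e. one sleep-and-recheck per device. Once all eleven are attached, scripts/fio-wrapper (invoked below with -p nvmf -i 262144 -d 64 -t read -r 10) runs a 10-second libaio read at bs=262144 and iodepth=64 against /dev/nvme0n1 through /dev/nvme10n1. The per-job results that follow are internally consistent: job0, for example, reports 1309 reads x 256 KiB = 327 MiB over 10174 ms = 32.2 MiB/s, matching its BW line.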
00:26:25.713 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.713 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:25.713 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.713 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:25.713 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:26:25.713 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.713 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:25.713 Malloc8 00:26:25.713 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.713 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:26:25.713 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.713 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:25.973 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.973 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:26:25.973 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.973 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:25.973 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.973 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:26:25.973 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.973 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:25.973 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.973 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:25.973 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:26:25.973 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.973 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:25.973 Malloc9 00:26:25.973 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.973 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:26:25.973 11:06:45 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.973 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:25.973 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.973 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:26:25.973 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.973 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:25.973 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.973 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:26:25.973 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.973 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:25.973 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.973 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:25.973 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:26:25.973 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.973 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:25.973 Malloc10 00:26:25.973 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.973 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:26:25.973 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.973 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:25.973 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.973 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:26:25.973 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.973 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:25.973 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.973 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:26:25.973 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.973 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:26:25.973 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.973 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:25.973 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:26:25.973 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.973 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:25.973 Malloc11 00:26:25.973 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.973 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:26:25.973 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.973 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:25.973 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.973 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:26:25.973 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.973 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:25.973 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.973 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:26:25.973 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.973 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:25.973 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.973 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:26:25.973 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:25.973 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:26:27.882 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:26:27.882 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:27.882 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:27.882 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:27.882 11:06:47 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:29.793 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:29.793 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:29.793 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK1 00:26:29.793 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:29.793 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:29.793 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:29.793 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:29.793 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:26:31.175 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:26:31.175 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:31.175 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:31.175 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:31.175 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:33.088 11:06:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:33.088 11:06:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:33.088 11:06:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK2 00:26:33.088 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:33.088 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:33.088 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:33.088 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:33.088 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:26:35.000 11:06:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:26:35.000 11:06:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:35.000 11:06:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 
nvme_devices=0 00:26:35.000 11:06:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:35.000 11:06:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:36.913 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:36.913 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:36.913 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK3 00:26:36.913 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:36.913 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:36.913 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:36.913 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:36.913 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:26:38.825 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:26:38.825 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:38.825 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:38.825 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:38.825 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:40.735 11:07:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:40.735 11:07:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:40.735 11:07:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK4 00:26:40.735 11:07:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:40.735 11:07:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:40.735 11:07:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:40.735 11:07:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:40.735 11:07:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:26:42.117 11:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:26:42.117 11:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # 
local i=0 00:26:42.117 11:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:42.117 11:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:42.117 11:07:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:44.657 11:07:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:44.657 11:07:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:44.657 11:07:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK5 00:26:44.657 11:07:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:44.657 11:07:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:44.657 11:07:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:44.657 11:07:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:44.657 11:07:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:26:46.096 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:26:46.096 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:46.096 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:46.096 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:46.096 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:48.055 11:07:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:48.055 11:07:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:48.055 11:07:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK6 00:26:48.055 11:07:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:48.055 11:07:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:48.055 11:07:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:48.055 11:07:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:48.055 11:07:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:26:49.967 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@30 -- # waitforserial SPDK7 00:26:49.967 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:49.967 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:49.967 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:49.967 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:51.879 11:07:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:51.879 11:07:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:51.879 11:07:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK7 00:26:51.879 11:07:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:51.879 11:07:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:51.879 11:07:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:51.879 11:07:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:51.879 11:07:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:26:53.262 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:26:53.262 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:53.262 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:53.262 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:53.262 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:55.174 11:07:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:55.174 11:07:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:55.434 11:07:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK8 00:26:55.434 11:07:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:55.434 11:07:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:55.434 11:07:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:55.434 11:07:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:55.434 11:07:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 
--hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:26:57.344 11:07:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:26:57.344 11:07:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:57.344 11:07:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:57.344 11:07:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:57.344 11:07:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:59.255 11:07:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:59.255 11:07:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:59.255 11:07:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK9 00:26:59.255 11:07:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:59.255 11:07:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:59.255 11:07:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:59.255 11:07:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:59.255 11:07:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:27:01.168 11:07:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:27:01.168 11:07:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:27:01.168 11:07:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:27:01.168 11:07:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:27:01.168 11:07:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:27:03.081 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:27:03.081 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:27:03.081 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK10 00:27:03.081 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:27:03.081 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:27:03.081 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:27:03.081 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:03.081 11:07:22 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:27:04.994 11:07:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:27:04.994 11:07:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:27:04.994 11:07:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:27:04.994 11:07:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:27:04.994 11:07:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:27:06.905 11:07:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:27:06.905 11:07:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:27:06.905 11:07:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK11 00:27:06.905 11:07:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:27:06.905 11:07:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:27:06.905 11:07:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:27:06.905 11:07:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:27:06.905 [global] 00:27:06.905 thread=1 00:27:06.905 invalidate=1 00:27:06.905 rw=read 00:27:06.905 time_based=1 00:27:06.905 runtime=10 00:27:06.905 ioengine=libaio 00:27:06.905 direct=1 00:27:06.905 bs=262144 00:27:06.905 iodepth=64 00:27:06.905 norandommap=1 00:27:06.905 numjobs=1 00:27:06.905 00:27:06.905 [job0] 00:27:06.905 filename=/dev/nvme0n1 00:27:06.905 [job1] 00:27:06.905 filename=/dev/nvme10n1 00:27:06.905 [job2] 00:27:06.905 filename=/dev/nvme1n1 00:27:06.905 [job3] 00:27:06.905 filename=/dev/nvme2n1 00:27:06.905 [job4] 00:27:06.905 filename=/dev/nvme3n1 00:27:06.905 [job5] 00:27:06.905 filename=/dev/nvme4n1 00:27:06.905 [job6] 00:27:06.905 filename=/dev/nvme5n1 00:27:06.905 [job7] 00:27:06.905 filename=/dev/nvme6n1 00:27:06.905 [job8] 00:27:06.905 filename=/dev/nvme7n1 00:27:06.905 [job9] 00:27:06.905 filename=/dev/nvme8n1 00:27:06.905 [job10] 00:27:06.905 filename=/dev/nvme9n1 00:27:06.905 Could not set queue depth (nvme0n1) 00:27:06.905 Could not set queue depth (nvme10n1) 00:27:06.905 Could not set queue depth (nvme1n1) 00:27:06.906 Could not set queue depth (nvme2n1) 00:27:06.906 Could not set queue depth (nvme3n1) 00:27:06.906 Could not set queue depth (nvme4n1) 00:27:06.906 Could not set queue depth (nvme5n1) 00:27:06.906 Could not set queue depth (nvme6n1) 00:27:06.906 Could not set queue depth (nvme7n1) 00:27:06.906 Could not set queue depth (nvme8n1) 00:27:06.906 Could not set queue depth (nvme9n1) 00:27:07.166 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:07.166 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 
256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:07.166 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:07.166 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:07.166 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:07.166 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:07.166 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:07.166 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:07.166 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:07.166 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:07.166 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:07.166 fio-3.35 00:27:07.166 Starting 11 threads 00:27:19.392 00:27:19.392 job0: (groupid=0, jobs=1): err= 0: pid=1950666: Wed Oct 9 11:07:37 2024 00:27:19.392 read: IOPS=128, BW=32.2MiB/s (33.7MB/s)(327MiB/10174msec) 00:27:19.392 slat (usec): min=11, max=274563, avg=6558.23, stdev=26753.98 00:27:19.392 clat (msec): min=15, max=1311, avg=490.13, stdev=370.33 00:27:19.392 lat (msec): min=16, max=1345, avg=496.69, stdev=374.82 00:27:19.392 clat percentiles (msec): 00:27:19.392 | 1.00th=[ 55], 5.00th=[ 72], 10.00th=[ 80], 20.00th=[ 124], 00:27:19.392 | 30.00th=[ 150], 40.00th=[ 207], 50.00th=[ 451], 60.00th=[ 609], 00:27:19.392 | 70.00th=[ 802], 80.00th=[ 894], 90.00th=[ 1020], 95.00th=[ 1083], 00:27:19.392 | 99.00th=[ 1234], 99.50th=[ 1284], 99.90th=[ 1318], 99.95th=[ 1318], 00:27:19.392 | 99.99th=[ 1318] 00:27:19.392 bw ( KiB/s): min=11264, max=135680, per=5.12%, avg=31872.00, stdev=34394.93, samples=20 00:27:19.392 iops : min= 44, max= 530, avg=124.50, stdev=134.36, samples=20 00:27:19.392 lat (msec) : 20=0.38%, 50=0.53%, 100=14.21%, 250=27.81%, 500=11.46% 00:27:19.392 lat (msec) : 750=13.67%, 1000=20.40%, 2000=11.54% 00:27:19.392 cpu : usr=0.01%, sys=0.51%, ctx=216, majf=0, minf=4097 00:27:19.392 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.4%, >=64=95.2% 00:27:19.392 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:19.392 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:19.392 issued rwts: total=1309,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:19.392 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:19.392 job1: (groupid=0, jobs=1): err= 0: pid=1950686: Wed Oct 9 11:07:37 2024 00:27:19.392 read: IOPS=128, BW=32.1MiB/s (33.6MB/s)(323MiB/10076msec) 00:27:19.392 slat (usec): min=13, max=555909, avg=5973.65, stdev=31998.36 00:27:19.392 clat (msec): min=2, max=1311, avg=492.28, stdev=354.34 00:27:19.392 lat (msec): min=2, max=1418, avg=498.25, stdev=358.47 00:27:19.392 clat percentiles (msec): 00:27:19.392 | 1.00th=[ 4], 5.00th=[ 22], 10.00th=[ 120], 20.00th=[ 180], 00:27:19.392 | 30.00th=[ 226], 40.00th=[ 338], 50.00th=[ 401], 60.00th=[ 472], 00:27:19.392 | 70.00th=[ 651], 80.00th=[ 877], 90.00th=[ 1083], 95.00th=[ 1167], 00:27:19.392 | 99.00th=[ 1234], 99.50th=[ 1234], 99.90th=[ 1318], 99.95th=[ 1318], 00:27:19.392 | 99.99th=[ 1318] 00:27:19.392 bw ( 
KiB/s): min= 6656, max=71680, per=5.05%, avg=31462.40, stdev=19742.14, samples=20 00:27:19.392 iops : min= 26, max= 280, avg=122.90, stdev=77.12, samples=20 00:27:19.392 lat (msec) : 4=1.55%, 10=0.77%, 20=1.24%, 50=2.47%, 100=1.93% 00:27:19.392 lat (msec) : 250=26.91%, 500=28.46%, 750=9.36%, 1000=13.92%, 2000=13.38% 00:27:19.392 cpu : usr=0.03%, sys=0.58%, ctx=316, majf=0, minf=4097 00:27:19.392 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.5%, >=64=95.1% 00:27:19.392 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:19.392 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:19.392 issued rwts: total=1293,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:19.392 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:19.392 job2: (groupid=0, jobs=1): err= 0: pid=1950707: Wed Oct 9 11:07:37 2024 00:27:19.392 read: IOPS=242, BW=60.5MiB/s (63.5MB/s)(609MiB/10065msec) 00:27:19.392 slat (usec): min=11, max=243119, avg=3128.47, stdev=15483.21 00:27:19.392 clat (msec): min=2, max=1126, avg=260.84, stdev=306.25 00:27:19.392 lat (msec): min=2, max=1126, avg=263.97, stdev=309.99 00:27:19.392 clat percentiles (msec): 00:27:19.392 | 1.00th=[ 9], 5.00th=[ 21], 10.00th=[ 27], 20.00th=[ 46], 00:27:19.392 | 30.00th=[ 61], 40.00th=[ 87], 50.00th=[ 93], 60.00th=[ 134], 00:27:19.393 | 70.00th=[ 292], 80.00th=[ 510], 90.00th=[ 835], 95.00th=[ 927], 00:27:19.393 | 99.00th=[ 1053], 99.50th=[ 1083], 99.90th=[ 1133], 99.95th=[ 1133], 00:27:19.393 | 99.99th=[ 1133] 00:27:19.393 bw ( KiB/s): min=14848, max=224768, per=9.75%, avg=60774.40, stdev=65482.31, samples=20 00:27:19.393 iops : min= 58, max= 878, avg=237.40, stdev=255.79, samples=20 00:27:19.393 lat (msec) : 4=0.12%, 10=1.48%, 20=3.32%, 50=19.33%, 100=30.41% 00:27:19.393 lat (msec) : 250=14.12%, 500=11.16%, 750=5.66%, 1000=12.02%, 2000=2.38% 00:27:19.393 cpu : usr=0.09%, sys=1.01%, ctx=1048, majf=0, minf=4097 00:27:19.393 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.3%, >=64=97.4% 00:27:19.393 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:19.393 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:19.393 issued rwts: total=2437,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:19.393 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:19.393 job3: (groupid=0, jobs=1): err= 0: pid=1950719: Wed Oct 9 11:07:37 2024 00:27:19.393 read: IOPS=170, BW=42.6MiB/s (44.7MB/s)(433MiB/10174msec) 00:27:19.393 slat (usec): min=13, max=1167.6k, avg=4426.27, stdev=33655.11 00:27:19.393 clat (usec): min=1879, max=2078.8k, avg=370807.44, stdev=358672.28 00:27:19.393 lat (usec): min=1930, max=2078.8k, avg=375233.71, stdev=362150.21 00:27:19.393 clat percentiles (msec): 00:27:19.393 | 1.00th=[ 4], 5.00th=[ 22], 10.00th=[ 125], 20.00th=[ 159], 00:27:19.393 | 30.00th=[ 182], 40.00th=[ 205], 50.00th=[ 224], 60.00th=[ 255], 00:27:19.393 | 70.00th=[ 405], 80.00th=[ 535], 90.00th=[ 827], 95.00th=[ 1318], 00:27:19.393 | 99.00th=[ 1737], 99.50th=[ 1770], 99.90th=[ 1770], 99.95th=[ 2072], 00:27:19.393 | 99.99th=[ 2072] 00:27:19.393 bw ( KiB/s): min= 3072, max=106496, per=7.22%, avg=44975.16, stdev=28468.24, samples=19 00:27:19.393 iops : min= 12, max= 416, avg=175.68, stdev=111.20, samples=19 00:27:19.393 lat (msec) : 2=0.06%, 4=2.02%, 10=0.81%, 20=1.27%, 50=2.31% 00:27:19.393 lat (msec) : 100=1.67%, 250=50.20%, 500=20.08%, 750=10.04%, 1000=4.21% 00:27:19.393 lat (msec) : 2000=7.27%, >=2000=0.06% 00:27:19.393 cpu : usr=0.07%, sys=0.67%, ctx=401, 
majf=0, minf=4097 00:27:19.393 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=0.9%, 32=1.8%, >=64=96.4% 00:27:19.393 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:19.393 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:19.393 issued rwts: total=1733,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:19.393 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:19.393 job4: (groupid=0, jobs=1): err= 0: pid=1950727: Wed Oct 9 11:07:37 2024 00:27:19.393 read: IOPS=170, BW=42.6MiB/s (44.7MB/s)(435MiB/10209msec) 00:27:19.393 slat (usec): min=10, max=643914, avg=4644.45, stdev=26881.58 00:27:19.393 clat (msec): min=15, max=1449, avg=370.14, stdev=319.84 00:27:19.393 lat (msec): min=17, max=1449, avg=374.79, stdev=323.76 00:27:19.393 clat percentiles (msec): 00:27:19.393 | 1.00th=[ 51], 5.00th=[ 80], 10.00th=[ 95], 20.00th=[ 125], 00:27:19.393 | 30.00th=[ 157], 40.00th=[ 199], 50.00th=[ 232], 60.00th=[ 321], 00:27:19.393 | 70.00th=[ 426], 80.00th=[ 527], 90.00th=[ 961], 95.00th=[ 1133], 00:27:19.393 | 99.00th=[ 1234], 99.50th=[ 1284], 99.90th=[ 1452], 99.95th=[ 1452], 00:27:19.393 | 99.99th=[ 1452] 00:27:19.393 bw ( KiB/s): min= 8192, max=136704, per=6.89%, avg=42931.20, stdev=36075.32, samples=20 00:27:19.393 iops : min= 32, max= 534, avg=167.70, stdev=140.92, samples=20 00:27:19.393 lat (msec) : 20=0.29%, 50=0.57%, 100=9.94%, 250=41.64%, 500=24.76% 00:27:19.393 lat (msec) : 750=8.16%, 1000=6.32%, 2000=8.33% 00:27:19.393 cpu : usr=0.01%, sys=0.67%, ctx=320, majf=0, minf=4097 00:27:19.393 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=0.9%, 32=1.8%, >=64=96.4% 00:27:19.393 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:19.393 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:19.393 issued rwts: total=1741,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:19.393 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:19.393 job5: (groupid=0, jobs=1): err= 0: pid=1950759: Wed Oct 9 11:07:37 2024 00:27:19.393 read: IOPS=310, BW=77.5MiB/s (81.3MB/s)(779MiB/10048msec) 00:27:19.393 slat (usec): min=10, max=854857, avg=2732.21, stdev=20665.70 00:27:19.393 clat (usec): min=1313, max=1940.6k, avg=203427.23, stdev=297310.51 00:27:19.393 lat (usec): min=1362, max=1940.6k, avg=206159.44, stdev=300255.47 00:27:19.393 clat percentiles (msec): 00:27:19.393 | 1.00th=[ 6], 5.00th=[ 21], 10.00th=[ 45], 20.00th=[ 66], 00:27:19.393 | 30.00th=[ 68], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 129], 00:27:19.393 | 70.00th=[ 205], 80.00th=[ 268], 90.00th=[ 380], 95.00th=[ 902], 00:27:19.393 | 99.00th=[ 1754], 99.50th=[ 1754], 99.90th=[ 1770], 99.95th=[ 1770], 00:27:19.393 | 99.99th=[ 1938] 00:27:19.393 bw ( KiB/s): min= 2048, max=260096, per=12.54%, avg=78131.20, stdev=79970.30, samples=20 00:27:19.393 iops : min= 8, max= 1016, avg=305.20, stdev=312.38, samples=20 00:27:19.393 lat (msec) : 2=0.51%, 4=0.39%, 10=1.54%, 20=1.99%, 50=6.81% 00:27:19.393 lat (msec) : 100=46.71%, 250=18.94%, 500=15.73%, 750=0.64%, 1000=2.79% 00:27:19.393 lat (msec) : 2000=3.95% 00:27:19.393 cpu : usr=0.11%, sys=1.04%, ctx=780, majf=0, minf=3534 00:27:19.393 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:27:19.393 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:19.393 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:19.393 issued rwts: total=3115,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:19.393 latency : target=0, window=0, 
percentile=100.00%, depth=64 00:27:19.393 job6: (groupid=0, jobs=1): err= 0: pid=1950771: Wed Oct 9 11:07:37 2024 00:27:19.393 read: IOPS=186, BW=46.7MiB/s (49.0MB/s)(475MiB/10170msec) 00:27:19.393 slat (usec): min=10, max=764444, avg=4180.85, stdev=31986.71 00:27:19.393 clat (msec): min=3, max=1647, avg=337.99, stdev=354.83 00:27:19.393 lat (msec): min=3, max=1647, avg=342.17, stdev=358.29 00:27:19.393 clat percentiles (msec): 00:27:19.393 | 1.00th=[ 19], 5.00th=[ 21], 10.00th=[ 63], 20.00th=[ 95], 00:27:19.393 | 30.00th=[ 113], 40.00th=[ 136], 50.00th=[ 182], 60.00th=[ 232], 00:27:19.393 | 70.00th=[ 380], 80.00th=[ 542], 90.00th=[ 885], 95.00th=[ 1099], 00:27:19.393 | 99.00th=[ 1435], 99.50th=[ 1452], 99.90th=[ 1485], 99.95th=[ 1653], 00:27:19.393 | 99.99th=[ 1653] 00:27:19.393 bw ( KiB/s): min= 2560, max=134656, per=7.54%, avg=47005.55, stdev=39185.94, samples=20 00:27:19.393 iops : min= 10, max= 526, avg=183.60, stdev=153.07, samples=20 00:27:19.393 lat (msec) : 4=0.16%, 10=0.21%, 20=2.21%, 50=6.26%, 100=12.63% 00:27:19.393 lat (msec) : 250=40.58%, 500=15.89%, 750=5.26%, 1000=9.26%, 2000=7.53% 00:27:19.393 cpu : usr=0.07%, sys=0.69%, ctx=451, majf=0, minf=4097 00:27:19.393 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.7%, >=64=96.7% 00:27:19.393 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:19.393 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:19.393 issued rwts: total=1900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:19.393 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:19.393 job7: (groupid=0, jobs=1): err= 0: pid=1950780: Wed Oct 9 11:07:37 2024 00:27:19.393 read: IOPS=389, BW=97.3MiB/s (102MB/s)(990MiB/10172msec) 00:27:19.393 slat (usec): min=8, max=832727, avg=1804.81, stdev=19389.26 00:27:19.393 clat (msec): min=2, max=1645, avg=162.43, stdev=292.63 00:27:19.393 lat (msec): min=2, max=1716, avg=164.23, stdev=295.78 00:27:19.393 clat percentiles (msec): 00:27:19.393 | 1.00th=[ 7], 5.00th=[ 12], 10.00th=[ 23], 20.00th=[ 31], 00:27:19.393 | 30.00th=[ 35], 40.00th=[ 37], 50.00th=[ 40], 60.00th=[ 42], 00:27:19.393 | 70.00th=[ 44], 80.00th=[ 58], 90.00th=[ 634], 95.00th=[ 818], 00:27:19.393 | 99.00th=[ 1485], 99.50th=[ 1485], 99.90th=[ 1485], 99.95th=[ 1485], 00:27:19.393 | 99.99th=[ 1653] 00:27:19.393 bw ( KiB/s): min= 2560, max=439808, per=16.00%, avg=99712.00, stdev=145168.30, samples=20 00:27:19.393 iops : min= 10, max= 1718, avg=389.50, stdev=567.06, samples=20 00:27:19.393 lat (msec) : 4=0.13%, 10=4.29%, 20=5.10%, 50=68.40%, 100=2.85% 00:27:19.393 lat (msec) : 250=0.23%, 500=4.60%, 750=8.69%, 1000=3.06%, 2000=2.65% 00:27:19.393 cpu : usr=0.23%, sys=1.26%, ctx=972, majf=0, minf=4097 00:27:19.393 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:27:19.393 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:19.393 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:19.393 issued rwts: total=3959,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:19.393 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:19.393 job8: (groupid=0, jobs=1): err= 0: pid=1950811: Wed Oct 9 11:07:37 2024 00:27:19.393 read: IOPS=204, BW=51.2MiB/s (53.7MB/s)(522MiB/10195msec) 00:27:19.393 slat (usec): min=7, max=951705, avg=3960.27, stdev=30422.48 00:27:19.393 clat (msec): min=19, max=1911, avg=308.09, stdev=347.61 00:27:19.393 lat (msec): min=19, max=1992, avg=312.05, stdev=350.84 00:27:19.393 clat percentiles (msec): 00:27:19.393 | 
1.00th=[ 27], 5.00th=[ 33], 10.00th=[ 53], 20.00th=[ 96], 00:27:19.393 | 30.00th=[ 130], 40.00th=[ 190], 50.00th=[ 224], 60.00th=[ 249], 00:27:19.393 | 70.00th=[ 288], 80.00th=[ 372], 90.00th=[ 642], 95.00th=[ 1053], 00:27:19.393 | 99.00th=[ 1703], 99.50th=[ 1905], 99.90th=[ 1905], 99.95th=[ 1905], 00:27:19.393 | 99.99th=[ 1905] 00:27:19.393 bw ( KiB/s): min= 3072, max=189440, per=8.76%, avg=54568.42, stdev=42991.51, samples=19 00:27:19.393 iops : min= 12, max= 740, avg=213.16, stdev=167.94, samples=19 00:27:19.393 lat (msec) : 20=0.34%, 50=9.43%, 100=10.87%, 250=39.44%, 500=26.33% 00:27:19.393 lat (msec) : 750=4.98%, 1000=3.49%, 2000=5.12% 00:27:19.393 cpu : usr=0.04%, sys=0.82%, ctx=386, majf=0, minf=4097 00:27:19.393 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.5%, >=64=97.0% 00:27:19.393 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:19.393 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:19.393 issued rwts: total=2089,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:19.393 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:19.393 job9: (groupid=0, jobs=1): err= 0: pid=1950812: Wed Oct 9 11:07:37 2024 00:27:19.393 read: IOPS=128, BW=32.0MiB/s (33.6MB/s)(323MiB/10079msec) 00:27:19.393 slat (usec): min=13, max=269011, avg=7085.78, stdev=24535.58 00:27:19.393 clat (msec): min=33, max=1219, avg=491.44, stdev=262.46 00:27:19.393 lat (msec): min=33, max=1220, avg=498.53, stdev=267.32 00:27:19.393 clat percentiles (msec): 00:27:19.393 | 1.00th=[ 51], 5.00th=[ 129], 10.00th=[ 144], 20.00th=[ 245], 00:27:19.393 | 30.00th=[ 305], 40.00th=[ 376], 50.00th=[ 456], 60.00th=[ 542], 00:27:19.393 | 70.00th=[ 617], 80.00th=[ 785], 90.00th=[ 894], 95.00th=[ 953], 00:27:19.393 | 99.00th=[ 1003], 99.50th=[ 1020], 99.90th=[ 1217], 99.95th=[ 1217], 00:27:19.393 | 99.99th=[ 1217] 00:27:19.393 bw ( KiB/s): min=12800, max=70656, per=5.05%, avg=31466.05, stdev=15345.61, samples=20 00:27:19.393 iops : min= 50, max= 276, avg=122.90, stdev=59.94, samples=20 00:27:19.393 lat (msec) : 50=1.01%, 100=2.32%, 250=18.03%, 500=34.75%, 750=21.67% 00:27:19.393 lat (msec) : 1000=21.21%, 2000=1.01% 00:27:19.393 cpu : usr=0.06%, sys=0.54%, ctx=254, majf=0, minf=4097 00:27:19.393 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.5%, >=64=95.1% 00:27:19.394 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:19.394 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:19.394 issued rwts: total=1292,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:19.394 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:19.394 job10: (groupid=0, jobs=1): err= 0: pid=1950822: Wed Oct 9 11:07:37 2024 00:27:19.394 read: IOPS=394, BW=98.6MiB/s (103MB/s)(994MiB/10081msec) 00:27:19.394 slat (usec): min=6, max=264288, avg=2341.40, stdev=10281.25 00:27:19.394 clat (usec): min=922, max=923894, avg=159644.79, stdev=160934.48 00:27:19.394 lat (usec): min=970, max=923930, avg=161986.19, stdev=163142.94 00:27:19.394 clat percentiles (msec): 00:27:19.394 | 1.00th=[ 5], 5.00th=[ 26], 10.00th=[ 29], 20.00th=[ 33], 00:27:19.394 | 30.00th=[ 51], 40.00th=[ 84], 50.00th=[ 100], 60.00th=[ 132], 00:27:19.394 | 70.00th=[ 184], 80.00th=[ 249], 90.00th=[ 405], 95.00th=[ 510], 00:27:19.394 | 99.00th=[ 776], 99.50th=[ 818], 99.90th=[ 927], 99.95th=[ 927], 00:27:19.394 | 99.99th=[ 927] 00:27:19.394 bw ( KiB/s): min=20992, max=488960, per=16.08%, avg=100172.80, stdev=107927.96, samples=20 00:27:19.394 iops : min= 82, max= 1910, 
avg=391.30, stdev=421.59, samples=20 00:27:19.394 lat (usec) : 1000=0.03% 00:27:19.394 lat (msec) : 2=0.20%, 4=0.73%, 10=2.09%, 20=0.33%, 50=26.58% 00:27:19.394 lat (msec) : 100=21.35%, 250=28.94%, 500=14.53%, 750=3.77%, 1000=1.46% 00:27:19.394 cpu : usr=0.09%, sys=1.32%, ctx=793, majf=0, minf=4097 00:27:19.394 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:27:19.394 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:19.394 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:19.394 issued rwts: total=3977,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:19.394 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:19.394 00:27:19.394 Run status group 0 (all jobs): 00:27:19.394 READ: bw=608MiB/s (638MB/s), 32.0MiB/s-98.6MiB/s (33.6MB/s-103MB/s), io=6211MiB (6513MB), run=10048-10209msec 00:27:19.394 00:27:19.394 Disk stats (read/write): 00:27:19.394 nvme0n1: ios=2502/0, merge=0/0, ticks=1203623/0, in_queue=1203623, util=96.43% 00:27:19.394 nvme10n1: ios=2389/0, merge=0/0, ticks=1223895/0, in_queue=1223895, util=96.57% 00:27:19.394 nvme1n1: ios=4540/0, merge=0/0, ticks=1225471/0, in_queue=1225471, util=97.02% 00:27:19.394 nvme2n1: ios=3344/0, merge=0/0, ticks=1196505/0, in_queue=1196505, util=97.32% 00:27:19.394 nvme3n1: ios=3390/0, merge=0/0, ticks=1218603/0, in_queue=1218603, util=97.40% 00:27:19.394 nvme4n1: ios=5840/0, merge=0/0, ticks=1220918/0, in_queue=1220918, util=97.83% 00:27:19.394 nvme5n1: ios=3678/0, merge=0/0, ticks=1193195/0, in_queue=1193195, util=98.07% 00:27:19.394 nvme6n1: ios=7807/0, merge=0/0, ticks=1208668/0, in_queue=1208668, util=98.22% 00:27:19.394 nvme7n1: ios=4097/0, merge=0/0, ticks=1235191/0, in_queue=1235191, util=98.75% 00:27:19.394 nvme8n1: ios=2364/0, merge=0/0, ticks=1211655/0, in_queue=1211655, util=98.99% 00:27:19.394 nvme9n1: ios=7751/0, merge=0/0, ticks=1215234/0, in_queue=1215234, util=99.23% 00:27:19.394 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:27:19.394 [global] 00:27:19.394 thread=1 00:27:19.394 invalidate=1 00:27:19.394 rw=randwrite 00:27:19.394 time_based=1 00:27:19.394 runtime=10 00:27:19.394 ioengine=libaio 00:27:19.394 direct=1 00:27:19.394 bs=262144 00:27:19.394 iodepth=64 00:27:19.394 norandommap=1 00:27:19.394 numjobs=1 00:27:19.394 00:27:19.394 [job0] 00:27:19.394 filename=/dev/nvme0n1 00:27:19.394 [job1] 00:27:19.394 filename=/dev/nvme10n1 00:27:19.394 [job2] 00:27:19.394 filename=/dev/nvme1n1 00:27:19.394 [job3] 00:27:19.394 filename=/dev/nvme2n1 00:27:19.394 [job4] 00:27:19.394 filename=/dev/nvme3n1 00:27:19.394 [job5] 00:27:19.394 filename=/dev/nvme4n1 00:27:19.394 [job6] 00:27:19.394 filename=/dev/nvme5n1 00:27:19.394 [job7] 00:27:19.394 filename=/dev/nvme6n1 00:27:19.394 [job8] 00:27:19.394 filename=/dev/nvme7n1 00:27:19.394 [job9] 00:27:19.394 filename=/dev/nvme8n1 00:27:19.394 [job10] 00:27:19.394 filename=/dev/nvme9n1 00:27:19.394 Could not set queue depth (nvme0n1) 00:27:19.394 Could not set queue depth (nvme10n1) 00:27:19.394 Could not set queue depth (nvme1n1) 00:27:19.394 Could not set queue depth (nvme2n1) 00:27:19.394 Could not set queue depth (nvme3n1) 00:27:19.394 Could not set queue depth (nvme4n1) 00:27:19.394 Could not set queue depth (nvme5n1) 00:27:19.394 Could not set queue depth (nvme6n1) 00:27:19.394 Could not set queue depth (nvme7n1) 00:27:19.394 Could not set 
queue depth (nvme8n1) 00:27:19.394 Could not set queue depth (nvme9n1) 00:27:19.394 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:19.394 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:19.394 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:19.394 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:19.394 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:19.394 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:19.394 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:19.394 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:19.394 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:19.394 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:19.394 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:19.394 fio-3.35 00:27:19.394 Starting 11 threads 00:27:29.399 00:27:29.399 job0: (groupid=0, jobs=1): err= 0: pid=1952583: Wed Oct 9 11:07:49 2024 00:27:29.399 write: IOPS=341, BW=85.3MiB/s (89.4MB/s)(863MiB/10118msec); 0 zone resets 00:27:29.399 slat (usec): min=26, max=73653, avg=2896.18, stdev=5478.35 00:27:29.399 clat (msec): min=16, max=281, avg=184.68, stdev=47.84 00:27:29.399 lat (msec): min=16, max=282, avg=187.57, stdev=48.30 00:27:29.399 clat percentiles (msec): 00:27:29.399 | 1.00th=[ 63], 5.00th=[ 101], 10.00th=[ 136], 20.00th=[ 153], 00:27:29.399 | 30.00th=[ 163], 40.00th=[ 171], 50.00th=[ 176], 60.00th=[ 186], 00:27:29.399 | 70.00th=[ 207], 80.00th=[ 241], 90.00th=[ 253], 95.00th=[ 259], 00:27:29.399 | 99.00th=[ 264], 99.50th=[ 268], 99.90th=[ 271], 99.95th=[ 284], 00:27:29.399 | 99.99th=[ 284] 00:27:29.399 bw ( KiB/s): min=63488, max=144384, per=8.45%, avg=86732.80, stdev=21077.20, samples=20 00:27:29.399 iops : min= 248, max= 564, avg=338.80, stdev=82.33, samples=20 00:27:29.399 lat (msec) : 20=0.12%, 50=0.58%, 100=4.23%, 250=82.01%, 500=13.07% 00:27:29.399 cpu : usr=0.76%, sys=0.86%, ctx=832, majf=0, minf=1 00:27:29.399 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:27:29.399 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:29.399 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:29.399 issued rwts: total=0,3451,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:29.399 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:29.399 job1: (groupid=0, jobs=1): err= 0: pid=1952615: Wed Oct 9 11:07:49 2024 00:27:29.399 write: IOPS=430, BW=108MiB/s (113MB/s)(1088MiB/10112msec); 0 zone resets 00:27:29.399 slat (usec): min=27, max=20804, avg=2234.55, stdev=4031.74 00:27:29.399 clat (msec): min=7, max=255, avg=146.45, stdev=32.31 00:27:29.399 lat (msec): min=7, max=256, avg=148.69, stdev=32.46 00:27:29.399 clat percentiles (msec): 00:27:29.399 | 1.00th=[ 67], 5.00th=[ 120], 10.00th=[ 128], 20.00th=[ 132], 00:27:29.399 | 30.00th=[ 136], 
40.00th=[ 136], 50.00th=[ 138], 60.00th=[ 140], 00:27:29.399 | 70.00th=[ 142], 80.00th=[ 161], 90.00th=[ 207], 95.00th=[ 224], 00:27:29.399 | 99.00th=[ 241], 99.50th=[ 251], 99.90th=[ 255], 99.95th=[ 257], 00:27:29.399 | 99.99th=[ 257] 00:27:29.399 bw ( KiB/s): min=73728, max=120832, per=10.70%, avg=109772.80, stdev=15444.56, samples=20 00:27:29.399 iops : min= 288, max= 472, avg=428.80, stdev=60.33, samples=20 00:27:29.399 lat (msec) : 10=0.02%, 20=0.09%, 50=0.32%, 100=2.34%, 250=96.60% 00:27:29.399 lat (msec) : 500=0.62% 00:27:29.399 cpu : usr=0.94%, sys=1.30%, ctx=1170, majf=0, minf=1 00:27:29.399 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:27:29.399 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:29.399 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:29.399 issued rwts: total=0,4351,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:29.399 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:29.399 job2: (groupid=0, jobs=1): err= 0: pid=1952640: Wed Oct 9 11:07:49 2024 00:27:29.399 write: IOPS=441, BW=110MiB/s (116MB/s)(1117MiB/10111msec); 0 zone resets 00:27:29.399 slat (usec): min=29, max=74998, avg=2223.78, stdev=4160.37 00:27:29.399 clat (msec): min=19, max=257, avg=142.55, stdev=25.20 00:27:29.399 lat (msec): min=19, max=263, avg=144.77, stdev=25.33 00:27:29.399 clat percentiles (msec): 00:27:29.399 | 1.00th=[ 70], 5.00th=[ 117], 10.00th=[ 128], 20.00th=[ 132], 00:27:29.399 | 30.00th=[ 136], 40.00th=[ 136], 50.00th=[ 138], 60.00th=[ 140], 00:27:29.399 | 70.00th=[ 142], 80.00th=[ 155], 90.00th=[ 169], 95.00th=[ 184], 00:27:29.399 | 99.00th=[ 243], 99.50th=[ 255], 99.90th=[ 257], 99.95th=[ 257], 00:27:29.399 | 99.99th=[ 257] 00:27:29.399 bw ( KiB/s): min=71680, max=128512, per=10.99%, avg=112756.10, stdev=13814.03, samples=20 00:27:29.399 iops : min= 280, max= 502, avg=440.45, stdev=53.96, samples=20 00:27:29.399 lat (msec) : 20=0.09%, 50=0.45%, 100=1.21%, 250=97.52%, 500=0.74% 00:27:29.399 cpu : usr=0.92%, sys=1.66%, ctx=1109, majf=0, minf=1 00:27:29.399 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:27:29.399 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:29.399 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:29.399 issued rwts: total=0,4468,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:29.399 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:29.399 job3: (groupid=0, jobs=1): err= 0: pid=1952653: Wed Oct 9 11:07:49 2024 00:27:29.399 write: IOPS=264, BW=66.2MiB/s (69.4MB/s)(673MiB/10166msec); 0 zone resets 00:27:29.399 slat (usec): min=28, max=50839, avg=3507.71, stdev=6889.36 00:27:29.399 clat (msec): min=17, max=523, avg=238.27, stdev=90.74 00:27:29.399 lat (msec): min=17, max=523, avg=241.77, stdev=91.71 00:27:29.399 clat percentiles (msec): 00:27:29.399 | 1.00th=[ 50], 5.00th=[ 59], 10.00th=[ 90], 20.00th=[ 176], 00:27:29.399 | 30.00th=[ 220], 40.00th=[ 230], 50.00th=[ 236], 60.00th=[ 253], 00:27:29.399 | 70.00th=[ 284], 80.00th=[ 321], 90.00th=[ 355], 95.00th=[ 368], 00:27:29.399 | 99.00th=[ 426], 99.50th=[ 460], 99.90th=[ 502], 99.95th=[ 523], 00:27:29.399 | 99.99th=[ 523] 00:27:29.399 bw ( KiB/s): min=45056, max=186368, per=6.55%, avg=67225.60, stdev=31341.63, samples=20 00:27:29.399 iops : min= 176, max= 728, avg=262.60, stdev=122.43, samples=20 00:27:29.399 lat (msec) : 20=0.15%, 50=2.23%, 100=8.25%, 250=48.07%, 500=41.08% 00:27:29.399 lat (msec) : 750=0.22% 00:27:29.399 cpu : 
usr=0.73%, sys=0.70%, ctx=730, majf=0, minf=1 00:27:29.399 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:27:29.399 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:29.399 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:29.399 issued rwts: total=0,2690,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:29.399 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:29.399 job4: (groupid=0, jobs=1): err= 0: pid=1952659: Wed Oct 9 11:07:49 2024 00:27:29.399 write: IOPS=315, BW=79.0MiB/s (82.8MB/s)(799MiB/10117msec); 0 zone resets 00:27:29.399 slat (usec): min=28, max=109783, avg=2956.58, stdev=6040.46 00:27:29.399 clat (msec): min=5, max=429, avg=199.57, stdev=69.52 00:27:29.400 lat (msec): min=6, max=435, avg=202.52, stdev=70.52 00:27:29.400 clat percentiles (msec): 00:27:29.400 | 1.00th=[ 19], 5.00th=[ 58], 10.00th=[ 125], 20.00th=[ 165], 00:27:29.400 | 30.00th=[ 171], 40.00th=[ 176], 50.00th=[ 186], 60.00th=[ 220], 00:27:29.400 | 70.00th=[ 241], 80.00th=[ 253], 90.00th=[ 264], 95.00th=[ 309], 00:27:29.400 | 99.00th=[ 388], 99.50th=[ 401], 99.90th=[ 418], 99.95th=[ 422], 00:27:29.400 | 99.99th=[ 430] 00:27:29.400 bw ( KiB/s): min=49152, max=137216, per=7.82%, avg=80204.80, stdev=21951.84, samples=20 00:27:29.400 iops : min= 192, max= 536, avg=313.30, stdev=85.75, samples=20 00:27:29.400 lat (msec) : 10=0.22%, 20=0.94%, 50=3.13%, 100=4.22%, 250=69.15% 00:27:29.400 lat (msec) : 500=22.34% 00:27:29.400 cpu : usr=0.76%, sys=1.14%, ctx=1040, majf=0, minf=1 00:27:29.400 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:27:29.400 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:29.400 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:29.400 issued rwts: total=0,3196,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:29.400 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:29.400 job5: (groupid=0, jobs=1): err= 0: pid=1952683: Wed Oct 9 11:07:49 2024 00:27:29.400 write: IOPS=443, BW=111MiB/s (116MB/s)(1121MiB/10112msec); 0 zone resets 00:27:29.400 slat (usec): min=27, max=20975, avg=2228.56, stdev=3912.23 00:27:29.400 clat (msec): min=19, max=258, avg=142.11, stdev=24.48 00:27:29.400 lat (msec): min=19, max=264, avg=144.34, stdev=24.55 00:27:29.400 clat percentiles (msec): 00:27:29.400 | 1.00th=[ 94], 5.00th=[ 110], 10.00th=[ 128], 20.00th=[ 131], 00:27:29.400 | 30.00th=[ 136], 40.00th=[ 136], 50.00th=[ 138], 60.00th=[ 140], 00:27:29.400 | 70.00th=[ 140], 80.00th=[ 150], 90.00th=[ 167], 95.00th=[ 184], 00:27:29.400 | 99.00th=[ 241], 99.50th=[ 251], 99.90th=[ 257], 99.95th=[ 259], 00:27:29.400 | 99.99th=[ 259] 00:27:29.400 bw ( KiB/s): min=72704, max=135168, per=11.03%, avg=113126.40, stdev=14393.13, samples=20 00:27:29.400 iops : min= 284, max= 528, avg=441.90, stdev=56.22, samples=20 00:27:29.400 lat (msec) : 20=0.09%, 50=0.18%, 100=1.67%, 250=97.43%, 500=0.62% 00:27:29.400 cpu : usr=0.94%, sys=1.13%, ctx=1108, majf=0, minf=1 00:27:29.400 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:27:29.400 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:29.400 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:29.400 issued rwts: total=0,4482,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:29.400 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:29.400 job6: (groupid=0, jobs=1): err= 0: pid=1952695: Wed Oct 9 11:07:49 2024 00:27:29.400 write: 
IOPS=257, BW=64.4MiB/s (67.5MB/s)(655MiB/10168msec); 0 zone resets 00:27:29.400 slat (usec): min=32, max=108868, avg=3620.43, stdev=7619.77 00:27:29.400 clat (msec): min=21, max=506, avg=244.00, stdev=90.29 00:27:29.400 lat (msec): min=21, max=506, avg=247.62, stdev=91.51 00:27:29.400 clat percentiles (msec): 00:27:29.400 | 1.00th=[ 39], 5.00th=[ 104], 10.00th=[ 122], 20.00th=[ 150], 00:27:29.400 | 30.00th=[ 218], 40.00th=[ 228], 50.00th=[ 236], 60.00th=[ 249], 00:27:29.400 | 70.00th=[ 284], 80.00th=[ 330], 90.00th=[ 363], 95.00th=[ 388], 00:27:29.400 | 99.00th=[ 481], 99.50th=[ 489], 99.90th=[ 493], 99.95th=[ 506], 00:27:29.400 | 99.99th=[ 506] 00:27:29.400 bw ( KiB/s): min=36864, max=125440, per=6.37%, avg=65382.40, stdev=22354.07, samples=20 00:27:29.400 iops : min= 144, max= 490, avg=255.40, stdev=87.32, samples=20 00:27:29.400 lat (msec) : 50=1.57%, 100=2.98%, 250=56.15%, 500=39.23%, 750=0.08% 00:27:29.400 cpu : usr=0.55%, sys=0.83%, ctx=809, majf=0, minf=1 00:27:29.400 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:27:29.400 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:29.400 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:29.400 issued rwts: total=0,2618,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:29.400 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:29.400 job7: (groupid=0, jobs=1): err= 0: pid=1952706: Wed Oct 9 11:07:49 2024 00:27:29.400 write: IOPS=443, BW=111MiB/s (116MB/s)(1120MiB/10110msec); 0 zone resets 00:27:29.400 slat (usec): min=25, max=24498, avg=2222.41, stdev=3917.25 00:27:29.400 clat (msec): min=8, max=262, avg=142.19, stdev=24.87 00:27:29.400 lat (msec): min=8, max=262, avg=144.41, stdev=24.98 00:27:29.400 clat percentiles (msec): 00:27:29.400 | 1.00th=[ 95], 5.00th=[ 111], 10.00th=[ 128], 20.00th=[ 131], 00:27:29.400 | 30.00th=[ 136], 40.00th=[ 136], 50.00th=[ 138], 60.00th=[ 140], 00:27:29.400 | 70.00th=[ 140], 80.00th=[ 150], 90.00th=[ 169], 95.00th=[ 182], 00:27:29.400 | 99.00th=[ 243], 99.50th=[ 245], 99.90th=[ 259], 99.95th=[ 259], 00:27:29.400 | 99.99th=[ 264] 00:27:29.400 bw ( KiB/s): min=76288, max=135680, per=11.02%, avg=113049.60, stdev=13836.07, samples=20 00:27:29.400 iops : min= 298, max= 530, avg=441.60, stdev=54.05, samples=20 00:27:29.400 lat (msec) : 10=0.02%, 20=0.22%, 50=0.18%, 100=1.47%, 250=97.81% 00:27:29.400 lat (msec) : 500=0.29% 00:27:29.400 cpu : usr=0.78%, sys=1.33%, ctx=1120, majf=0, minf=1 00:27:29.400 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:27:29.400 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:29.400 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:29.400 issued rwts: total=0,4479,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:29.400 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:29.400 job8: (groupid=0, jobs=1): err= 0: pid=1952723: Wed Oct 9 11:07:49 2024 00:27:29.400 write: IOPS=475, BW=119MiB/s (125MB/s)(1210MiB/10174msec); 0 zone resets 00:27:29.400 slat (usec): min=14, max=179335, avg=1930.35, stdev=6359.53 00:27:29.400 clat (msec): min=2, max=498, avg=132.52, stdev=111.97 00:27:29.400 lat (msec): min=2, max=498, avg=134.45, stdev=113.48 00:27:29.400 clat percentiles (msec): 00:27:29.400 | 1.00th=[ 7], 5.00th=[ 20], 10.00th=[ 37], 20.00th=[ 61], 00:27:29.400 | 30.00th=[ 70], 40.00th=[ 71], 50.00th=[ 74], 60.00th=[ 81], 00:27:29.400 | 70.00th=[ 138], 80.00th=[ 234], 90.00th=[ 338], 95.00th=[ 368], 00:27:29.400 | 
99.00th=[ 468], 99.50th=[ 477], 99.90th=[ 493], 99.95th=[ 493], 00:27:29.400 | 99.99th=[ 498] 00:27:29.400 bw ( KiB/s): min=32768, max=272384, per=11.92%, avg=122291.20, stdev=81444.56, samples=20 00:27:29.400 iops : min= 128, max= 1064, avg=477.70, stdev=318.14, samples=20 00:27:29.400 lat (msec) : 4=0.29%, 10=2.02%, 20=2.97%, 50=9.32%, 100=47.32% 00:27:29.400 lat (msec) : 250=23.09%, 500=14.98% 00:27:29.400 cpu : usr=0.83%, sys=1.64%, ctx=1654, majf=0, minf=2 00:27:29.400 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:27:29.400 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:29.400 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:29.400 issued rwts: total=0,4841,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:29.400 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:29.400 job9: (groupid=0, jobs=1): err= 0: pid=1952724: Wed Oct 9 11:07:49 2024 00:27:29.400 write: IOPS=342, BW=85.7MiB/s (89.9MB/s)(867MiB/10117msec); 0 zone resets 00:27:29.400 slat (usec): min=24, max=81791, avg=2879.77, stdev=5388.18 00:27:29.400 clat (msec): min=58, max=284, avg=183.70, stdev=47.86 00:27:29.400 lat (msec): min=58, max=285, avg=186.58, stdev=48.31 00:27:29.400 clat percentiles (msec): 00:27:29.400 | 1.00th=[ 64], 5.00th=[ 95], 10.00th=[ 134], 20.00th=[ 150], 00:27:29.400 | 30.00th=[ 163], 40.00th=[ 171], 50.00th=[ 174], 60.00th=[ 186], 00:27:29.400 | 70.00th=[ 207], 80.00th=[ 241], 90.00th=[ 253], 95.00th=[ 257], 00:27:29.400 | 99.00th=[ 264], 99.50th=[ 266], 99.90th=[ 275], 99.95th=[ 284], 00:27:29.400 | 99.99th=[ 284] 00:27:29.400 bw ( KiB/s): min=61440, max=145920, per=8.50%, avg=87193.60, stdev=22488.83, samples=20 00:27:29.400 iops : min= 240, max= 570, avg=340.60, stdev=87.85, samples=20 00:27:29.400 lat (msec) : 100=5.51%, 250=81.64%, 500=12.86% 00:27:29.400 cpu : usr=0.61%, sys=0.99%, ctx=855, majf=0, minf=1 00:27:29.400 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:27:29.400 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:29.400 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:29.400 issued rwts: total=0,3469,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:29.400 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:29.400 job10: (groupid=0, jobs=1): err= 0: pid=1952725: Wed Oct 9 11:07:49 2024 00:27:29.400 write: IOPS=268, BW=67.2MiB/s (70.5MB/s)(683MiB/10164msec); 0 zone resets 00:27:29.400 slat (usec): min=31, max=124717, avg=3420.44, stdev=7173.67 00:27:29.400 clat (msec): min=23, max=528, avg=234.49, stdev=82.31 00:27:29.400 lat (msec): min=23, max=528, avg=237.91, stdev=83.29 00:27:29.400 clat percentiles (msec): 00:27:29.400 | 1.00th=[ 64], 5.00th=[ 111], 10.00th=[ 140], 20.00th=[ 155], 00:27:29.400 | 30.00th=[ 182], 40.00th=[ 228], 50.00th=[ 243], 60.00th=[ 251], 00:27:29.400 | 70.00th=[ 259], 80.00th=[ 279], 90.00th=[ 363], 95.00th=[ 376], 00:27:29.400 | 99.00th=[ 443], 99.50th=[ 468], 99.90th=[ 506], 99.95th=[ 527], 00:27:29.400 | 99.99th=[ 527] 00:27:29.400 bw ( KiB/s): min=44032, max=120832, per=6.66%, avg=68352.00, stdev=21220.79, samples=20 00:27:29.400 iops : min= 172, max= 472, avg=267.00, stdev=82.89, samples=20 00:27:29.400 lat (msec) : 50=0.29%, 100=3.95%, 250=55.10%, 500=40.43%, 750=0.22% 00:27:29.400 cpu : usr=0.63%, sys=0.88%, ctx=847, majf=0, minf=1 00:27:29.400 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:27:29.400 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:29.400 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:29.400 issued rwts: total=0,2733,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:29.400 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:29.400 00:27:29.400 Run status group 0 (all jobs): 00:27:29.400 WRITE: bw=1002MiB/s (1051MB/s), 64.4MiB/s-119MiB/s (67.5MB/s-125MB/s), io=9.96GiB (10.7GB), run=10110-10174msec 00:27:29.400 00:27:29.400 Disk stats (read/write): 00:27:29.400 nvme0n1: ios=50/6843, merge=0/0, ticks=4970/1216774, in_queue=1221744, util=99.92% 00:27:29.400 nvme10n1: ios=47/8657, merge=0/0, ticks=90/1227008, in_queue=1227098, util=97.05% 00:27:29.400 nvme1n1: ios=50/8892, merge=0/0, ticks=2058/1220101, in_queue=1222159, util=100.00% 00:27:29.400 nvme2n1: ios=20/5290, merge=0/0, ticks=237/1215236, in_queue=1215473, util=97.32% 00:27:29.400 nvme3n1: ios=13/6336, merge=0/0, ticks=16/1226044, in_queue=1226060, util=97.30% 00:27:29.400 nvme4n1: ios=0/8918, merge=0/0, ticks=0/1226369, in_queue=1226369, util=97.72% 00:27:29.400 nvme5n1: ios=44/5148, merge=0/0, ticks=1075/1205481, in_queue=1206556, util=100.00% 00:27:29.400 nvme6n1: ios=0/8916, merge=0/0, ticks=0/1226591, in_queue=1226591, util=98.11% 00:27:29.400 nvme7n1: ios=40/9577, merge=0/0, ticks=2930/1166641, in_queue=1169571, util=100.00% 00:27:29.400 nvme8n1: ios=0/6882, merge=0/0, ticks=0/1225024, in_queue=1225024, util=98.89% 00:27:29.401 nvme9n1: ios=0/5381, merge=0/0, ticks=0/1218215, in_queue=1218215, util=99.09% 00:27:29.401 11:07:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:27:29.401 11:07:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:27:29.401 11:07:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:29.401 11:07:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:27:29.401 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:29.401 11:07:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:27:29.401 11:07:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:27:29.401 11:07:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:29.401 11:07:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK1 00:27:29.401 11:07:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:29.401 11:07:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK1 00:27:29.401 11:07:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:27:29.662 11:07:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:29.662 11:07:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.662 11:07:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:29.662 11:07:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
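
The disconnect sequence just traced steps through autotest_common.sh's waitforserial_disconnect helper (the @1219-@1231 xtrace lines): poll lsblk until the namespace with the given serial disappears, then confirm against the flat listing. A minimal bash sketch reconstructed from that xtrace output; the retry cap and the 1-second poll interval are assumptions, not values visible in the trace:

    waitforserial_disconnect() {
        local serial=$1 i=0
        # Trace @1220: keep polling while any block device still reports the serial
        while lsblk -o NAME,SERIAL | grep -q -w "$serial"; do
            ((i++ > 15)) && return 1   # assumed retry cap, mirroring the connect side's "(( i++ <= 15 ))"
            sleep 1                    # assumed poll interval
        done
        # Trace @1227: final check against the flat (-l) listing before declaring success
        lsblk -l -o NAME,SERIAL | grep -q -w "$serial" && return 1
        return 0                       # trace @1231
    }
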
00:27:29.662 11:07:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:29.662 11:07:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:27:29.922 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:27:29.922 11:07:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:27:29.922 11:07:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:27:29.922 11:07:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:29.922 11:07:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK2 00:27:29.922 11:07:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:29.922 11:07:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK2 00:27:29.922 11:07:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:27:29.922 11:07:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:29.922 11:07:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.922 11:07:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:29.922 11:07:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.922 11:07:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:29.922 11:07:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:27:30.183 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:27:30.183 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:27:30.183 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:27:30.183 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:30.183 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK3 00:27:30.183 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:30.183 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK3 00:27:30.183 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:27:30.183 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:27:30.183 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.183 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:30.183 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
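
The per-subsystem teardown repeating above is multiconnection.sh's @37-@40 loop: host-side nvme disconnect, wait for the namespace to vanish, then delete the subsystem on the target over RPC. Condensed sketch of that loop exactly as the trace shows it (NVMF_SUBSYS is 11 in this run):

    for i in $(seq 1 $NVMF_SUBSYS); do
        nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"              # @38: drop the host controller
        waitforserial_disconnect "SPDK${i}"                             # @39: block until the serial is gone
        rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"   # @40: remove it on the target
    done
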
00:27:30.183 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:30.183 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:27:30.443 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:27:30.443 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:27:30.443 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:27:30.443 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:30.443 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK4 00:27:30.443 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:30.443 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK4 00:27:30.443 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:27:30.443 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:27:30.443 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.443 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:30.443 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.443 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:30.443 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:27:31.014 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:27:31.014 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:27:31.014 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:27:31.014 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:31.015 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK5 00:27:31.015 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:31.015 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK5 00:27:31.015 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:27:31.015 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:27:31.015 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.015 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:31.015 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
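
The connect side earlier in the run (the multiconnection.sh @28-@30 entries around 00:26:57 through 00:27:04) is the mirror image: attach each subsystem over TCP, then poll until exactly one namespace with the expected serial appears. Sketch reconstructed from that portion of the trace; the waitforserial body follows the @1198-@1208 xtrace lines, and the hostnqn/hostid values are taken verbatim from the traced connect commands:

    waitforserial() {
        local serial=$1 i=0 nvme_device_counter=1 nvme_devices=0          # @1198-@1199
        sleep 2                                                           # @1205: initial settle delay
        while ((i++ <= 15)); do                                           # @1206
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")   # @1207
            ((nvme_devices == nvme_device_counter)) && return 0           # @1208: exactly one namespace expected
            sleep 2
        done
        return 1
    }

    for i in $(seq 1 $NVMF_SUBSYS); do
        nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
            --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 \
            -t tcp -n "nqn.2016-06.io.spdk:cnode${i}" -a 10.0.0.2 -s 4420
        waitforserial "SPDK${i}"
    done
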
00:27:31.015 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:31.015 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:27:31.015 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:27:31.015 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:27:31.015 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:27:31.015 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:31.015 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK6 00:27:31.276 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK6 00:27:31.276 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:31.276 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:27:31.276 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:27:31.276 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.276 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:31.276 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.276 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:31.276 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:27:31.276 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:27:31.276 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:27:31.276 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:27:31.536 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:31.536 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK7 00:27:31.536 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:31.536 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK7 00:27:31.536 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:27:31.536 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:27:31.536 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.536 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:31.536 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
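
The two fio passes traced earlier (the sequential-read run starting at 00:27:06 and the randwrite run starting at 00:27:19) come from scripts/fio-wrapper invoked with "-p nvmf -i 262144 -d 64 -t <rw> -r 10". The job file it drives fio with can be reconstructed verbatim from the [global]/[jobN] dump in the log; shown here as a heredoc sketch, where the output path is illustrative only:

    cat > /tmp/multiconnection.fio <<'EOF'   # hypothetical path; the wrapper manages its own job file
    [global]
    thread=1
    invalidate=1
    rw=randwrite      ; "-t read" produced rw=read in the first pass
    time_based=1
    runtime=10        ; -r 10
    ioengine=libaio
    direct=1
    bs=262144         ; -i 262144
    iodepth=64        ; -d 64
    norandommap=1
    numjobs=1

    [job0]
    filename=/dev/nvme0n1
    [job1]
    filename=/dev/nvme10n1
    ; ...one [jobN] stanza per connected namespace, 11 jobs total, through /dev/nvme9n1
    EOF
    fio /tmp/multiconnection.fio
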
00:27:31.536 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:31.536 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:27:31.536 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:27:31.536 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:27:31.536 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:27:31.536 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:31.536 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK8 00:27:31.536 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:31.536 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK8 00:27:31.536 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:27:31.536 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:27:31.536 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.536 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:31.536 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.536 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:31.536 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:27:31.797 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:27:31.797 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:27:31.797 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:27:31.797 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK9 00:27:31.797 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:31.797 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK9 00:27:31.797 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:31.797 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:27:31.797 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:27:31.797 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.797 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:31.797 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
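
rpc_cmd in these traces wraps SPDK's JSON-RPC client, so each nvmf_delete_subsystem step above ultimately issues a single rpc.py call against the target's RPC socket. Equivalent direct invocation as a sketch; the -s socket path shown is SPDK's default and an assumption here, since the trace does not print it:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/spdk.sock nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10
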
00:27:31.797 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:31.797 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:27:31.797 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:27:31.797 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:27:31.797 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:27:31.797 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:31.797 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK10 00:27:31.797 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:31.797 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK10 00:27:32.058 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:27:32.058 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:27:32.058 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.058 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:32.058 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.058 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:32.058 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:27:32.058 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:27:32.058 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:27:32.058 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:27:32.058 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:32.058 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK11 00:27:32.058 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:32.058 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK11 00:27:32.058 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:27:32.058 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:27:32.058 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.058 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:32.058 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:27:32.058 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:27:32.058 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:27:32.058 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:27:32.058 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@514 -- # nvmfcleanup 00:27:32.058 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 00:27:32.058 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:32.058 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e 00:27:32.058 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:32.058 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:32.058 rmmod nvme_tcp 00:27:32.058 rmmod nvme_fabrics 00:27:32.058 rmmod nvme_keyring 00:27:32.319 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:32.319 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:27:32.319 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:27:32.319 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@515 -- # '[' -n 1942217 ']' 00:27:32.319 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # killprocess 1942217 00:27:32.319 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@950 -- # '[' -z 1942217 ']' 00:27:32.319 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # kill -0 1942217 00:27:32.319 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@955 -- # uname 00:27:32.319 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:32.319 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1942217 00:27:32.319 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:32.319 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:32.319 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1942217' 00:27:32.319 killing process with pid 1942217 00:27:32.319 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@969 -- # kill 1942217 00:27:32.319 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@974 -- # wait 1942217 00:27:32.580 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:27:32.580 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:27:32.580 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:27:32.580 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # iptr 00:27:32.580 11:07:52 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@789 -- # iptables-save 00:27:32.580 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:27:32.580 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@789 -- # iptables-restore 00:27:32.580 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:32.580 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:32.580 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:32.580 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:32.580 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:34.493 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:34.493 00:27:34.493 real 1m17.862s 00:27:34.493 user 5m0.684s 00:27:34.493 sys 0m15.406s 00:27:34.493 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:34.493 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:34.493 ************************************ 00:27:34.493 END TEST nvmf_multiconnection 00:27:34.493 ************************************ 00:27:34.754 11:07:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:27:34.754 11:07:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:34.754 11:07:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:34.754 11:07:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:34.754 ************************************ 00:27:34.754 START TEST nvmf_initiator_timeout 00:27:34.754 ************************************ 00:27:34.754 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:27:34.754 * Looking for test storage... 
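The nvmf_multiconnection teardown that closes above walks target/multiconnection.sh@37-40 once per subsystem: disconnect the initiator, poll lsblk until the SPDK serial disappears, then delete the subsystem over RPC. A minimal standalone sketch of that loop, assuming NVMF_SUBSYS=11 as in this run (cnode11 was the last subsystem seen) and rpc.py on the default /var/tmp/spdk.sock; the retry bound inside the wait helper is an assumption:

    # Sketch of the per-subsystem teardown loop traced above (multiconnection.sh@37-40).
    # Assumes NVMF_SUBSYS=11 and rpc.py reachable on the default RPC socket.
    NVMF_SUBSYS=11
    rpc_py=scripts/rpc.py

    waitforserial_disconnect() {
        # Poll until no block device reports the serial; the retry cap is an assumption.
        local serial=$1 i=0
        while lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
            (( i++ > 15 )) && return 1
            sleep 1
        done
        return 0
    }

    for i in $(seq 1 "$NVMF_SUBSYS"); do
        nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"              # initiator side
        waitforserial_disconnect "SPDK${i}"                             # /dev node must vanish
        "$rpc_py" nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}" # target side
    done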
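The START TEST banner above, like the END TEST banner and the real/user/sys timing printed for nvmf_multiconnection, comes from the run_test wrapper traced at common/autotest_common.sh@1101-1125. A rough sketch of its shape; the banner text is taken from the log, the rest is an assumption:

    # Rough sketch of run_test (common/autotest_common.sh@1101-1125); details assumed.
    run_test() {
        local test_name=$1
        shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"    # source of the real/user/sys lines printed above
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }

    run_test nvmf_initiator_timeout \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp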
00:27:34.754 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:34.754 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:34.754 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1691 -- # lcov --version 00:27:34.754 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:35.016 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:35.016 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:35.016 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:35.016 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:35.016 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:27:35.016 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:27:35.016 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:27:35.016 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:27:35.016 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:27:35.016 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:27:35.016 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:27:35.016 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:35.016 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 00:27:35.016 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:27:35.016 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:35.016 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:35.016 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:27:35.016 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:27:35.016 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:35.016 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:27:35.016 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:27:35.016 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:27:35.016 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:27:35.016 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:35.016 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:27:35.016 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:27:35.016 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:35.016 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:35.016 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:27:35.016 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:35.016 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:35.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:35.016 --rc genhtml_branch_coverage=1 00:27:35.016 --rc genhtml_function_coverage=1 00:27:35.016 --rc genhtml_legend=1 00:27:35.016 --rc geninfo_all_blocks=1 00:27:35.016 --rc geninfo_unexecuted_blocks=1 00:27:35.016 00:27:35.016 ' 00:27:35.016 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:35.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:35.016 --rc genhtml_branch_coverage=1 00:27:35.016 --rc genhtml_function_coverage=1 00:27:35.016 --rc genhtml_legend=1 00:27:35.016 --rc geninfo_all_blocks=1 00:27:35.016 --rc geninfo_unexecuted_blocks=1 00:27:35.016 00:27:35.016 ' 00:27:35.016 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:35.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:35.016 --rc genhtml_branch_coverage=1 00:27:35.016 --rc genhtml_function_coverage=1 00:27:35.016 --rc genhtml_legend=1 00:27:35.016 --rc geninfo_all_blocks=1 00:27:35.016 --rc geninfo_unexecuted_blocks=1 00:27:35.016 00:27:35.016 ' 00:27:35.016 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:35.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:35.016 --rc genhtml_branch_coverage=1 00:27:35.016 --rc genhtml_function_coverage=1 00:27:35.016 --rc genhtml_legend=1 00:27:35.016 --rc geninfo_all_blocks=1 00:27:35.016 --rc geninfo_unexecuted_blocks=1 00:27:35.016 00:27:35.016 ' 00:27:35.016 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:35.016 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:27:35.016 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:35.016 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:35.016 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:35.016 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:35.016 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:35.016 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:35.016 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:35.016 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:35.016 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:35.016 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:35.016 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:35.016 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:35.016 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:35.016 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:35.016 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:35.016 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:35.016 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:35.016 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:27:35.016 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:35.016 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:35.016 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:35.016 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:35.016 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:35.016 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:35.016 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:27:35.016 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:35.016 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:27:35.016 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:35.016 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:35.016 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:35.016 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:35.016 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:35.016 11:07:54 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:35.016 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:35.017 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:35.017 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:35.017 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:35.017 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:35.017 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:35.017 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:27:35.017 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:27:35.017 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:35.017 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # prepare_net_devs 00:27:35.017 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@436 -- # local -g is_hw=no 00:27:35.017 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # remove_spdk_ns 00:27:35.017 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:35.017 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:35.017 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:35.017 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:27:35.017 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:27:35.017 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@309 -- # xtrace_disable 00:27:35.017 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:43.161 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:43.161 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # pci_devs=() 00:27:43.161 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:43.161 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:43.161 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:43.161 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:43.161 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:43.161 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # net_devs=() 00:27:43.161 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:43.161 11:08:02 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # e810=() 00:27:43.161 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # local -ga e810 00:27:43.161 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # x722=() 00:27:43.161 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # local -ga x722 00:27:43.161 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # mlx=() 00:27:43.161 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # local -ga mlx 00:27:43.161 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:43.161 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:43.161 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:43.161 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:43.161 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:43.161 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:43.161 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:43.161 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:43.161 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:43.161 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:43.161 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:43.161 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:43.161 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:43.161 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:43.161 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:43.161 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:43.161 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:43.161 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:43.161 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:43.161 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:43.161 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:43.161 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:43.161 11:08:02 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:43.161 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:43.161 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:43.161 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:43.161 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:43.161 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:43.161 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:43.161 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:43.161 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:43.161 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:43.161 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:43.161 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:43.161 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:43.161 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:43.161 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:43.161 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:43.161 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:43.161 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:43.161 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:43.161 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:43.161 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:43.161 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:43.161 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:43.161 Found net devices under 0000:31:00.0: cvl_0_0 00:27:43.161 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:43.161 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:43.161 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:43.161 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:43.161 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:43.161 11:08:02 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:43.161 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:43.161 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:43.161 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:43.161 Found net devices under 0000:31:00.1: cvl_0_1 00:27:43.161 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:43.161 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:27:43.161 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # is_hw=yes 00:27:43.161 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:27:43.161 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:27:43.161 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:27:43.161 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:43.161 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:43.162 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:43.162 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:43.162 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:43.162 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:43.162 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:43.162 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:43.162 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:43.162 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:43.162 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:43.162 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:43.162 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:43.162 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:43.162 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:43.162 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:43.162 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:43.162 11:08:02 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:43.162 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:43.162 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:43.162 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:43.162 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:43.162 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:43.162 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:43.162 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.677 ms 00:27:43.162 00:27:43.162 --- 10.0.0.2 ping statistics --- 00:27:43.162 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:43.162 rtt min/avg/max/mdev = 0.677/0.677/0.677/0.000 ms 00:27:43.162 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:43.162 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:43.162 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:27:43.162 00:27:43.162 --- 10.0.0.1 ping statistics --- 00:27:43.162 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:43.162 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:27:43.162 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:43.162 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # return 0 00:27:43.162 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:27:43.162 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:43.162 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:27:43.162 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:27:43.162 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:43.162 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:27:43.162 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:27:43.162 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:27:43.162 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:27:43.162 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:43.162 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:43.162 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # nvmfpid=1959397 00:27:43.162 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # waitforlisten 
1959397 00:27:43.162 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:43.162 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@831 -- # '[' -z 1959397 ']' 00:27:43.162 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:43.162 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:43.162 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:43.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:43.162 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:43.162 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:43.162 [2024-10-09 11:08:02.518006] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:27:43.162 [2024-10-09 11:08:02.518076] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:43.162 [2024-10-09 11:08:02.659718] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:43.162 [2024-10-09 11:08:02.692004] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:43.162 [2024-10-09 11:08:02.714581] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:43.162 [2024-10-09 11:08:02.714618] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:43.162 [2024-10-09 11:08:02.714626] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:43.162 [2024-10-09 11:08:02.714633] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:43.162 [2024-10-09 11:08:02.714640] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
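Before the test proper, common.sh gates lcov-specific flags on the installed lcov version; the trace above (scripts/common.sh@333-368) shows 'lt 1.15 2' splitting both version strings on '.' and '-' and comparing them component by component until one side wins. A reduced sketch of that comparison, assuming purely numeric components (the real helper routes each component through its decimal normalizer):

    # Reduced sketch of cmp_versions (scripts/common.sh@333-368), numeric components only.
    cmp_versions() {
        local IFS=.-                       # split version strings on '.' and '-'
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local op=$2 v
        local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing components count as 0
            (( a > b )) && { [[ $op == '>' ]]; return; }
            (( a < b )) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == '==' ]]                  # every component equal
    }

    cmp_versions 1.15 '<' 2 && echo 'lcov 1.15 predates 2.x'   # succeeds: 1 < 2 at position 0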
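nvmftestinit's TCP bring-up above (nvmf/common.sh@250-291) moves one port of the detected e810 pair into a private network namespace to act as the target, leaves its sibling in the root namespace as the initiator, opens TCP port 4420, and proves reachability with one ping in each direction. The same topology as a standalone sketch, using the interface names from the device scan above; running as root is assumed:

    # Sketch of the nvmf_tcp_init topology traced above (nvmf/common.sh@250-291).
    TARGET_NS=cvl_0_0_ns_spdk

    ip netns add "$TARGET_NS"                          # private namespace for the target
    ip link set cvl_0_0 netns "$TARGET_NS"             # target port moves in; cvl_0_1 stays out
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator address, root namespace
    ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$TARGET_NS" ip link set cvl_0_0 up
    ip netns exec "$TARGET_NS" ip link set lo up
    # open the NVMe/TCP port, tagged SPDK_NVMF so the iptr cleanup can strip it later
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                 # root ns -> target ns
    ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1      # target ns -> root ns

This is why nvmf_tgt is launched through 'ip netns exec cvl_0_0_ns_spdk' above, and why the later iptables-save | grep -v SPDK_NVMF | iptables-restore pipeline removes exactly this rule.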
00:27:43.162 [2024-10-09 11:08:02.716348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:43.162 [2024-10-09 11:08:02.716475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:43.162 [2024-10-09 11:08:02.716587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:43.162 [2024-10-09 11:08:02.716588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:43.423 11:08:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:43.423 11:08:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # return 0 00:27:43.423 11:08:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:27:43.423 11:08:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:43.423 11:08:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:43.423 11:08:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:43.423 11:08:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:27:43.423 11:08:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:43.423 11:08:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.423 11:08:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:43.423 Malloc0 00:27:43.423 11:08:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.423 11:08:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:27:43.423 11:08:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.423 11:08:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:43.423 Delay0 00:27:43.423 11:08:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.423 11:08:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:43.423 11:08:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.423 11:08:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:43.423 [2024-10-09 11:08:03.414604] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:43.423 11:08:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.423 11:08:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:27:43.424 11:08:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.424 11:08:03 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:43.684 11:08:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.684 11:08:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:43.684 11:08:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.684 11:08:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:43.684 11:08:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.684 11:08:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:43.684 11:08:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.684 11:08:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:43.684 [2024-10-09 11:08:03.454786] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:43.684 11:08:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.684 11:08:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:27:45.069 11:08:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:27:45.069 11:08:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # local i=0 00:27:45.069 11:08:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:27:45.069 11:08:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:27:45.069 11:08:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # sleep 2 00:27:46.978 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:27:46.978 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:27:46.978 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:27:46.978 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:27:46.978 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:27:46.978 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # return 0 00:27:46.978 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=1960119 00:27:46.978 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:27:46.978 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v
00:27:47.262 [global]
00:27:47.262 thread=1
00:27:47.262 invalidate=1
00:27:47.262 rw=write
00:27:47.262 time_based=1
00:27:47.262 runtime=60
00:27:47.262 ioengine=libaio
00:27:47.262 direct=1
00:27:47.262 bs=4096
00:27:47.262 iodepth=1
00:27:47.262 norandommap=0
00:27:47.262 numjobs=1
00:27:47.262
00:27:47.262 verify_dump=1
00:27:47.262 verify_backlog=512
00:27:47.262 verify_state_save=0
00:27:47.262 do_verify=1
00:27:47.262 verify=crc32c-intel
00:27:47.262 [job0]
00:27:47.262 filename=/dev/nvme0n1
00:27:47.262 Could not set queue depth (nvme0n1)
00:27:47.526 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:27:47.526 fio-3.35
00:27:47.526 Starting 1 thread
00:27:50.068 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000
00:27:50.068 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:50.068 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:27:50.068 true
00:27:50.068 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:50.068 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000
00:27:50.068 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:50.068 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:27:50.068 true
00:27:50.068 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:50.068 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000
00:27:50.068 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:50.068 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:27:50.068 true
00:27:50.068 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:50.068 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000
00:27:50.068 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:50.068 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:27:50.068 true
00:27:50.068 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:50.068 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3
00:27:53.371 11:08:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30
00:27:53.371 11:08:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:53.371 11:08:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:27:53.371 true
00:27:53.371 11:08:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:53.371 11:08:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30
00:27:53.371 11:08:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:53.371 11:08:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:27:53.371 true
00:27:53.371 11:08:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:53.371 11:08:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30
00:27:53.371 11:08:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:53.371 11:08:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:27:53.372 true
00:27:53.372 11:08:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:53.372 11:08:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30
00:27:53.372 11:08:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:53.372 11:08:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:27:53.372 true
00:27:53.372 11:08:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:53.372 11:08:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0
00:27:53.372 11:08:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 1960119
00:28:49.765
00:28:49.765 job0: (groupid=0, jobs=1): err= 0: pid=1960426: Wed Oct 9 11:09:07 2024
00:28:49.765 read: IOPS=85, BW=341KiB/s (350kB/s)(20.0MiB/60001msec)
00:28:49.765 slat (nsec): min=7938, max=81932, avg=27489.11, stdev=3263.40
00:28:49.765 clat (usec): min=474, max=41930k, avg=11001.39, stdev=586008.37
00:28:49.765 lat (usec): min=502, max=41930k, avg=11028.88, stdev=586008.38
00:28:49.765 clat percentiles (usec):
00:28:49.765 | 1.00th=[ 816], 5.00th=[ 881], 10.00th=[ 922],
00:28:49.765 | 20.00th=[ 971], 30.00th=[ 996], 40.00th=[ 1012],
00:28:49.765 | 50.00th=[ 1029], 60.00th=[ 1037], 70.00th=[ 1057],
00:28:49.765 | 80.00th=[ 1074], 90.00th=[ 1123], 95.00th=[ 1205],
00:28:49.765 | 99.00th=[ 42206], 99.50th=[ 42206], 99.90th=[ 42206],
00:28:49.765 | 99.95th=[ 44303], 99.99th=[17112761]
00:28:49.765 write: IOPS=91, BW=367KiB/s (376kB/s)(21.5MiB/60001msec); 0 zone resets
00:28:49.765 slat (usec): min=9, max=26834, avg=38.91, stdev=394.67
00:28:49.765 clat (usec): min=216, max=1492, avg=585.81, stdev=100.16
00:28:49.765 lat (usec): min=228, max=27495, avg=624.72, stdev=409.71
00:28:49.765 clat percentiles (usec):
00:28:49.765 | 1.00th=[ 338], 5.00th=[ 404], 10.00th=[ 449], 20.00th=[ 510],
00:28:49.765 | 30.00th=[ 545], 40.00th=[ 570], 50.00th=[ 586], 60.00th=[ 611],
00:28:49.765 | 70.00th=[ 644], 80.00th=[ 676], 90.00th=[ 701], 95.00th=[ 725],
00:28:49.765 | 99.00th=[ 766], 99.50th=[ 783], 99.90th=[ 1188], 99.95th=[ 1336],
00:28:49.765 | 99.99th=[ 1500]
00:28:49.765 bw ( KiB/s): min= 8, max= 4096, per=100.00%, avg=2409.41, stdev=1285.70, samples=17
00:28:49.765 iops : min= 2, max= 1024, avg=602.35, stdev=321.43, samples=17
00:28:49.765 lat (usec) : 250=0.16%, 500=9.39%, 750=41.19%, 1000=16.78%
00:28:49.765 lat (msec) : 2=30.36%, 50=2.12%, >=2000=0.01%
00:28:49.765 cpu : usr=0.35%, sys=0.62%, ctx=10636, majf=0, minf=1
00:28:49.765 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:28:49.765 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:49.765 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:49.765 issued rwts: total=5120,5510,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:49.765 latency : target=0, window=0, percentile=100.00%, depth=1
00:28:49.765
00:28:49.765 Run status group 0 (all jobs):
00:28:49.765 READ: bw=341KiB/s (350kB/s), 341KiB/s-341KiB/s (350kB/s-350kB/s), io=20.0MiB (21.0MB), run=60001-60001msec
00:28:49.765 WRITE: bw=367KiB/s (376kB/s), 367KiB/s-367KiB/s (376kB/s-376kB/s), io=21.5MiB (22.6MB), run=60001-60001msec
00:28:49.765
00:28:49.765 Disk stats (read/write):
00:28:49.765 nvme0n1: ios=5173/5347, merge=0/0, ticks=15629/2648, in_queue=18277, util=99.92%
00:28:49.765 11:09:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:28:49.765 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:28:49.765 11:09:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:28:49.765 11:09:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # local i=0
00:28:49.765 11:09:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:28:49.765 11:09:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:28:49.765 11:09:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME
00:28:49.765 11:09:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:28:49.765 11:09:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # return 0
00:28:49.765 11:09:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']'
00:28:49.765 11:09:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' nvmf hotplug test: fio successful as expected
00:28:49.765 11:09:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:28:49.765 11:09:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:49.765 11:09:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:28:49.765 11:09:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:49.765 11:09:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state
00:28:49.765 11:09:07
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:28:49.765 11:09:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:28:49.765 11:09:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@514 -- # nvmfcleanup 00:28:49.765 11:09:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:28:49.765 11:09:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:49.766 11:09:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:28:49.766 11:09:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:49.766 11:09:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:49.766 rmmod nvme_tcp 00:28:49.766 rmmod nvme_fabrics 00:28:49.766 rmmod nvme_keyring 00:28:49.766 11:09:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:49.766 11:09:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:28:49.766 11:09:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:28:49.766 11:09:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@515 -- # '[' -n 1959397 ']' 00:28:49.766 11:09:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # killprocess 1959397 00:28:49.766 11:09:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@950 -- # '[' -z 1959397 ']' 00:28:49.766 11:09:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # kill -0 1959397 00:28:49.766 11:09:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # uname 00:28:49.766 11:09:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:49.766 11:09:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1959397 00:28:49.766 11:09:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:49.766 11:09:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:49.766 11:09:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1959397' 00:28:49.766 killing process with pid 1959397 00:28:49.766 11:09:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@969 -- # kill 1959397 00:28:49.766 11:09:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@974 -- # wait 1959397 00:28:49.766 11:09:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:28:49.766 11:09:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:28:49.766 11:09:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:28:49.766 11:09:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # iptr 00:28:49.766 11:09:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@789 -- # iptables-save 00:28:49.766 11:09:07 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:28:49.766 11:09:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@789 -- # iptables-restore 00:28:49.766 11:09:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:49.766 11:09:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:49.766 11:09:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:49.766 11:09:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:49.766 11:09:07 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:50.026 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:50.026 00:28:50.026 real 1m15.382s 00:28:50.026 user 4m36.038s 00:28:50.026 sys 0m8.170s 00:28:50.026 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:50.026 11:09:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:50.026 ************************************ 00:28:50.026 END TEST nvmf_initiator_timeout 00:28:50.026 ************************************ 00:28:50.026 11:09:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:28:50.026 11:09:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:28:50.026 11:09:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:28:50.026 11:09:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:28:50.026 11:09:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:58.172 11:09:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:58.172 11:09:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:28:58.172 11:09:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:58.172 11:09:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:58.172 11:09:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:58.172 11:09:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:58.172 11:09:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:58.172 11:09:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:28:58.172 11:09:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:58.172 11:09:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:28:58.172 11:09:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:28:58.172 11:09:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:28:58.172 11:09:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:28:58.172 11:09:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:28:58.172 11:09:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:28:58.172 11:09:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:58.172 11:09:17 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:58.172 11:09:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:58.172 11:09:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:58.172 11:09:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:58.172 11:09:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:58.172 11:09:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:58.172 11:09:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:58.172 11:09:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:58.172 11:09:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:58.172 11:09:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:58.172 11:09:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:58.172 11:09:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:58.172 11:09:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:58.172 11:09:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:58.172 11:09:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:58.172 11:09:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:58.172 11:09:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:58.172 11:09:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:58.172 11:09:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:58.172 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:58.172 11:09:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:58.172 11:09:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:58.172 11:09:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:58.172 11:09:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:58.172 11:09:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:58.172 11:09:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:58.172 11:09:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:58.172 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:58.172 11:09:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:58.172 11:09:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:58.172 11:09:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:58.172 11:09:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:58.172 11:09:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:58.172 11:09:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:58.173 11:09:17 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:58.173 11:09:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:58.173 11:09:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:58.173 11:09:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:58.173 11:09:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:58.173 11:09:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:58.173 11:09:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:58.173 11:09:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:58.173 11:09:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:58.173 11:09:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:58.173 Found net devices under 0000:31:00.0: cvl_0_0 00:28:58.173 11:09:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:58.173 11:09:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:58.173 11:09:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:58.173 11:09:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:58.173 11:09:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:58.173 11:09:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:58.173 11:09:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:58.173 11:09:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:58.173 11:09:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:58.173 Found net devices under 0000:31:00.1: cvl_0_1 00:28:58.173 11:09:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:58.173 11:09:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:28:58.173 11:09:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:58.173 11:09:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:28:58.173 11:09:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:28:58.173 11:09:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:58.173 11:09:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:58.173 11:09:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:58.173 ************************************ 00:28:58.173 START TEST nvmf_perf_adq 00:28:58.173 ************************************ 00:28:58.173 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:28:58.173 * Looking for test storage... 
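gather_supported_nvmf_pci_devs, whose trace runs above, matches a hard-coded allowlist of Intel E810/X722 and Mellanox device IDs against the PCI bus and then resolves every surviving BDF to the netdev the kernel bound to it, via sysfs. The resolution step reduces to a glob, sketched below with the two E810 ports found in this run (population of pci_bus_cache is omitted):

    # Map a PCI address to the net interface(s) the kernel attached to it.
    for bdf in 0000:31:00.0 0000:31:00.1; do
        for path in "/sys/bus/pci/devices/$bdf/net/"*; do
            [ -e "$path" ] || continue    # no netdev (device bound elsewhere)
            echo "Found net devices under $bdf: ${path##*/}"
        done
    done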
00:28:58.173 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:58.173 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:58.173 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # lcov --version 00:28:58.173 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:58.173 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:58.173 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:58.173 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:58.173 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:58.173 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:28:58.173 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:28:58.173 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:28:58.173 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:28:58.173 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:28:58.173 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:28:58.173 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:28:58.173 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:58.173 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:28:58.173 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:28:58.173 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:58.173 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:58.173 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:28:58.173 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:28:58.173 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:58.173 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:28:58.173 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:28:58.173 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:28:58.173 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:28:58.173 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:58.173 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:28:58.173 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:28:58.173 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:58.173 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:58.173 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:28:58.173 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:58.173 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:58.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:58.173 --rc genhtml_branch_coverage=1 00:28:58.173 --rc genhtml_function_coverage=1 00:28:58.173 --rc genhtml_legend=1 00:28:58.173 --rc geninfo_all_blocks=1 00:28:58.173 --rc geninfo_unexecuted_blocks=1 00:28:58.173 00:28:58.173 ' 00:28:58.173 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:58.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:58.173 --rc genhtml_branch_coverage=1 00:28:58.173 --rc genhtml_function_coverage=1 00:28:58.173 --rc genhtml_legend=1 00:28:58.173 --rc geninfo_all_blocks=1 00:28:58.173 --rc geninfo_unexecuted_blocks=1 00:28:58.173 00:28:58.173 ' 00:28:58.173 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:58.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:58.173 --rc genhtml_branch_coverage=1 00:28:58.173 --rc genhtml_function_coverage=1 00:28:58.173 --rc genhtml_legend=1 00:28:58.173 --rc geninfo_all_blocks=1 00:28:58.173 --rc geninfo_unexecuted_blocks=1 00:28:58.173 00:28:58.173 ' 00:28:58.173 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:58.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:58.173 --rc genhtml_branch_coverage=1 00:28:58.173 --rc genhtml_function_coverage=1 00:28:58.173 --rc genhtml_legend=1 00:28:58.173 --rc geninfo_all_blocks=1 00:28:58.173 --rc geninfo_unexecuted_blocks=1 00:28:58.173 00:28:58.173 ' 00:28:58.173 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:58.173 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 
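The lt 1.15 2 call above (used to pick lcov option syntax) lands in cmp_versions from scripts/common.sh, which splits both version strings into fields and compares them numerically, position by position, treating missing fields as zero. A reduced sketch of the same comparison, splitting on dots only (the traced helper also accepts '-' and ':' as separators):

    # Return 0 when $1 sorts strictly before $2 as a dotted version.
    version_lt() {
        local IFS=.
        local -a a=($1) b=($2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1   # equal versions are not less-than
    }

    version_lt 1.15 2 && echo 'lcov 1.15 predates 2.x'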
00:28:58.173 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:58.173 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:58.173 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:58.173 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:58.173 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:58.173 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:58.173 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:58.173 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:58.173 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:58.173 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:58.173 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:58.173 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:58.173 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:58.173 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:58.173 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:58.173 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:58.173 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:58.173 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:28:58.173 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:58.173 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:58.173 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:58.173 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.173 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.174 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.174 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:28:58.174 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.174 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:28:58.174 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:58.174 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:58.174 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:58.174 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:58.174 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:58.174 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:58.174 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:58.174 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:58.174 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:58.174 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:58.174 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:28:58.174 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:28:58.174 11:09:17 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:04.778 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:04.778 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:29:04.778 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:04.778 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:04.778 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:04.778 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:04.778 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:04.778 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:29:04.778 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:04.778 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:29:04.778 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:29:04.778 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:29:04.778 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:29:04.778 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:29:04.778 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:29:04.778 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:04.778 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:04.778 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:04.778 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:04.778 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:04.778 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:04.778 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:04.778 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:04.778 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:04.778 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:04.778 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:04.778 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:04.778 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:04.778 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:04.778 11:09:24 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:04.778 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:04.778 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:04.778 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:04.778 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:04.778 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:04.778 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:04.778 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:04.778 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:04.778 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:04.778 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:04.778 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:04.778 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:04.778 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:04.778 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:04.779 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:04.779 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:04.779 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:04.779 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:04.779 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:04.779 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:04.779 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:04.779 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:04.779 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:04.779 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:04.779 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:04.779 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:04.779 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:04.779 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:04.779 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:04.779 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:04.779 Found net devices under 0000:31:00.0: cvl_0_0 00:29:04.779 11:09:24 
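Each candidate interface must also pass the [[ up == up ]] check seen above before it is appended to net_devs; this excerpt does not show where that state string is read from, but checking the kernel's operstate gives an equivalent filter:

    # Keep only interfaces that report link state "up" (sysfs-based variant).
    net_devs=()
    for dev in cvl_0_0 cvl_0_1; do
        state=$(cat "/sys/class/net/$dev/operstate" 2>/dev/null)
        [[ $state == up ]] && net_devs+=("$dev")
    done
    echo "usable interfaces: ${net_devs[*]}"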
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:04.779 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:04.779 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:04.779 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:04.779 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:04.779 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:04.779 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:04.779 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:04.779 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:04.779 Found net devices under 0000:31:00.1: cvl_0_1 00:29:04.779 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:04.779 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:29:04.779 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:04.779 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:29:04.779 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:29:04.779 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:29:04.779 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:29:04.779 11:09:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:29:06.689 11:09:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:29:08.604 11:09:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:29:13.890 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:29:13.890 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:29:13.890 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:13.890 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # prepare_net_devs 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@436 -- # local -g is_hw=no 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # remove_spdk_ns 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # 
gather_supported_nvmf_pci_devs 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:13.891 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:13.891 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 
'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:13.891 Found net devices under 0000:31:00.0: cvl_0_0 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:13.891 Found net devices under 0000:31:00.1: cvl_0_1 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # is_hw=yes 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:13.891 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:13.891 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.652 ms 00:29:13.891 00:29:13.891 --- 10.0.0.2 ping statistics --- 00:29:13.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:13.891 rtt min/avg/max/mdev = 0.652/0.652/0.652/0.000 ms 00:29:13.891 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:13.891 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
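The nvmf_tcp_init steps traced here turn the NIC's two physical ports into a closed test link: cvl_0_0 becomes the target side inside a fresh network namespace at 10.0.0.2/24, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1/24, an iptables rule admits port 4420, and a ping in each direction proves the path. Condensed, the same setup is:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root netns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator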
00:29:13.892 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.353 ms 00:29:13.892 00:29:13.892 --- 10.0.0.1 ping statistics --- 00:29:13.892 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:13.892 rtt min/avg/max/mdev = 0.353/0.353/0.353/0.000 ms 00:29:13.892 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:13.892 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # return 0 00:29:13.892 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:29:13.892 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:13.892 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:29:13.892 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:29:13.892 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:13.892 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:29:13.892 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:29:13.892 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:29:13.892 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:29:13.892 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:13.892 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:13.892 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # nvmfpid=1982091 00:29:13.892 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # waitforlisten 1982091 00:29:13.892 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:29:13.892 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 1982091 ']' 00:29:13.892 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:13.892 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:13.892 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:13.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:13.892 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:13.892 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:13.892 [2024-10-09 11:09:33.826563] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 
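nvmfappstart then launches the target inside that namespace with --wait-for-rpc, saves the PID as nvmfpid, and blocks in waitforlisten until the RPC socket answers. The trace does not show waitforlisten's internals; one workable probe is to poll a cheap RPC against the default /var/tmp/spdk.sock (the retry budget below is an assumption):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
    nvmfpid=$!

    # Wait until the app answers on its RPC socket, or bail if it died.
    for _ in $(seq 1 100); do
        ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        kill -0 "$nvmfpid" 2>/dev/null || { echo 'nvmf_tgt exited early' >&2; exit 1; }
        sleep 0.1
    done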
00:29:13.892 [2024-10-09 11:09:33.826629] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:14.152 [2024-10-09 11:09:33.969253] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:29:14.152 [2024-10-09 11:09:34.002990] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:14.152 [2024-10-09 11:09:34.026622] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:14.152 [2024-10-09 11:09:34.026664] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:14.152 [2024-10-09 11:09:34.026672] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:14.152 [2024-10-09 11:09:34.026679] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:14.152 [2024-10-09 11:09:34.026685] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:14.152 [2024-10-09 11:09:34.028749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:14.152 [2024-10-09 11:09:34.028874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:14.152 [2024-10-09 11:09:34.029033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:14.152 [2024-10-09 11:09:34.029033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:14.721 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:14.721 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:29:14.721 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:29:14.721 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:14.721 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:14.721 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:14.721 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:29:14.721 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:29:14.721 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:29:14.721 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:14.721 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:14.721 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:14.982 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:29:14.982 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:29:14.982 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:14.982 11:09:34 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:14.982 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:14.982 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:29:14.982 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:14.982 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:14.982 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:14.982 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:29:14.982 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:14.982 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:14.982 [2024-10-09 11:09:34.802291] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:14.982 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:14.982 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:29:14.982 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:14.982 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:14.982 Malloc1 00:29:14.982 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:14.982 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:14.982 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:14.982 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:14.982 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:14.982 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:29:14.982 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:14.982 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:14.982 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:14.982 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:14.982 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:14.982 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:14.982 [2024-10-09 11:09:34.871734] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:14.982 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:14.982 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=1982299 00:29:14.982 
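adq_configure_nvmf_target 0, traced above, is the baseline pass: placement id 0 leaves ADQ queue steering off while the target is otherwise built the same way as the ADQ run: a posix sock-implementation tweak, the deferred framework init completed, a TCP transport with 8 KiB IO units and socket priority 0, and a 64 MB malloc bdev exported through nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420. The same sequence as direct rpc.py calls (RPC socket path assumed to be the default):

    rpc() { ./scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }

    rpc sock_impl_set_options -i posix --enable-placement-id 0 --enable-zerocopy-send-server
    rpc framework_start_init
    rpc nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
    rpc bdev_malloc_create 64 512 -b Malloc1              # 64 MB, 512 B blocks
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420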
11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:29:14.982 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:29:16.892 11:09:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:29:16.892 11:09:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:16.892 11:09:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:17.152 11:09:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:17.153 11:09:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:29:17.153 "tick_rate": 2394400000, 00:29:17.153 "poll_groups": [ 00:29:17.153 { 00:29:17.153 "name": "nvmf_tgt_poll_group_000", 00:29:17.153 "admin_qpairs": 1, 00:29:17.153 "io_qpairs": 1, 00:29:17.153 "current_admin_qpairs": 1, 00:29:17.153 "current_io_qpairs": 1, 00:29:17.153 "pending_bdev_io": 0, 00:29:17.153 "completed_nvme_io": 20309, 00:29:17.153 "transports": [ 00:29:17.153 { 00:29:17.153 "trtype": "TCP" 00:29:17.153 } 00:29:17.153 ] 00:29:17.153 }, 00:29:17.153 { 00:29:17.153 "name": "nvmf_tgt_poll_group_001", 00:29:17.153 "admin_qpairs": 0, 00:29:17.153 "io_qpairs": 1, 00:29:17.153 "current_admin_qpairs": 0, 00:29:17.153 "current_io_qpairs": 1, 00:29:17.153 "pending_bdev_io": 0, 00:29:17.153 "completed_nvme_io": 26294, 00:29:17.153 "transports": [ 00:29:17.153 { 00:29:17.153 "trtype": "TCP" 00:29:17.153 } 00:29:17.153 ] 00:29:17.153 }, 00:29:17.153 { 00:29:17.153 "name": "nvmf_tgt_poll_group_002", 00:29:17.153 "admin_qpairs": 0, 00:29:17.153 "io_qpairs": 1, 00:29:17.153 "current_admin_qpairs": 0, 00:29:17.153 "current_io_qpairs": 1, 00:29:17.153 "pending_bdev_io": 0, 00:29:17.153 "completed_nvme_io": 19521, 00:29:17.153 "transports": [ 00:29:17.153 { 00:29:17.153 "trtype": "TCP" 00:29:17.153 } 00:29:17.153 ] 00:29:17.153 }, 00:29:17.153 { 00:29:17.153 "name": "nvmf_tgt_poll_group_003", 00:29:17.153 "admin_qpairs": 0, 00:29:17.153 "io_qpairs": 1, 00:29:17.153 "current_admin_qpairs": 0, 00:29:17.153 "current_io_qpairs": 1, 00:29:17.153 "pending_bdev_io": 0, 00:29:17.153 "completed_nvme_io": 18811, 00:29:17.153 "transports": [ 00:29:17.153 { 00:29:17.153 "trtype": "TCP" 00:29:17.153 } 00:29:17.153 ] 00:29:17.153 } 00:29:17.153 ] 00:29:17.153 }' 00:29:17.153 11:09:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:29:17.153 11:09:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:29:17.153 11:09:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:29:17.153 11:09:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:29:17.153 11:09:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 1982299 00:29:25.290 Initializing NVMe Controllers 00:29:25.290 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:25.290 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:29:25.290 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:29:25.290 
Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:29:25.290 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:29:25.290 Initialization complete. Launching workers. 00:29:25.290 ======================================================== 00:29:25.290 Latency(us) 00:29:25.290 Device Information : IOPS MiB/s Average min max 00:29:25.290 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10998.40 42.96 5819.68 1700.02 9334.05 00:29:25.290 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 14778.50 57.73 4329.79 1442.23 9186.84 00:29:25.290 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 14202.20 55.48 4505.56 1132.09 11867.68 00:29:25.290 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 14139.30 55.23 4526.47 1330.44 11490.69 00:29:25.290 ======================================================== 00:29:25.290 Total : 54118.39 211.40 4730.09 1132.09 11867.68 00:29:25.290 00:29:25.290 11:09:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:29:25.290 11:09:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@514 -- # nvmfcleanup 00:29:25.290 11:09:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:29:25.290 11:09:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:25.290 11:09:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:29:25.290 11:09:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:25.290 11:09:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:25.290 rmmod nvme_tcp 00:29:25.290 rmmod nvme_fabrics 00:29:25.290 rmmod nvme_keyring 00:29:25.290 11:09:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:25.290 11:09:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:29:25.290 11:09:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:29:25.290 11:09:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@515 -- # '[' -n 1982091 ']' 00:29:25.290 11:09:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # killprocess 1982091 00:29:25.290 11:09:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 1982091 ']' 00:29:25.290 11:09:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 1982091 00:29:25.290 11:09:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:29:25.290 11:09:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:25.290 11:09:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1982091 00:29:25.290 11:09:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:25.290 11:09:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:25.290 11:09:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1982091' 00:29:25.290 killing process with pid 1982091 00:29:25.290 11:09:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 
1982091 00:29:25.290 11:09:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 1982091 00:29:25.550 11:09:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:29:25.550 11:09:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:29:25.550 11:09:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:29:25.550 11:09:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:29:25.550 11:09:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-save 00:29:25.550 11:09:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:29:25.550 11:09:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-restore 00:29:25.550 11:09:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:25.550 11:09:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:25.550 11:09:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:25.550 11:09:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:25.550 11:09:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:27.466 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:27.725 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:29:27.726 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:29:27.726 11:09:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:29:29.636 11:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:29:31.545 11:09:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:29:36.834 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:29:36.834 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:29:36.834 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:36.834 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # prepare_net_devs 00:29:36.834 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@436 -- # local -g is_hw=no 00:29:36.834 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # remove_spdk_ns 00:29:36.834 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:36.834 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:36.834 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:36.834 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:29:36.834 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:29:36.834 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:29:36.834 11:09:56 
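Teardown note: nvmftestfini, traced above, can sweep its firewall changes without touching anything else because every rule the harness adds carries an SPDK_NVMF comment tag. A minimal sketch of that tag-and-sweep pattern, with the interface and port copied from this run:

    # add: tag the rule so it can be identified later
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    # remove: dump the ruleset, drop the tagged lines, load the rest back
    iptables-save | grep -v SPDK_NVMF | iptables-restore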
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:36.834 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:36.834 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:29:36.834 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:36.834 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:36.834 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:36.834 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:36.834 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:36.834 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:29:36.834 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:36.834 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:29:36.834 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:29:36.834 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:29:36.834 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:29:36.834 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:29:36.834 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:29:36.834 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:36.834 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:36.834 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:36.834 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:36.834 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:36.834 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:36.834 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:36.834 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:36.834 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:36.834 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:36.834 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:36.834 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:36.834 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:36.834 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:36.834 11:09:56 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:36.834 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:36.834 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:36.834 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:36.834 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:36.834 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:36.834 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:36.834 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:36.834 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:36.834 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:36.834 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:36.834 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:36.834 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:36.834 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:36.834 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:36.834 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:36.834 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:36.834 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:36.834 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:36.834 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:36.834 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:36.834 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:36.834 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:36.834 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:36.834 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:36.834 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:36.834 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:36.834 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:36.834 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:36.834 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:36.834 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:36.834 Found net devices under 0000:31:00.0: cvl_0_0 00:29:36.835 11:09:56 
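The device walk above is plain sysfs: each supported PCI function is mapped to its kernel net device by globbing /sys/bus/pci/devices/<bdf>/net/. A standalone sketch of the same lookup, with the BDF copied from this run:

    pci=0000:31:00.0
    # each PCI network function lists its netdev name(s) under .../net/
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the sysfs path, keep the name
    echo "Found net devices under $pci: ${pci_net_devs[*]}"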
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:36.835 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:36.835 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:36.835 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:36.835 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:36.835 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:36.835 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:36.835 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:36.835 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:36.835 Found net devices under 0000:31:00.1: cvl_0_1 00:29:36.835 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:36.835 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:29:36.835 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # is_hw=yes 00:29:36.835 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:29:36.835 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:29:36.835 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:29:36.835 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:36.835 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:36.835 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:36.835 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:36.835 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:36.835 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:36.835 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:36.835 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:36.835 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:36.835 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:36.835 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:36.835 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:36.835 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:36.835 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:36.835 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # 
ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:36.835 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:36.835 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:36.835 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:36.835 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:36.835 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:36.835 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:36.835 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:36.835 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:36.835 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:36.835 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.605 ms 00:29:36.835 00:29:36.835 --- 10.0.0.2 ping statistics --- 00:29:36.835 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:36.835 rtt min/avg/max/mdev = 0.605/0.605/0.605/0.000 ms 00:29:36.835 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:36.835 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:36.835 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.259 ms 00:29:36.835 00:29:36.835 --- 10.0.0.1 ping statistics --- 00:29:36.835 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:36.835 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:29:36.835 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:36.835 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # return 0 00:29:36.835 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:29:36.835 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:36.835 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:29:36.835 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:29:36.835 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:36.835 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:29:36.835 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:29:36.835 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:29:36.835 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:29:36.835 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:29:36.835 11:09:56 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:29:36.835 net.core.busy_poll = 1 00:29:36.835 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:29:36.835 net.core.busy_read = 1 00:29:36.835 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:29:36.835 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:29:36.835 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:29:36.835 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:29:36.835 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:29:37.097 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:29:37.097 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:29:37.097 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:37.097 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:37.097 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # nvmfpid=1986886 00:29:37.097 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # waitforlisten 1986886 00:29:37.097 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:29:37.097 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 1986886 ']' 00:29:37.097 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:37.097 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:37.097 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:37.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:37.097 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:37.097 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:37.097 [2024-10-09 11:09:56.905313] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 
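adq_configure_driver, traced above, is the core of the ADQ setup on the E810 port: enable hardware TC offload, disable the channel-pkt-inspect-optimize private flag, turn on kernel busy polling, split the queues into two traffic classes with mqprio, and pin NVMe/TCP port 4420 to TC 1 with a hardware-offloaded flower filter (the harness runs each command inside the cvl_0_0_ns_spdk namespace via ip netns exec, then finishes with its set_xps_rxqs helper). Collected into one sketch, with all values copied from this run:

    dev=cvl_0_0
    ethtool --offload $dev hw-tc-offload on
    ethtool --set-priv-flags $dev channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1
    sysctl -w net.core.busy_read=1
    # two traffic classes: TC0 -> queues 0-1, TC1 -> queues 2-3, ADQ channel mode
    tc qdisc add dev $dev root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    tc qdisc add dev $dev ingress
    # hardware-only (skip_sw) match: NVMe/TCP traffic to 10.0.0.2:4420 lands in TC 1
    tc filter add dev $dev protocol ip parent ffff: prio 1 flower \
        dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1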
00:29:37.097 [2024-10-09 11:09:56.905379] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:37.097 [2024-10-09 11:09:57.046923] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:29:37.097 [2024-10-09 11:09:57.080590] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:37.357 [2024-10-09 11:09:57.104235] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:37.357 [2024-10-09 11:09:57.104277] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:37.357 [2024-10-09 11:09:57.104285] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:37.357 [2024-10-09 11:09:57.104292] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:37.357 [2024-10-09 11:09:57.104298] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:37.357 [2024-10-09 11:09:57.106327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:37.357 [2024-10-09 11:09:57.106441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:37.357 [2024-10-09 11:09:57.106583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:37.357 [2024-10-09 11:09:57.106584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:37.928 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:37.928 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:29:37.928 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:29:37.928 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:37.928 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:37.928 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:37.928 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:29:37.928 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:29:37.928 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:29:37.928 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.928 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:37.928 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.928 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:29:37.928 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:29:37.928 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.928 11:09:57 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:37.928 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.928 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:29:37.928 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.928 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:37.928 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.928 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:29:37.928 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.928 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:37.928 [2024-10-09 11:09:57.875779] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:37.928 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.928 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:29:37.928 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.928 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:37.928 Malloc1 00:29:37.928 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.928 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:37.928 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.928 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:37.928 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.928 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:29:37.928 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.928 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:38.189 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.189 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:38.189 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.189 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:38.189 [2024-10-09 11:09:57.945685] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:38.189 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.189 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=1987115 00:29:38.189 
11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:29:38.189 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:29:40.101 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:29:40.101 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:40.101 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:40.101 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:40.101 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:29:40.101 "tick_rate": 2394400000, 00:29:40.101 "poll_groups": [ 00:29:40.101 { 00:29:40.101 "name": "nvmf_tgt_poll_group_000", 00:29:40.101 "admin_qpairs": 1, 00:29:40.101 "io_qpairs": 4, 00:29:40.101 "current_admin_qpairs": 1, 00:29:40.101 "current_io_qpairs": 4, 00:29:40.101 "pending_bdev_io": 0, 00:29:40.101 "completed_nvme_io": 33125, 00:29:40.101 "transports": [ 00:29:40.101 { 00:29:40.101 "trtype": "TCP" 00:29:40.101 } 00:29:40.101 ] 00:29:40.101 }, 00:29:40.101 { 00:29:40.101 "name": "nvmf_tgt_poll_group_001", 00:29:40.101 "admin_qpairs": 0, 00:29:40.101 "io_qpairs": 0, 00:29:40.101 "current_admin_qpairs": 0, 00:29:40.101 "current_io_qpairs": 0, 00:29:40.101 "pending_bdev_io": 0, 00:29:40.101 "completed_nvme_io": 0, 00:29:40.101 "transports": [ 00:29:40.101 { 00:29:40.101 "trtype": "TCP" 00:29:40.101 } 00:29:40.101 ] 00:29:40.101 }, 00:29:40.101 { 00:29:40.101 "name": "nvmf_tgt_poll_group_002", 00:29:40.101 "admin_qpairs": 0, 00:29:40.101 "io_qpairs": 0, 00:29:40.101 "current_admin_qpairs": 0, 00:29:40.101 "current_io_qpairs": 0, 00:29:40.101 "pending_bdev_io": 0, 00:29:40.101 "completed_nvme_io": 0, 00:29:40.101 "transports": [ 00:29:40.101 { 00:29:40.101 "trtype": "TCP" 00:29:40.101 } 00:29:40.101 ] 00:29:40.101 }, 00:29:40.101 { 00:29:40.101 "name": "nvmf_tgt_poll_group_003", 00:29:40.101 "admin_qpairs": 0, 00:29:40.101 "io_qpairs": 0, 00:29:40.101 "current_admin_qpairs": 0, 00:29:40.101 "current_io_qpairs": 0, 00:29:40.101 "pending_bdev_io": 0, 00:29:40.101 "completed_nvme_io": 0, 00:29:40.101 "transports": [ 00:29:40.101 { 00:29:40.101 "trtype": "TCP" 00:29:40.101 } 00:29:40.101 ] 00:29:40.101 } 00:29:40.101 ] 00:29:40.101 }' 00:29:40.101 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:29:40.101 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:29:40.102 11:10:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=3 00:29:40.102 11:10:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 3 -lt 2 ]] 00:29:40.102 11:10:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 1987115 00:29:48.237 Initializing NVMe Controllers 00:29:48.237 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:48.237 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:29:48.237 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:29:48.237 
Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:29:48.237 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:29:48.237 Initialization complete. Launching workers. 00:29:48.237 ======================================================== 00:29:48.237 Latency(us) 00:29:48.237 Device Information : IOPS MiB/s Average min max 00:29:48.237 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 6699.40 26.17 9585.04 1125.61 60543.65 00:29:48.237 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 6617.90 25.85 9673.51 1378.73 59989.23 00:29:48.237 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 5872.40 22.94 10901.83 1315.69 57159.81 00:29:48.237 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5903.90 23.06 10844.25 1390.48 58660.15 00:29:48.237 ======================================================== 00:29:48.237 Total : 25093.59 98.02 10212.79 1125.61 60543.65 00:29:48.237 00:29:48.237 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:29:48.237 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@514 -- # nvmfcleanup 00:29:48.237 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:29:48.237 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:48.237 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:29:48.237 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:48.237 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:48.496 rmmod nvme_tcp 00:29:48.496 rmmod nvme_fabrics 00:29:48.496 rmmod nvme_keyring 00:29:48.496 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:48.496 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:29:48.496 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:29:48.496 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@515 -- # '[' -n 1986886 ']' 00:29:48.496 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # killprocess 1986886 00:29:48.496 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 1986886 ']' 00:29:48.496 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 1986886 00:29:48.496 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:29:48.496 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:48.496 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1986886 00:29:48.496 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:48.496 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:48.496 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1986886' 00:29:48.496 killing process with pid 1986886 00:29:48.496 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 
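Both perf runs gate on the same nvmf_get_stats bookkeeping, with opposite expectations: the first run (perf_adq.sh@86-87) requires all four poll groups to carry exactly one IO qpair each, while this run (perf_adq.sh@108-109), whose target was started with --enable-placement-id, expects the connections to collapse onto a single poll group and fails only if fewer than two groups sit idle. A hedged standalone sketch of the counting step, assuming SPDK's scripts/rpc.py is on PATH and a target is listening:

    # first check: every poll group should be serving exactly one IO qpair
    busy=$(rpc.py nvmf_get_stats \
        | jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' \
        | wc -l)
    [[ $busy -ne 4 ]] && echo 'ADQ steering check failed'
    # second check: with placement-id grouping, most poll groups should be idle
    idle=$(rpc.py nvmf_get_stats \
        | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' \
        | wc -l)
    [[ $idle -lt 2 ]] && echo 'placement-id grouping check failed'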
1986886 00:29:48.496 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 1986886 00:29:48.496 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:29:48.496 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:29:48.496 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:29:48.496 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:29:48.757 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-save 00:29:48.757 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:29:48.757 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-restore 00:29:48.757 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:48.757 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:48.757 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:48.757 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:48.757 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:52.056 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:52.056 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:29:52.056 00:29:52.056 real 0m54.325s 00:29:52.056 user 2m50.142s 00:29:52.056 sys 0m10.980s 00:29:52.056 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:52.056 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:52.056 ************************************ 00:29:52.056 END TEST nvmf_perf_adq 00:29:52.056 ************************************ 00:29:52.056 11:10:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:29:52.056 11:10:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:52.056 11:10:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:52.056 11:10:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:29:52.056 ************************************ 00:29:52.056 START TEST nvmf_shutdown 00:29:52.056 ************************************ 00:29:52.056 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:29:52.056 * Looking for test storage... 
00:29:52.056 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:52.056 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:52.056 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lcov --version 00:29:52.056 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:52.056 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:52.056 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:52.056 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:52.056 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:52.056 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:29:52.056 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:29:52.056 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:29:52.056 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:29:52.056 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:29:52.056 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:29:52.056 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:29:52.056 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:52.056 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:29:52.056 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:29:52.056 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:52.056 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:52.056 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:29:52.056 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:29:52.056 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:52.056 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:29:52.056 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:29:52.056 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:29:52.056 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:29:52.056 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:52.056 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:29:52.056 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:29:52.056 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:52.056 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:52.056 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:29:52.056 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:52.056 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:52.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:52.056 --rc genhtml_branch_coverage=1 00:29:52.056 --rc genhtml_function_coverage=1 00:29:52.056 --rc genhtml_legend=1 00:29:52.056 --rc geninfo_all_blocks=1 00:29:52.056 --rc geninfo_unexecuted_blocks=1 00:29:52.056 00:29:52.056 ' 00:29:52.056 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:52.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:52.056 --rc genhtml_branch_coverage=1 00:29:52.056 --rc genhtml_function_coverage=1 00:29:52.056 --rc genhtml_legend=1 00:29:52.056 --rc geninfo_all_blocks=1 00:29:52.056 --rc geninfo_unexecuted_blocks=1 00:29:52.056 00:29:52.056 ' 00:29:52.056 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:52.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:52.056 --rc genhtml_branch_coverage=1 00:29:52.056 --rc genhtml_function_coverage=1 00:29:52.056 --rc genhtml_legend=1 00:29:52.056 --rc geninfo_all_blocks=1 00:29:52.056 --rc geninfo_unexecuted_blocks=1 00:29:52.056 00:29:52.056 ' 00:29:52.056 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:52.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:52.056 --rc genhtml_branch_coverage=1 00:29:52.056 --rc genhtml_function_coverage=1 00:29:52.056 --rc genhtml_legend=1 00:29:52.056 --rc geninfo_all_blocks=1 00:29:52.056 --rc geninfo_unexecuted_blocks=1 00:29:52.056 00:29:52.056 ' 00:29:52.056 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:52.056 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
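The lt trace above (scripts/common.sh@333-368) is a generic dotted-version comparator: both strings are split on '.', '-' and ':' and the components are compared numerically left to right, which is how the harness decides that lcov 1.15 predates 2. A condensed sketch of the same logic (version_lt is a hypothetical name; the harness spells it lt/cmp_versions, and components are assumed numeric):

    version_lt() {   # succeed if $1 < $2
        local -a v1 v2; local i
        IFS='.-:' read -ra v1 <<< "$1"
        IFS='.-:' read -ra v2 <<< "$2"
        for (( i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1   # equal versions are not less-than
    }
    version_lt 1.15 2 && echo 'lcov 1.15 is older than 2'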
00:29:52.056 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:52.056 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:52.056 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:52.056 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:52.057 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:52.057 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:52.057 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:52.057 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:52.057 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:52.057 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:52.057 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:52.057 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:52.057 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:52.057 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:52.057 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:52.057 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:52.057 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:52.057 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:29:52.057 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:52.057 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:52.057 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:52.057 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.057 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.057 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.057 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:29:52.057 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.057 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:29:52.057 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:52.057 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:52.057 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:52.057 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:52.057 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:52.057 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:52.057 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:52.057 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:52.057 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:52.057 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:52.057 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:29:52.057 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:29:52.057 11:10:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:29:52.057 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:52.057 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:52.057 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:52.057 ************************************ 00:29:52.057 START TEST nvmf_shutdown_tc1 00:29:52.057 ************************************ 00:29:52.057 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc1 00:29:52.057 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:29:52.057 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:52.057 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:29:52.057 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:52.057 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # prepare_net_devs 00:29:52.057 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:29:52.057 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:29:52.057 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:52.057 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:52.057 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:52.057 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:29:52.057 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:29:52.057 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:52.057 11:10:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:00.192 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:00.192 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:30:00.192 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:00.192 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:00.192 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:00.192 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:00.192 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:00.192 11:10:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:30:00.192 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:00.192 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:30:00.192 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:30:00.192 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:30:00.192 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:30:00.192 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:30:00.192 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:30:00.192 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:00.192 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:00.192 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:00.192 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:00.192 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:00.192 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:00.192 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:00.192 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:00.192 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:00.192 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:00.192 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:00.192 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:00.192 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:00.192 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:00.192 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:00.192 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:00.192 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:00.192 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:00.192 11:10:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:00.192 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:00.192 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:00.192 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:00.192 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:00.192 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:00.192 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:00.192 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:00.192 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:00.192 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:00.192 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:00.192 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:00.192 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:00.192 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:00.192 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:00.192 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:00.192 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:00.192 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:00.192 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:00.192 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:00.192 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:00.192 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:00.192 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:00.192 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:00.192 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:00.192 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:00.192 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:00.192 Found net devices under 0000:31:00.0: cvl_0_0 00:30:00.192 11:10:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:00.192 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:00.192 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:00.192 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:00.192 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:00.192 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:00.192 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:00.192 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:00.192 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:00.192 Found net devices under 0000:31:00.1: cvl_0_1 00:30:00.192 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:00.192 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:30:00.192 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # is_hw=yes 00:30:00.192 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:30:00.192 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:30:00.192 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:30:00.192 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:00.192 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:00.192 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:00.192 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:00.192 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:00.192 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:00.192 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:00.192 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:00.192 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:00.192 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:00.192 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:30:00.192 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:00.192 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:00.192 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:00.192 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:00.192 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:00.192 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:00.192 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:00.192 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:00.192 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:00.192 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:00.192 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:00.193 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:00.193 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:00.193 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.619 ms 00:30:00.193 00:30:00.193 --- 10.0.0.2 ping statistics --- 00:30:00.193 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:00.193 rtt min/avg/max/mdev = 0.619/0.619/0.619/0.000 ms 00:30:00.193 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:00.193 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:00.193 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.330 ms 00:30:00.193 00:30:00.193 --- 10.0.0.1 ping statistics --- 00:30:00.193 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:00.193 rtt min/avg/max/mdev = 0.330/0.330/0.330/0.000 ms 00:30:00.193 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:00.193 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # return 0 00:30:00.193 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:30:00.193 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:00.193 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:30:00.193 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:30:00.193 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:00.193 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:30:00.193 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:30:00.193 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:30:00.193 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:30:00.193 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:00.193 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:00.193 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # nvmfpid=1993645 00:30:00.193 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # waitforlisten 1993645 00:30:00.193 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:30:00.193 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 1993645 ']' 00:30:00.193 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:00.193 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:00.193 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:00.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
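The trace above carved a point-to-point NVMe/TCP link out of a single host: one port of the dual-port NIC was moved into a private network namespace to act as the target side, while the other port stayed in the root namespace as the initiator. A minimal sketch of that setup, using the interface and namespace names from the log (error handling and the harness's wrapper functions omitted):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port leaves the root namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# open the NVMe/TCP port; the harness tags the rule with an SPDK_NVMF comment for cleanup
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator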
00:30:00.193 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:00.193 11:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:00.193 [2024-10-09 11:10:19.442191] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:30:00.193 [2024-10-09 11:10:19.442258] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:00.193 [2024-10-09 11:10:19.584897] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:00.193 [2024-10-09 11:10:19.634673] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:00.193 [2024-10-09 11:10:19.657055] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:00.193 [2024-10-09 11:10:19.657090] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:00.193 [2024-10-09 11:10:19.657102] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:00.193 [2024-10-09 11:10:19.657110] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:00.193 [2024-10-09 11:10:19.657115] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:00.193 [2024-10-09 11:10:19.659081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:00.193 [2024-10-09 11:10:19.659242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:00.193 [2024-10-09 11:10:19.659400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:00.193 [2024-10-09 11:10:19.659400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:00.453 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:00.453 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:30:00.453 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:30:00.453 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:00.453 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:00.453 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:00.453 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:00.453 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:00.453 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:00.453 [2024-10-09 11:10:20.283941] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:00.453 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
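With both directions pinging, the target was launched inside the namespace (ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x1E, reactors on cores 1-4) and rpc_cmd now drives it over its Unix socket, which stays reachable from the root shell because a network namespace does not isolate the filesystem. The transport-create step traced above amounts, in effect, to the stock rpc.py call below; the socket path and flags mirror the trace, while the exact wrapper plumbing belongs to the harness:

scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o -u 8192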
00:30:00.453 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:30:00.453 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:30:00.453 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:00.453 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:00.453 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:00.453 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:00.453 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:30:00.453 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:00.453 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:30:00.453 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:00.454 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:30:00.454 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:00.454 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:30:00.454 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:00.454 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:30:00.454 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:00.454 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:30:00.454 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:00.454 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:30:00.454 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:00.454 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:30:00.454 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:00.454 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:30:00.454 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:00.454 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:30:00.454 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:30:00.454 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:30:00.454 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:00.454 Malloc1 00:30:00.454 [2024-10-09 11:10:20.408582] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:00.454 Malloc2 00:30:00.714 Malloc3 00:30:00.714 Malloc4 00:30:00.714 Malloc5 00:30:00.714 Malloc6 00:30:00.714 Malloc7 00:30:00.714 Malloc8 00:30:00.714 Malloc9 00:30:00.976 Malloc10 00:30:00.976 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:00.976 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:30:00.976 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:00.976 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:00.976 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=1994024 00:30:00.976 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 1994024 /var/tmp/bdevperf.sock 00:30:00.976 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 1994024 ']' 00:30:00.976 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:00.976 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:00.976 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:30:00.976 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:00.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
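Each Malloc bdev above backs one of the ten subsystems (cnode1-10) now listening on 10.0.0.2:4420, and the helper app is started with its attach config delivered through /dev/fd/63, the descriptor bash allocates for a process substitution, so no config file is ever written to disk. A condensed sketch of the launch-and-wait pattern from the trace (gen_nvmf_target_json and waitforlisten are the harness helpers seen in the log):

test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json {1..10}) &
perfpid=$!
waitforlisten "$perfpid" /var/tmp/bdevperf.sock   # poll until the RPC socket answers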
00:30:00.976 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:30:00.976 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:00.976 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:00.976 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # config=() 00:30:00.976 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # local subsystem config 00:30:00.976 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:00.976 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:00.976 { 00:30:00.976 "params": { 00:30:00.976 "name": "Nvme$subsystem", 00:30:00.976 "trtype": "$TEST_TRANSPORT", 00:30:00.976 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:00.976 "adrfam": "ipv4", 00:30:00.976 "trsvcid": "$NVMF_PORT", 00:30:00.976 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:00.976 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:00.976 "hdgst": ${hdgst:-false}, 00:30:00.976 "ddgst": ${ddgst:-false} 00:30:00.976 }, 00:30:00.976 "method": "bdev_nvme_attach_controller" 00:30:00.976 } 00:30:00.976 EOF 00:30:00.976 )") 00:30:00.976 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:30:00.976 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:00.976 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:00.976 { 00:30:00.976 "params": { 00:30:00.976 "name": "Nvme$subsystem", 00:30:00.976 "trtype": "$TEST_TRANSPORT", 00:30:00.976 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:00.976 "adrfam": "ipv4", 00:30:00.976 "trsvcid": "$NVMF_PORT", 00:30:00.976 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:00.976 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:00.976 "hdgst": ${hdgst:-false}, 00:30:00.976 "ddgst": ${ddgst:-false} 00:30:00.976 }, 00:30:00.976 "method": "bdev_nvme_attach_controller" 00:30:00.976 } 00:30:00.976 EOF 00:30:00.976 )") 00:30:00.976 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:30:00.976 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:00.976 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:00.976 { 00:30:00.976 "params": { 00:30:00.976 "name": "Nvme$subsystem", 00:30:00.976 "trtype": "$TEST_TRANSPORT", 00:30:00.976 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:00.976 "adrfam": "ipv4", 00:30:00.976 "trsvcid": "$NVMF_PORT", 00:30:00.976 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:00.976 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:00.976 "hdgst": ${hdgst:-false}, 00:30:00.976 "ddgst": ${ddgst:-false} 00:30:00.976 }, 00:30:00.976 "method": "bdev_nvme_attach_controller" 00:30:00.976 } 00:30:00.976 EOF 00:30:00.976 )") 00:30:00.976 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:30:00.976 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:00.976 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:00.976 { 00:30:00.976 "params": { 00:30:00.976 "name": "Nvme$subsystem", 00:30:00.976 "trtype": "$TEST_TRANSPORT", 00:30:00.976 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:00.976 "adrfam": "ipv4", 00:30:00.976 "trsvcid": "$NVMF_PORT", 00:30:00.976 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:00.976 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:00.976 "hdgst": ${hdgst:-false}, 00:30:00.976 "ddgst": ${ddgst:-false} 00:30:00.976 }, 00:30:00.976 "method": "bdev_nvme_attach_controller" 00:30:00.976 } 00:30:00.976 EOF 00:30:00.976 )") 00:30:00.976 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:30:00.976 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:00.976 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:00.976 { 00:30:00.976 "params": { 00:30:00.976 "name": "Nvme$subsystem", 00:30:00.976 "trtype": "$TEST_TRANSPORT", 00:30:00.976 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:00.976 "adrfam": "ipv4", 00:30:00.976 "trsvcid": "$NVMF_PORT", 00:30:00.976 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:00.976 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:00.976 "hdgst": ${hdgst:-false}, 00:30:00.976 "ddgst": ${ddgst:-false} 00:30:00.976 }, 00:30:00.976 "method": "bdev_nvme_attach_controller" 00:30:00.976 } 00:30:00.976 EOF 00:30:00.976 )") 00:30:00.976 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:30:00.976 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:00.976 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:00.976 { 00:30:00.976 "params": { 00:30:00.976 "name": "Nvme$subsystem", 00:30:00.976 "trtype": "$TEST_TRANSPORT", 00:30:00.976 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:00.976 "adrfam": "ipv4", 00:30:00.976 "trsvcid": "$NVMF_PORT", 00:30:00.976 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:00.976 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:00.976 "hdgst": ${hdgst:-false}, 00:30:00.976 "ddgst": ${ddgst:-false} 00:30:00.976 }, 00:30:00.976 "method": "bdev_nvme_attach_controller" 00:30:00.976 } 00:30:00.976 EOF 00:30:00.976 )") 00:30:00.976 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:30:00.976 [2024-10-09 11:10:20.857574] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 
00:30:00.977 [2024-10-09 11:10:20.857625] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:30:00.977 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:00.977 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:00.977 { 00:30:00.977 "params": { 00:30:00.977 "name": "Nvme$subsystem", 00:30:00.977 "trtype": "$TEST_TRANSPORT", 00:30:00.977 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:00.977 "adrfam": "ipv4", 00:30:00.977 "trsvcid": "$NVMF_PORT", 00:30:00.977 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:00.977 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:00.977 "hdgst": ${hdgst:-false}, 00:30:00.977 "ddgst": ${ddgst:-false} 00:30:00.977 }, 00:30:00.977 "method": "bdev_nvme_attach_controller" 00:30:00.977 } 00:30:00.977 EOF 00:30:00.977 )") 00:30:00.977 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:30:00.977 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:00.977 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:00.977 { 00:30:00.977 "params": { 00:30:00.977 "name": "Nvme$subsystem", 00:30:00.977 "trtype": "$TEST_TRANSPORT", 00:30:00.977 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:00.977 "adrfam": "ipv4", 00:30:00.977 "trsvcid": "$NVMF_PORT", 00:30:00.977 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:00.977 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:00.977 "hdgst": ${hdgst:-false}, 00:30:00.977 "ddgst": ${ddgst:-false} 00:30:00.977 }, 00:30:00.977 "method": "bdev_nvme_attach_controller" 00:30:00.977 } 00:30:00.977 EOF 00:30:00.977 )") 00:30:00.977 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:30:00.977 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:00.977 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:00.977 { 00:30:00.977 "params": { 00:30:00.977 "name": "Nvme$subsystem", 00:30:00.977 "trtype": "$TEST_TRANSPORT", 00:30:00.977 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:00.977 "adrfam": "ipv4", 00:30:00.977 "trsvcid": "$NVMF_PORT", 00:30:00.977 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:00.977 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:00.977 "hdgst": ${hdgst:-false}, 00:30:00.977 "ddgst": ${ddgst:-false} 00:30:00.977 }, 00:30:00.977 "method": "bdev_nvme_attach_controller" 00:30:00.977 } 00:30:00.977 EOF 00:30:00.977 )") 00:30:00.977 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:30:00.977 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:00.977 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:00.977 { 00:30:00.977 "params": { 00:30:00.977 "name": "Nvme$subsystem", 00:30:00.977 "trtype": "$TEST_TRANSPORT", 00:30:00.977 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:00.977 "adrfam": "ipv4", 
00:30:00.977 "trsvcid": "$NVMF_PORT", 00:30:00.977 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:00.977 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:00.977 "hdgst": ${hdgst:-false}, 00:30:00.977 "ddgst": ${ddgst:-false} 00:30:00.977 }, 00:30:00.977 "method": "bdev_nvme_attach_controller" 00:30:00.977 } 00:30:00.977 EOF 00:30:00.977 )") 00:30:00.977 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:30:00.977 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # jq . 00:30:00.977 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@583 -- # IFS=, 00:30:00.977 11:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:30:00.977 "params": { 00:30:00.977 "name": "Nvme1", 00:30:00.977 "trtype": "tcp", 00:30:00.977 "traddr": "10.0.0.2", 00:30:00.977 "adrfam": "ipv4", 00:30:00.977 "trsvcid": "4420", 00:30:00.977 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:00.977 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:00.977 "hdgst": false, 00:30:00.977 "ddgst": false 00:30:00.977 }, 00:30:00.977 "method": "bdev_nvme_attach_controller" 00:30:00.977 },{ 00:30:00.977 "params": { 00:30:00.977 "name": "Nvme2", 00:30:00.977 "trtype": "tcp", 00:30:00.977 "traddr": "10.0.0.2", 00:30:00.977 "adrfam": "ipv4", 00:30:00.977 "trsvcid": "4420", 00:30:00.977 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:00.977 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:00.977 "hdgst": false, 00:30:00.977 "ddgst": false 00:30:00.977 }, 00:30:00.977 "method": "bdev_nvme_attach_controller" 00:30:00.977 },{ 00:30:00.977 "params": { 00:30:00.977 "name": "Nvme3", 00:30:00.977 "trtype": "tcp", 00:30:00.977 "traddr": "10.0.0.2", 00:30:00.977 "adrfam": "ipv4", 00:30:00.977 "trsvcid": "4420", 00:30:00.977 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:30:00.977 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:30:00.977 "hdgst": false, 00:30:00.977 "ddgst": false 00:30:00.977 }, 00:30:00.977 "method": "bdev_nvme_attach_controller" 00:30:00.977 },{ 00:30:00.977 "params": { 00:30:00.977 "name": "Nvme4", 00:30:00.977 "trtype": "tcp", 00:30:00.977 "traddr": "10.0.0.2", 00:30:00.977 "adrfam": "ipv4", 00:30:00.977 "trsvcid": "4420", 00:30:00.977 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:30:00.977 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:30:00.977 "hdgst": false, 00:30:00.977 "ddgst": false 00:30:00.977 }, 00:30:00.977 "method": "bdev_nvme_attach_controller" 00:30:00.977 },{ 00:30:00.977 "params": { 00:30:00.977 "name": "Nvme5", 00:30:00.977 "trtype": "tcp", 00:30:00.977 "traddr": "10.0.0.2", 00:30:00.977 "adrfam": "ipv4", 00:30:00.977 "trsvcid": "4420", 00:30:00.977 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:30:00.977 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:30:00.977 "hdgst": false, 00:30:00.977 "ddgst": false 00:30:00.977 }, 00:30:00.977 "method": "bdev_nvme_attach_controller" 00:30:00.977 },{ 00:30:00.977 "params": { 00:30:00.977 "name": "Nvme6", 00:30:00.977 "trtype": "tcp", 00:30:00.977 "traddr": "10.0.0.2", 00:30:00.977 "adrfam": "ipv4", 00:30:00.977 "trsvcid": "4420", 00:30:00.977 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:30:00.977 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:30:00.977 "hdgst": false, 00:30:00.977 "ddgst": false 00:30:00.977 }, 00:30:00.977 "method": "bdev_nvme_attach_controller" 00:30:00.977 },{ 00:30:00.977 "params": { 00:30:00.977 "name": "Nvme7", 00:30:00.977 "trtype": "tcp", 00:30:00.977 "traddr": "10.0.0.2", 00:30:00.977 
"adrfam": "ipv4", 00:30:00.977 "trsvcid": "4420", 00:30:00.977 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:30:00.977 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:30:00.977 "hdgst": false, 00:30:00.977 "ddgst": false 00:30:00.977 }, 00:30:00.977 "method": "bdev_nvme_attach_controller" 00:30:00.977 },{ 00:30:00.977 "params": { 00:30:00.977 "name": "Nvme8", 00:30:00.977 "trtype": "tcp", 00:30:00.977 "traddr": "10.0.0.2", 00:30:00.977 "adrfam": "ipv4", 00:30:00.977 "trsvcid": "4420", 00:30:00.977 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:30:00.977 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:30:00.977 "hdgst": false, 00:30:00.977 "ddgst": false 00:30:00.977 }, 00:30:00.977 "method": "bdev_nvme_attach_controller" 00:30:00.977 },{ 00:30:00.977 "params": { 00:30:00.977 "name": "Nvme9", 00:30:00.977 "trtype": "tcp", 00:30:00.977 "traddr": "10.0.0.2", 00:30:00.977 "adrfam": "ipv4", 00:30:00.977 "trsvcid": "4420", 00:30:00.977 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:30:00.977 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:30:00.977 "hdgst": false, 00:30:00.977 "ddgst": false 00:30:00.977 }, 00:30:00.977 "method": "bdev_nvme_attach_controller" 00:30:00.977 },{ 00:30:00.977 "params": { 00:30:00.977 "name": "Nvme10", 00:30:00.977 "trtype": "tcp", 00:30:00.977 "traddr": "10.0.0.2", 00:30:00.977 "adrfam": "ipv4", 00:30:00.977 "trsvcid": "4420", 00:30:00.977 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:30:00.977 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:30:00.977 "hdgst": false, 00:30:00.977 "ddgst": false 00:30:00.977 }, 00:30:00.977 "method": "bdev_nvme_attach_controller" 00:30:00.977 }' 00:30:01.286 [2024-10-09 11:10:20.988993] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:30:01.286 [2024-10-09 11:10:21.020537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:01.286 [2024-10-09 11:10:21.038748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:02.740 11:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:02.740 11:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:30:02.740 11:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:30:02.740 11:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:02.741 11:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:02.741 11:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:02.741 11:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 1994024 00:30:02.741 11:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:30:02.741 11:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:30:03.683 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 1994024 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:30:03.683 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 1993645 00:30:03.683 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:30:03.683 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:30:03.683 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # config=() 00:30:03.683 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # local subsystem config 00:30:03.683 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:03.683 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:03.683 { 00:30:03.683 "params": { 00:30:03.683 "name": "Nvme$subsystem", 00:30:03.683 "trtype": "$TEST_TRANSPORT", 00:30:03.683 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:03.683 "adrfam": "ipv4", 00:30:03.683 "trsvcid": "$NVMF_PORT", 00:30:03.683 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:03.683 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:03.683 "hdgst": ${hdgst:-false}, 00:30:03.683 "ddgst": ${ddgst:-false} 00:30:03.683 }, 00:30:03.683 "method": "bdev_nvme_attach_controller" 00:30:03.683 } 00:30:03.683 EOF 00:30:03.683 )") 00:30:03.683 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:30:03.683 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:03.683 11:10:23 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:03.683 { 00:30:03.683 "params": { 00:30:03.683 "name": "Nvme$subsystem", 00:30:03.683 "trtype": "$TEST_TRANSPORT", 00:30:03.683 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:03.683 "adrfam": "ipv4", 00:30:03.683 "trsvcid": "$NVMF_PORT", 00:30:03.683 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:03.683 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:03.683 "hdgst": ${hdgst:-false}, 00:30:03.683 "ddgst": ${ddgst:-false} 00:30:03.683 }, 00:30:03.683 "method": "bdev_nvme_attach_controller" 00:30:03.683 } 00:30:03.683 EOF 00:30:03.683 )") 00:30:03.683 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:30:03.683 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:03.683 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:03.683 { 00:30:03.683 "params": { 00:30:03.683 "name": "Nvme$subsystem", 00:30:03.683 "trtype": "$TEST_TRANSPORT", 00:30:03.683 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:03.683 "adrfam": "ipv4", 00:30:03.683 "trsvcid": "$NVMF_PORT", 00:30:03.683 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:03.683 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:03.683 "hdgst": ${hdgst:-false}, 00:30:03.683 "ddgst": ${ddgst:-false} 00:30:03.683 }, 00:30:03.683 "method": "bdev_nvme_attach_controller" 00:30:03.683 } 00:30:03.683 EOF 00:30:03.683 )") 00:30:03.683 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:30:03.683 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:03.683 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:03.683 { 00:30:03.683 "params": { 00:30:03.683 "name": "Nvme$subsystem", 00:30:03.683 "trtype": "$TEST_TRANSPORT", 00:30:03.683 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:03.683 "adrfam": "ipv4", 00:30:03.683 "trsvcid": "$NVMF_PORT", 00:30:03.683 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:03.683 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:03.683 "hdgst": ${hdgst:-false}, 00:30:03.683 "ddgst": ${ddgst:-false} 00:30:03.683 }, 00:30:03.683 "method": "bdev_nvme_attach_controller" 00:30:03.683 } 00:30:03.683 EOF 00:30:03.683 )") 00:30:03.683 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:30:03.683 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:03.683 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:03.683 { 00:30:03.683 "params": { 00:30:03.683 "name": "Nvme$subsystem", 00:30:03.683 "trtype": "$TEST_TRANSPORT", 00:30:03.683 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:03.683 "adrfam": "ipv4", 00:30:03.683 "trsvcid": "$NVMF_PORT", 00:30:03.683 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:03.683 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:03.683 "hdgst": ${hdgst:-false}, 00:30:03.683 "ddgst": ${ddgst:-false} 00:30:03.683 }, 00:30:03.683 "method": "bdev_nvme_attach_controller" 00:30:03.683 } 00:30:03.683 EOF 00:30:03.683 )") 00:30:03.683 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@580 -- # cat 00:30:03.683 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:03.683 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:03.683 { 00:30:03.683 "params": { 00:30:03.683 "name": "Nvme$subsystem", 00:30:03.683 "trtype": "$TEST_TRANSPORT", 00:30:03.684 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:03.684 "adrfam": "ipv4", 00:30:03.684 "trsvcid": "$NVMF_PORT", 00:30:03.684 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:03.684 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:03.684 "hdgst": ${hdgst:-false}, 00:30:03.684 "ddgst": ${ddgst:-false} 00:30:03.684 }, 00:30:03.684 "method": "bdev_nvme_attach_controller" 00:30:03.684 } 00:30:03.684 EOF 00:30:03.684 )") 00:30:03.684 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:30:03.684 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:03.684 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:03.684 { 00:30:03.684 "params": { 00:30:03.684 "name": "Nvme$subsystem", 00:30:03.684 "trtype": "$TEST_TRANSPORT", 00:30:03.684 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:03.684 "adrfam": "ipv4", 00:30:03.684 "trsvcid": "$NVMF_PORT", 00:30:03.684 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:03.684 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:03.684 "hdgst": ${hdgst:-false}, 00:30:03.684 "ddgst": ${ddgst:-false} 00:30:03.684 }, 00:30:03.684 "method": "bdev_nvme_attach_controller" 00:30:03.684 } 00:30:03.684 EOF 00:30:03.684 )") 00:30:03.684 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:30:03.684 [2024-10-09 11:10:23.566676] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 
00:30:03.684 [2024-10-09 11:10:23.566733] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1994436 ] 00:30:03.684 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:03.684 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:03.684 { 00:30:03.684 "params": { 00:30:03.684 "name": "Nvme$subsystem", 00:30:03.684 "trtype": "$TEST_TRANSPORT", 00:30:03.684 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:03.684 "adrfam": "ipv4", 00:30:03.684 "trsvcid": "$NVMF_PORT", 00:30:03.684 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:03.684 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:03.684 "hdgst": ${hdgst:-false}, 00:30:03.684 "ddgst": ${ddgst:-false} 00:30:03.684 }, 00:30:03.684 "method": "bdev_nvme_attach_controller" 00:30:03.684 } 00:30:03.684 EOF 00:30:03.684 )") 00:30:03.684 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:30:03.684 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:03.684 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:03.684 { 00:30:03.684 "params": { 00:30:03.684 "name": "Nvme$subsystem", 00:30:03.684 "trtype": "$TEST_TRANSPORT", 00:30:03.684 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:03.684 "adrfam": "ipv4", 00:30:03.684 "trsvcid": "$NVMF_PORT", 00:30:03.684 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:03.684 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:03.684 "hdgst": ${hdgst:-false}, 00:30:03.684 "ddgst": ${ddgst:-false} 00:30:03.684 }, 00:30:03.684 "method": "bdev_nvme_attach_controller" 00:30:03.684 } 00:30:03.684 EOF 00:30:03.684 )") 00:30:03.684 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:30:03.684 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:03.684 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:03.684 { 00:30:03.684 "params": { 00:30:03.684 "name": "Nvme$subsystem", 00:30:03.684 "trtype": "$TEST_TRANSPORT", 00:30:03.684 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:03.684 "adrfam": "ipv4", 00:30:03.684 "trsvcid": "$NVMF_PORT", 00:30:03.684 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:03.684 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:03.684 "hdgst": ${hdgst:-false}, 00:30:03.684 "ddgst": ${ddgst:-false} 00:30:03.684 }, 00:30:03.684 "method": "bdev_nvme_attach_controller" 00:30:03.684 } 00:30:03.684 EOF 00:30:03.684 )") 00:30:03.684 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:30:03.684 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # jq . 
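A second copy of that config is being assembled here for bdevperf proper, which was launched with -q 64 -o 65536 -w verify -t 1: queue depth 64, 64 KiB I/Os, a verify workload (written data is read back and checked), for one second per device. The summary it prints later ("1803.00 IOPS, 112.69 MiB/s") is internally consistent with those flags, since MiB/s = IOPS x I/O size / 2^20:

awk 'BEGIN { printf "%.2f MiB/s\n", 1803.00 * 65536 / 1048576 }'   # -> 112.69 MiB/s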
00:30:03.684 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@583 -- # IFS=, 00:30:03.684 11:10:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:30:03.684 "params": { 00:30:03.684 "name": "Nvme1", 00:30:03.684 "trtype": "tcp", 00:30:03.684 "traddr": "10.0.0.2", 00:30:03.684 "adrfam": "ipv4", 00:30:03.684 "trsvcid": "4420", 00:30:03.684 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:03.684 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:03.684 "hdgst": false, 00:30:03.684 "ddgst": false 00:30:03.684 }, 00:30:03.684 "method": "bdev_nvme_attach_controller" 00:30:03.684 },{ 00:30:03.684 "params": { 00:30:03.684 "name": "Nvme2", 00:30:03.684 "trtype": "tcp", 00:30:03.684 "traddr": "10.0.0.2", 00:30:03.684 "adrfam": "ipv4", 00:30:03.684 "trsvcid": "4420", 00:30:03.684 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:03.684 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:03.684 "hdgst": false, 00:30:03.684 "ddgst": false 00:30:03.684 }, 00:30:03.684 "method": "bdev_nvme_attach_controller" 00:30:03.684 },{ 00:30:03.684 "params": { 00:30:03.684 "name": "Nvme3", 00:30:03.684 "trtype": "tcp", 00:30:03.684 "traddr": "10.0.0.2", 00:30:03.684 "adrfam": "ipv4", 00:30:03.684 "trsvcid": "4420", 00:30:03.684 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:30:03.684 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:30:03.684 "hdgst": false, 00:30:03.684 "ddgst": false 00:30:03.684 }, 00:30:03.684 "method": "bdev_nvme_attach_controller" 00:30:03.684 },{ 00:30:03.684 "params": { 00:30:03.684 "name": "Nvme4", 00:30:03.684 "trtype": "tcp", 00:30:03.684 "traddr": "10.0.0.2", 00:30:03.684 "adrfam": "ipv4", 00:30:03.684 "trsvcid": "4420", 00:30:03.684 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:30:03.684 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:30:03.684 "hdgst": false, 00:30:03.684 "ddgst": false 00:30:03.684 }, 00:30:03.684 "method": "bdev_nvme_attach_controller" 00:30:03.684 },{ 00:30:03.684 "params": { 00:30:03.684 "name": "Nvme5", 00:30:03.684 "trtype": "tcp", 00:30:03.684 "traddr": "10.0.0.2", 00:30:03.684 "adrfam": "ipv4", 00:30:03.684 "trsvcid": "4420", 00:30:03.684 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:30:03.684 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:30:03.684 "hdgst": false, 00:30:03.684 "ddgst": false 00:30:03.684 }, 00:30:03.684 "method": "bdev_nvme_attach_controller" 00:30:03.684 },{ 00:30:03.684 "params": { 00:30:03.684 "name": "Nvme6", 00:30:03.684 "trtype": "tcp", 00:30:03.684 "traddr": "10.0.0.2", 00:30:03.684 "adrfam": "ipv4", 00:30:03.684 "trsvcid": "4420", 00:30:03.684 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:30:03.684 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:30:03.684 "hdgst": false, 00:30:03.684 "ddgst": false 00:30:03.684 }, 00:30:03.684 "method": "bdev_nvme_attach_controller" 00:30:03.684 },{ 00:30:03.684 "params": { 00:30:03.684 "name": "Nvme7", 00:30:03.684 "trtype": "tcp", 00:30:03.684 "traddr": "10.0.0.2", 00:30:03.684 "adrfam": "ipv4", 00:30:03.684 "trsvcid": "4420", 00:30:03.684 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:30:03.684 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:30:03.684 "hdgst": false, 00:30:03.684 "ddgst": false 00:30:03.684 }, 00:30:03.684 "method": "bdev_nvme_attach_controller" 00:30:03.684 },{ 00:30:03.684 "params": { 00:30:03.684 "name": "Nvme8", 00:30:03.684 "trtype": "tcp", 00:30:03.684 "traddr": "10.0.0.2", 00:30:03.684 "adrfam": "ipv4", 00:30:03.684 "trsvcid": "4420", 00:30:03.684 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:30:03.684 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:30:03.684 "hdgst": false, 00:30:03.684 "ddgst": false 00:30:03.684 }, 00:30:03.684 "method": "bdev_nvme_attach_controller" 00:30:03.684 },{ 00:30:03.684 "params": { 00:30:03.684 "name": "Nvme9", 00:30:03.684 "trtype": "tcp", 00:30:03.684 "traddr": "10.0.0.2", 00:30:03.684 "adrfam": "ipv4", 00:30:03.684 "trsvcid": "4420", 00:30:03.684 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:30:03.684 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:30:03.684 "hdgst": false, 00:30:03.684 "ddgst": false 00:30:03.684 }, 00:30:03.684 "method": "bdev_nvme_attach_controller" 00:30:03.684 },{ 00:30:03.684 "params": { 00:30:03.684 "name": "Nvme10", 00:30:03.684 "trtype": "tcp", 00:30:03.684 "traddr": "10.0.0.2", 00:30:03.684 "adrfam": "ipv4", 00:30:03.684 "trsvcid": "4420", 00:30:03.684 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:30:03.684 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:30:03.684 "hdgst": false, 00:30:03.684 "ddgst": false 00:30:03.684 }, 00:30:03.684 "method": "bdev_nvme_attach_controller" 00:30:03.684 }' 00:30:03.945 [2024-10-09 11:10:23.714515] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:03.945 [2024-10-09 11:10:23.745618] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:03.945 [2024-10-09 11:10:23.763494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:05.329 Running I/O for 1 seconds... 00:30:06.530 1803.00 IOPS, 112.69 MiB/s 00:30:06.530 Latency(us) 00:30:06.530 [2024-10-09T09:10:26.532Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:06.530 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:06.530 Verification LBA range: start 0x0 length 0x400 00:30:06.530 Nvme1n1 : 1.12 229.00 14.31 0.00 0.00 276444.29 17736.10 248743.39 00:30:06.530 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:06.530 Verification LBA range: start 0x0 length 0x400 00:30:06.530 Nvme2n1 : 1.05 183.52 11.47 0.00 0.00 338592.00 23429.17 257501.96 00:30:06.530 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:06.530 Verification LBA range: start 0x0 length 0x400 00:30:06.530 Nvme3n1 : 1.12 232.16 14.51 0.00 0.00 252924.47 6651.04 261005.39 00:30:06.530 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:06.530 Verification LBA range: start 0x0 length 0x400 00:30:06.530 Nvme4n1 : 1.11 230.12 14.38 0.00 0.00 260115.42 16203.35 252246.82 00:30:06.530 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:06.530 Verification LBA range: start 0x0 length 0x400 00:30:06.530 Nvme5n1 : 1.11 231.07 14.44 0.00 0.00 254688.57 19706.78 246991.67 00:30:06.530 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:06.530 Verification LBA range: start 0x0 length 0x400 00:30:06.530 Nvme6n1 : 1.13 226.85 14.18 0.00 0.00 255041.60 18064.55 238233.10 00:30:06.530 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:06.530 Verification LBA range: start 0x0 length 0x400 00:30:06.530 Nvme7n1 : 1.13 284.13 17.76 0.00 0.00 199445.90 8211.16 238233.10 00:30:06.530 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:06.530 Verification LBA range: start 0x0 length 0x400 00:30:06.530 Nvme8n1 : 1.18 271.06 16.94 0.00 0.00 206402.29 11769.33 248743.39 00:30:06.530 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:06.530 Verification 
LBA range: start 0x0 length 0x400 00:30:06.530 Nvme9n1 : 1.19 268.73 16.80 0.00 0.00 204689.75 9634.43 269763.96 00:30:06.530 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:06.530 Verification LBA range: start 0x0 length 0x400 00:30:06.530 Nvme10n1 : 1.20 267.21 16.70 0.00 0.00 202218.53 8484.86 271515.67 00:30:06.530 [2024-10-09T09:10:26.532Z] =================================================================================================================== 00:30:06.530 [2024-10-09T09:10:26.532Z] Total : 2423.84 151.49 0.00 0.00 239011.69 6651.04 271515.67 00:30:06.530 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:30:06.530 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:30:06.530 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:30:06.530 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:06.530 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:30:06.530 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@514 -- # nvmfcleanup 00:30:06.530 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:30:06.530 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:06.530 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:30:06.530 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:06.530 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:06.530 rmmod nvme_tcp 00:30:06.530 rmmod nvme_fabrics 00:30:06.530 rmmod nvme_keyring 00:30:06.530 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:06.530 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:30:06.530 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:30:06.530 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@515 -- # '[' -n 1993645 ']' 00:30:06.530 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # killprocess 1993645 00:30:06.530 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # '[' -z 1993645 ']' 00:30:06.531 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # kill -0 1993645 00:30:06.531 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # uname 00:30:06.531 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:06.531 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1993645 00:30:06.791 11:10:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:30:06.791 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:30:06.791 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1993645' 00:30:06.791 killing process with pid 1993645 00:30:06.791 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@969 -- # kill 1993645 00:30:06.791 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@974 -- # wait 1993645 00:30:07.052 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:30:07.052 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:30:07.052 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:30:07.052 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:30:07.052 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@789 -- # iptables-restore 00:30:07.052 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@789 -- # iptables-save 00:30:07.052 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:30:07.052 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:07.052 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:07.052 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:07.052 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:07.052 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:08.964 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:08.964 00:30:08.964 real 0m16.938s 00:30:08.964 user 0m34.738s 00:30:08.964 sys 0m6.664s 00:30:08.964 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:08.964 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:08.964 ************************************ 00:30:08.964 END TEST nvmf_shutdown_tc1 00:30:08.964 ************************************ 00:30:08.964 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:30:08.964 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:08.964 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:08.964 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:09.225 ************************************ 00:30:09.225 START TEST nvmf_shutdown_tc2 00:30:09.225 
************************************ 00:30:09.225 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc2 00:30:09.225 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:30:09.225 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:30:09.225 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:30:09.225 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:09.225 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # prepare_net_devs 00:30:09.225 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:30:09.225 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:30:09.225 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:09.225 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:09.225 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:09.225 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:30:09.225 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:30:09.225 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:30:09.225 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:09.225 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:09.225 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:30:09.225 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:09.225 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:09.225 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:09.225 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:09.226 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:09.226 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:30:09.226 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:09.226 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:30:09.226 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:30:09.226 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:30:09.226 11:10:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:30:09.226 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:30:09.226 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:30:09.226 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:09.226 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:09.226 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:09.226 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:09.226 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:09.226 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:09.226 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:09.226 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:09.226 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:09.226 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:09.226 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:09.226 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:09.226 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:09.226 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:09.226 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:09.226 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:09.226 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:09.226 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:09.226 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:09.226 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:09.226 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:09.226 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:09.226 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:09.226 11:10:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:09.226 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:09.226 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:09.226 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:09.226 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:09.226 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:09.226 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:09.226 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:09.226 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:09.226 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:09.226 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:09.226 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:09.226 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:09.226 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:09.226 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:09.226 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:09.226 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:09.226 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:09.226 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:09.226 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:09.226 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:09.226 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:09.226 Found net devices under 0000:31:00.0: cvl_0_0 00:30:09.226 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:09.226 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:09.226 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:09.226 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:09.226 11:10:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:09.226 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:09.226 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:09.226 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:09.226 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:09.226 Found net devices under 0000:31:00.1: cvl_0_1 00:30:09.226 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:09.226 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:30:09.226 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # is_hw=yes 00:30:09.226 11:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:30:09.226 11:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:30:09.226 11:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:30:09.226 11:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:09.226 11:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:09.226 11:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:09.226 11:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:09.226 11:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:09.226 11:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:09.226 11:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:09.226 11:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:09.226 11:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:09.226 11:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:09.226 11:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:09.226 11:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:09.226 11:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:09.226 11:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:09.226 11:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set 
cvl_0_0 netns cvl_0_0_ns_spdk 00:30:09.226 11:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:09.226 11:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:09.226 11:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:09.226 11:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:09.488 11:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:09.488 11:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:09.488 11:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:09.488 11:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:09.488 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:09.488 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.507 ms 00:30:09.488 00:30:09.488 --- 10.0.0.2 ping statistics --- 00:30:09.488 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:09.488 rtt min/avg/max/mdev = 0.507/0.507/0.507/0.000 ms 00:30:09.488 11:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:09.488 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:09.488 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:30:09.488 00:30:09.488 --- 10.0.0.1 ping statistics --- 00:30:09.488 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:09.488 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:30:09.488 11:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:09.488 11:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # return 0 00:30:09.488 11:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:30:09.488 11:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:09.488 11:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:30:09.488 11:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:30:09.488 11:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:09.488 11:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:30:09.488 11:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:30:09.488 11:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:30:09.488 11:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:30:09.488 11:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:09.488 11:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:09.488 11:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:30:09.488 11:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # nvmfpid=1995789 00:30:09.488 11:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # waitforlisten 1995789 00:30:09.488 11:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1995789 ']' 00:30:09.488 11:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:09.488 11:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:09.488 11:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:09.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
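[annotation] The connectivity checks above complete the nvmf_tcp_init topology: the first port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2, while cvl_0_1 stays in the default namespace as 10.0.0.1, with an iptables accept rule opening the NVMe/TCP port. Condensed from the trace, the setup is roughly:

    # Target-side network setup, condensed from the nvmf_tcp_init trace above.
    # Interface names cvl_0_0/cvl_0_1 come from the PCI probe earlier in the log.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment SPDK_NVMF
    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator

Both pings succeeding (0.507 ms and 0.279 ms above) is what lets nvmf_tcp_init return 0 and the test proceed.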
00:30:09.488 11:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:09.488 11:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:09.488 [2024-10-09 11:10:29.388791] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:30:09.488 [2024-10-09 11:10:29.388837] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:09.749 [2024-10-09 11:10:29.520115] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:09.749 [2024-10-09 11:10:29.566502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:09.749 [2024-10-09 11:10:29.589834] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:09.749 [2024-10-09 11:10:29.589872] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:09.749 [2024-10-09 11:10:29.589878] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:09.749 [2024-10-09 11:10:29.589883] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:09.749 [2024-10-09 11:10:29.589887] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:09.749 [2024-10-09 11:10:29.591762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:09.749 [2024-10-09 11:10:29.591926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:09.749 [2024-10-09 11:10:29.592088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:09.749 [2024-10-09 11:10:29.592090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:10.320 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:10.320 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:30:10.320 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:30:10.320 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:10.320 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:10.320 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:10.320 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:10.320 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:10.320 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:10.320 [2024-10-09 11:10:30.263842] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:10.320 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
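[annotation] With networking in place, nvmfappstart launches the target inside the namespace on core mask 0x1E, which is why four reactors come up on cores 1-4 above, then waits for the RPC socket and creates the TCP transport. A minimal equivalent, assuming a stock SPDK tree (waitforlisten in the real scripts polls the UNIX socket; framework_wait_init is used here as an equivalent readiness check):

    # Start the target in the namespace, wait for its RPC socket, create the
    # TCP transport with the options shown in the trace (-o, -u 8192).
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!
    ./scripts/rpc.py -s /var/tmp/spdk.sock framework_wait_init
    ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o -u 8192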
00:30:10.320 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:30:10.320 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:30:10.320 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:10.320 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:10.320 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:10.320 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:10.320 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:30:10.320 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:10.320 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:30:10.320 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:10.320 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:30:10.320 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:10.320 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:30:10.320 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:10.320 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:30:10.320 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:10.320 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:30:10.320 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:10.320 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:30:10.320 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:10.320 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:30:10.320 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:10.320 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:30:10.581 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:10.581 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:30:10.581 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:30:10.581 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:30:10.581 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:10.581 Malloc1 00:30:10.581 [2024-10-09 11:10:30.379045] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:10.581 Malloc2 00:30:10.581 Malloc3 00:30:10.581 Malloc4 00:30:10.581 Malloc5 00:30:10.581 Malloc6 00:30:10.843 Malloc7 00:30:10.843 Malloc8 00:30:10.843 Malloc9 00:30:10.843 Malloc10 00:30:10.843 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:10.843 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:30:10.843 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:10.843 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:10.843 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=1996005 00:30:10.843 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 1996005 /var/tmp/bdevperf.sock 00:30:10.843 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1996005 ']' 00:30:10.843 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:10.843 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:10.843 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:10.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
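[annotation] The ten Malloc bdevs and the listener notice above come from create_subsystems: the `for i in "${num_subsystems[@]}"` loop appends one RPC block per subsystem to rpcs.txt and the batch is then replayed against the target. A likely shape of each block; the 64 MiB / 512 B malloc geometry is an assumption, not shown in this trace:

    # One block per subsystem; cnode$i is exported on 10.0.0.2:4420 backed by
    # Malloc$i. Sizes are illustrative only.
    for i in {1..10}; do
        cat <<EOF
    bdev_malloc_create -b Malloc$i 64 512
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
    EOF
    done >> rpcs.txt
    # rpc.py executes one command per stdin line when given no subcommand.
    ./scripts/rpc.py -s /var/tmp/spdk.sock < rpcs.txt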
00:30:10.843 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:30:10.843 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:10.843 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:30:10.843 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:10.843 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # config=() 00:30:10.843 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # local subsystem config 00:30:10.843 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:10.843 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:10.843 { 00:30:10.843 "params": { 00:30:10.843 "name": "Nvme$subsystem", 00:30:10.843 "trtype": "$TEST_TRANSPORT", 00:30:10.843 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:10.843 "adrfam": "ipv4", 00:30:10.843 "trsvcid": "$NVMF_PORT", 00:30:10.843 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:10.843 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:10.843 "hdgst": ${hdgst:-false}, 00:30:10.843 "ddgst": ${ddgst:-false} 00:30:10.843 }, 00:30:10.843 "method": "bdev_nvme_attach_controller" 00:30:10.843 } 00:30:10.843 EOF 00:30:10.843 )") 00:30:10.843 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:30:10.843 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:10.843 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:10.843 { 00:30:10.843 "params": { 00:30:10.843 "name": "Nvme$subsystem", 00:30:10.843 "trtype": "$TEST_TRANSPORT", 00:30:10.843 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:10.843 "adrfam": "ipv4", 00:30:10.843 "trsvcid": "$NVMF_PORT", 00:30:10.843 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:10.843 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:10.843 "hdgst": ${hdgst:-false}, 00:30:10.843 "ddgst": ${ddgst:-false} 00:30:10.843 }, 00:30:10.843 "method": "bdev_nvme_attach_controller" 00:30:10.843 } 00:30:10.843 EOF 00:30:10.843 )") 00:30:10.843 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:30:10.843 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:10.843 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:10.843 { 00:30:10.843 "params": { 00:30:10.843 "name": "Nvme$subsystem", 00:30:10.843 "trtype": "$TEST_TRANSPORT", 00:30:10.843 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:10.843 "adrfam": "ipv4", 00:30:10.843 "trsvcid": "$NVMF_PORT", 00:30:10.843 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:10.843 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:10.843 "hdgst": ${hdgst:-false}, 00:30:10.843 "ddgst": ${ddgst:-false} 00:30:10.843 }, 00:30:10.843 "method": 
"bdev_nvme_attach_controller" 00:30:10.843 } 00:30:10.843 EOF 00:30:10.843 )") 00:30:10.843 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:30:10.843 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:10.843 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:10.843 { 00:30:10.843 "params": { 00:30:10.843 "name": "Nvme$subsystem", 00:30:10.843 "trtype": "$TEST_TRANSPORT", 00:30:10.843 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:10.843 "adrfam": "ipv4", 00:30:10.843 "trsvcid": "$NVMF_PORT", 00:30:10.843 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:10.843 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:10.843 "hdgst": ${hdgst:-false}, 00:30:10.843 "ddgst": ${ddgst:-false} 00:30:10.843 }, 00:30:10.843 "method": "bdev_nvme_attach_controller" 00:30:10.843 } 00:30:10.843 EOF 00:30:10.843 )") 00:30:10.843 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:30:10.843 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:10.843 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:10.843 { 00:30:10.843 "params": { 00:30:10.843 "name": "Nvme$subsystem", 00:30:10.843 "trtype": "$TEST_TRANSPORT", 00:30:10.843 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:10.843 "adrfam": "ipv4", 00:30:10.843 "trsvcid": "$NVMF_PORT", 00:30:10.843 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:10.843 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:10.843 "hdgst": ${hdgst:-false}, 00:30:10.843 "ddgst": ${ddgst:-false} 00:30:10.843 }, 00:30:10.843 "method": "bdev_nvme_attach_controller" 00:30:10.843 } 00:30:10.843 EOF 00:30:10.843 )") 00:30:10.843 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:30:10.843 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:10.843 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:10.843 { 00:30:10.843 "params": { 00:30:10.843 "name": "Nvme$subsystem", 00:30:10.843 "trtype": "$TEST_TRANSPORT", 00:30:10.843 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:10.843 "adrfam": "ipv4", 00:30:10.843 "trsvcid": "$NVMF_PORT", 00:30:10.843 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:10.843 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:10.843 "hdgst": ${hdgst:-false}, 00:30:10.843 "ddgst": ${ddgst:-false} 00:30:10.843 }, 00:30:10.843 "method": "bdev_nvme_attach_controller" 00:30:10.843 } 00:30:10.843 EOF 00:30:10.843 )") 00:30:10.843 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:30:10.843 [2024-10-09 11:10:30.823899] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 
00:30:10.843 [2024-10-09 11:10:30.823954] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1996005 ] 00:30:10.843 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:10.843 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:10.843 { 00:30:10.843 "params": { 00:30:10.843 "name": "Nvme$subsystem", 00:30:10.843 "trtype": "$TEST_TRANSPORT", 00:30:10.843 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:10.843 "adrfam": "ipv4", 00:30:10.843 "trsvcid": "$NVMF_PORT", 00:30:10.843 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:10.843 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:10.843 "hdgst": ${hdgst:-false}, 00:30:10.843 "ddgst": ${ddgst:-false} 00:30:10.843 }, 00:30:10.843 "method": "bdev_nvme_attach_controller" 00:30:10.843 } 00:30:10.843 EOF 00:30:10.843 )") 00:30:10.843 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:30:10.843 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:10.843 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:10.843 { 00:30:10.843 "params": { 00:30:10.843 "name": "Nvme$subsystem", 00:30:10.843 "trtype": "$TEST_TRANSPORT", 00:30:10.843 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:10.843 "adrfam": "ipv4", 00:30:10.843 "trsvcid": "$NVMF_PORT", 00:30:10.843 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:10.843 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:10.843 "hdgst": ${hdgst:-false}, 00:30:10.843 "ddgst": ${ddgst:-false} 00:30:10.843 }, 00:30:10.843 "method": "bdev_nvme_attach_controller" 00:30:10.843 } 00:30:10.843 EOF 00:30:10.843 )") 00:30:10.843 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:30:10.843 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:10.843 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:10.843 { 00:30:10.843 "params": { 00:30:10.843 "name": "Nvme$subsystem", 00:30:10.843 "trtype": "$TEST_TRANSPORT", 00:30:10.843 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:10.843 "adrfam": "ipv4", 00:30:10.843 "trsvcid": "$NVMF_PORT", 00:30:10.843 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:10.843 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:10.844 "hdgst": ${hdgst:-false}, 00:30:10.844 "ddgst": ${ddgst:-false} 00:30:10.844 }, 00:30:10.844 "method": "bdev_nvme_attach_controller" 00:30:10.844 } 00:30:10.844 EOF 00:30:10.844 )") 00:30:10.844 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:30:11.105 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:11.105 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:11.105 { 00:30:11.105 "params": { 00:30:11.105 "name": "Nvme$subsystem", 00:30:11.105 "trtype": "$TEST_TRANSPORT", 00:30:11.105 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:11.105 
"adrfam": "ipv4", 00:30:11.105 "trsvcid": "$NVMF_PORT", 00:30:11.105 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:11.105 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:11.105 "hdgst": ${hdgst:-false}, 00:30:11.105 "ddgst": ${ddgst:-false} 00:30:11.105 }, 00:30:11.105 "method": "bdev_nvme_attach_controller" 00:30:11.105 } 00:30:11.105 EOF 00:30:11.105 )") 00:30:11.105 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:30:11.105 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # jq . 00:30:11.105 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@583 -- # IFS=, 00:30:11.105 11:10:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:30:11.105 "params": { 00:30:11.105 "name": "Nvme1", 00:30:11.105 "trtype": "tcp", 00:30:11.105 "traddr": "10.0.0.2", 00:30:11.105 "adrfam": "ipv4", 00:30:11.105 "trsvcid": "4420", 00:30:11.105 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:11.105 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:11.105 "hdgst": false, 00:30:11.105 "ddgst": false 00:30:11.105 }, 00:30:11.105 "method": "bdev_nvme_attach_controller" 00:30:11.105 },{ 00:30:11.105 "params": { 00:30:11.105 "name": "Nvme2", 00:30:11.105 "trtype": "tcp", 00:30:11.105 "traddr": "10.0.0.2", 00:30:11.105 "adrfam": "ipv4", 00:30:11.105 "trsvcid": "4420", 00:30:11.105 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:11.105 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:11.105 "hdgst": false, 00:30:11.105 "ddgst": false 00:30:11.105 }, 00:30:11.105 "method": "bdev_nvme_attach_controller" 00:30:11.105 },{ 00:30:11.105 "params": { 00:30:11.105 "name": "Nvme3", 00:30:11.105 "trtype": "tcp", 00:30:11.105 "traddr": "10.0.0.2", 00:30:11.105 "adrfam": "ipv4", 00:30:11.105 "trsvcid": "4420", 00:30:11.105 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:30:11.105 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:30:11.105 "hdgst": false, 00:30:11.105 "ddgst": false 00:30:11.105 }, 00:30:11.105 "method": "bdev_nvme_attach_controller" 00:30:11.105 },{ 00:30:11.105 "params": { 00:30:11.105 "name": "Nvme4", 00:30:11.105 "trtype": "tcp", 00:30:11.105 "traddr": "10.0.0.2", 00:30:11.105 "adrfam": "ipv4", 00:30:11.105 "trsvcid": "4420", 00:30:11.105 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:30:11.105 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:30:11.105 "hdgst": false, 00:30:11.105 "ddgst": false 00:30:11.105 }, 00:30:11.105 "method": "bdev_nvme_attach_controller" 00:30:11.105 },{ 00:30:11.105 "params": { 00:30:11.105 "name": "Nvme5", 00:30:11.105 "trtype": "tcp", 00:30:11.105 "traddr": "10.0.0.2", 00:30:11.105 "adrfam": "ipv4", 00:30:11.105 "trsvcid": "4420", 00:30:11.105 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:30:11.105 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:30:11.105 "hdgst": false, 00:30:11.105 "ddgst": false 00:30:11.105 }, 00:30:11.105 "method": "bdev_nvme_attach_controller" 00:30:11.105 },{ 00:30:11.105 "params": { 00:30:11.105 "name": "Nvme6", 00:30:11.105 "trtype": "tcp", 00:30:11.105 "traddr": "10.0.0.2", 00:30:11.105 "adrfam": "ipv4", 00:30:11.105 "trsvcid": "4420", 00:30:11.105 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:30:11.105 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:30:11.105 "hdgst": false, 00:30:11.105 "ddgst": false 00:30:11.105 }, 00:30:11.105 "method": "bdev_nvme_attach_controller" 00:30:11.105 },{ 00:30:11.105 "params": { 00:30:11.105 "name": "Nvme7", 00:30:11.105 "trtype": "tcp", 00:30:11.105 "traddr": "10.0.0.2", 
00:30:11.105 "adrfam": "ipv4", 00:30:11.105 "trsvcid": "4420", 00:30:11.105 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:30:11.105 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:30:11.105 "hdgst": false, 00:30:11.105 "ddgst": false 00:30:11.105 }, 00:30:11.105 "method": "bdev_nvme_attach_controller" 00:30:11.105 },{ 00:30:11.105 "params": { 00:30:11.105 "name": "Nvme8", 00:30:11.105 "trtype": "tcp", 00:30:11.105 "traddr": "10.0.0.2", 00:30:11.105 "adrfam": "ipv4", 00:30:11.105 "trsvcid": "4420", 00:30:11.105 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:30:11.105 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:30:11.105 "hdgst": false, 00:30:11.105 "ddgst": false 00:30:11.105 }, 00:30:11.105 "method": "bdev_nvme_attach_controller" 00:30:11.105 },{ 00:30:11.105 "params": { 00:30:11.105 "name": "Nvme9", 00:30:11.105 "trtype": "tcp", 00:30:11.105 "traddr": "10.0.0.2", 00:30:11.105 "adrfam": "ipv4", 00:30:11.105 "trsvcid": "4420", 00:30:11.105 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:30:11.105 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:30:11.105 "hdgst": false, 00:30:11.105 "ddgst": false 00:30:11.105 }, 00:30:11.105 "method": "bdev_nvme_attach_controller" 00:30:11.105 },{ 00:30:11.105 "params": { 00:30:11.105 "name": "Nvme10", 00:30:11.105 "trtype": "tcp", 00:30:11.105 "traddr": "10.0.0.2", 00:30:11.105 "adrfam": "ipv4", 00:30:11.105 "trsvcid": "4420", 00:30:11.105 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:30:11.105 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:30:11.105 "hdgst": false, 00:30:11.105 "ddgst": false 00:30:11.105 }, 00:30:11.105 "method": "bdev_nvme_attach_controller" 00:30:11.105 }' 00:30:11.105 [2024-10-09 11:10:30.954988] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:11.105 [2024-10-09 11:10:30.986663] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:11.105 [2024-10-09 11:10:31.004962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:12.487 Running I/O for 10 seconds... 
00:30:12.487 11:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:12.487 11:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:30:12.487 11:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:30:12.487 11:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:12.487 11:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:12.747 11:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:12.747 11:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:30:12.747 11:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:30:12.747 11:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:30:12.747 11:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:30:12.747 11:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:30:12.747 11:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:30:12.747 11:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:30:12.747 11:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:30:12.747 11:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:30:12.747 11:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:12.747 11:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:12.747 11:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:12.747 11:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:30:12.747 11:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:30:12.747 11:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:30:13.007 11:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:30:13.007 11:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:30:13.007 11:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:30:13.007 11:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:30:13.007 11:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:13.007 11:10:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:13.007 11:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:13.007 11:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:30:13.007 11:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:30:13.007 11:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:30:13.267 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:30:13.267 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:30:13.267 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:30:13.267 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:30:13.267 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:13.267 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:13.528 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:13.528 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:30:13.528 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:30:13.528 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:30:13.528 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:30:13.528 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:30:13.528 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 1996005 00:30:13.528 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 1996005 ']' 00:30:13.528 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 1996005 00:30:13.528 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:30:13.528 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:13.528 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1996005 00:30:13.528 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:13.528 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:13.528 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1996005' 00:30:13.528 killing process with pid 1996005 00:30:13.528 11:10:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 1996005
00:30:13.528 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 1996005
00:30:13.528 Received shutdown signal, test time was about 0.979205 seconds
00:30:13.528
00:30:13.528 Latency(us)
00:30:13.528 [2024-10-09T09:10:33.530Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:13.528 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:13.528 Verification LBA range: start 0x0 length 0x400
00:30:13.528 Nvme1n1 : 0.96 200.95 12.56 0.00 0.00 314648.49 24195.55 257501.96
00:30:13.528 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:13.528 Verification LBA range: start 0x0 length 0x400
00:30:13.528 Nvme2n1 : 0.98 262.41 16.40 0.00 0.00 236093.71 17517.14 229474.53
00:30:13.528 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:13.528 Verification LBA range: start 0x0 length 0x400
00:30:13.528 Nvme3n1 : 0.98 261.68 16.35 0.00 0.00 231966.11 14451.64 290784.52
00:30:13.528 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:13.528 Verification LBA range: start 0x0 length 0x400
00:30:13.528 Nvme4n1 : 0.96 266.91 16.68 0.00 0.00 222320.78 11276.66 252246.82
00:30:13.528 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:13.528 Verification LBA range: start 0x0 length 0x400
00:30:13.528 Nvme5n1 : 0.95 203.12 12.69 0.00 0.00 285409.92 27480.01 248743.39
00:30:13.528 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:13.528 Verification LBA range: start 0x0 length 0x400
00:30:13.528 Nvme6n1 : 0.95 202.43 12.65 0.00 0.00 279719.13 15327.50 248743.39
00:30:13.528 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:13.528 Verification LBA range: start 0x0 length 0x400
00:30:13.528 Nvme7n1 : 0.97 264.74 16.55 0.00 0.00 209455.98 19487.82 243488.25
00:30:13.528 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:13.528 Verification LBA range: start 0x0 length 0x400
00:30:13.528 Nvme8n1 : 0.97 264.48 16.53 0.00 0.00 204887.78 13192.60 262757.10
00:30:13.528 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:13.528 Verification LBA range: start 0x0 length 0x400
00:30:13.528 Nvme9n1 : 0.97 268.28 16.77 0.00 0.00 197272.03 2381.24 210205.68
00:30:13.528 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:13.528 Verification LBA range: start 0x0 length 0x400
00:30:13.528 Nvme10n1 : 0.96 199.72 12.48 0.00 0.00 258481.02 22772.28 264508.81
00:30:13.528 [2024-10-09T09:10:33.530Z] ===================================================================================================================
00:30:13.528 [2024-10-09T09:10:33.530Z] Total : 2394.72 149.67 0.00 0.00 239429.66 2381.24 290784.52
00:30:13.788 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1
00:30:14.729 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 1995789
00:30:14.729 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget
00:30:14.729 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:30:14.729 11:10:34
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:30:14.729 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:14.729 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:30:14.729 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@514 -- # nvmfcleanup 00:30:14.729 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:30:14.729 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:14.729 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:30:14.729 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:14.729 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:14.729 rmmod nvme_tcp 00:30:14.729 rmmod nvme_fabrics 00:30:14.729 rmmod nvme_keyring 00:30:14.729 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:14.729 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:30:14.729 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:30:14.729 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@515 -- # '[' -n 1995789 ']' 00:30:14.729 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # killprocess 1995789 00:30:14.729 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 1995789 ']' 00:30:14.729 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 1995789 00:30:14.729 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:30:14.729 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:14.729 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1995789 00:30:14.729 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:30:14.729 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:30:14.729 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1995789' 00:30:14.729 killing process with pid 1995789 00:30:14.729 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 1995789 00:30:14.729 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 1995789 00:30:14.989 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:30:14.990 11:10:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:30:14.990 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:30:14.990 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:30:14.990 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@789 -- # iptables-save 00:30:14.990 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:30:14.990 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@789 -- # iptables-restore 00:30:14.990 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:14.990 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:14.990 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:14.990 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:14.990 11:10:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:17.533 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:17.533 00:30:17.533 real 0m8.023s 00:30:17.533 user 0m24.122s 00:30:17.533 sys 0m1.257s 00:30:17.533 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:17.533 11:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:17.533 ************************************ 00:30:17.533 END TEST nvmf_shutdown_tc2 00:30:17.533 ************************************ 00:30:17.533 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:30:17.533 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:17.533 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:17.533 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:17.533 ************************************ 00:30:17.533 START TEST nvmf_shutdown_tc3 00:30:17.533 ************************************ 00:30:17.533 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc3 00:30:17.533 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:30:17.533 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:30:17.533 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:30:17.533 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:17.533 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # prepare_net_devs 00:30:17.533 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@436 -- # local -g is_hw=no 00:30:17.533 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:30:17.533 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:17.533 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:17.533 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:17.533 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:30:17.533 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:30:17.533 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:30:17.533 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:17.533 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:17.533 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:30:17.533 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:17.533 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:17.533 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:17.533 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:17.533 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:17.533 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:30:17.533 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:17.533 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:30:17.533 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:30:17.533 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:30:17.533 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:30:17.533 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:30:17.533 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:30:17.533 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:17.533 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:17.533 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:17.533 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:17.533 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:17.533 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:17.533 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:17.533 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:17.533 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:17.533 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:17.533 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:17.533 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:17.533 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:17.533 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:17.533 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:17.534 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:17.534 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:17.534 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:17.534 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:17.534 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:17.534 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:17.534 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:17.534 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:17.534 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:17.534 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:17.534 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:17.534 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:17.534 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:17.534 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:17.534 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:17.534 11:10:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:17.534 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:17.534 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:17.534 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:17.534 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:17.534 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:17.534 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:17.534 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:17.534 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:17.534 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:17.534 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:17.534 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:17.534 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:17.534 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:17.534 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:17.534 Found net devices under 0000:31:00.0: cvl_0_0 00:30:17.534 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:17.534 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:17.534 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:17.534 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:17.534 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:17.534 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:17.534 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:17.534 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:17.534 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:17.534 Found net devices under 0000:31:00.1: cvl_0_1 00:30:17.534 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:17.534 11:10:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:30:17.534 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # is_hw=yes 00:30:17.534 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:30:17.534 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:30:17.534 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:30:17.534 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:17.534 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:17.534 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:17.534 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:17.534 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:17.534 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:17.534 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:17.534 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:17.534 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:17.534 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:17.534 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:17.534 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:17.534 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:17.534 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:17.534 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:17.534 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:17.534 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:17.534 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:17.534 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:17.534 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:17.534 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # 
ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:30:17.534 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:30:17.534 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:30:17.534 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:30:17.534 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.612 ms
00:30:17.534
00:30:17.534 --- 10.0.0.2 ping statistics ---
00:30:17.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:30:17.534 rtt min/avg/max/mdev = 0.612/0.612/0.612/0.000 ms
00:30:17.534 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:30:17.534 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:30:17.534 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.317 ms
00:30:17.534
00:30:17.534 --- 10.0.0.1 ping statistics ---
00:30:17.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:30:17.534 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms
00:30:17.534 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:30:17.534 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # return 0
00:30:17.534 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:30:17.534 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:30:17.534 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:30:17.534 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:30:17.534 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:30:17.534 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:30:17.534 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:30:17.534 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E
00:30:17.534 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:30:17.534 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable
00:30:17.534 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:30:17.534 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # nvmfpid=1997360
00:30:17.534 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # waitforlisten 1997360
00:30:17.534 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:30:17.534 11:10:37
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 1997360 ']' 00:30:17.534 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:17.534 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:17.534 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:17.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:17.534 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:17.535 11:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:17.535 [2024-10-09 11:10:37.527116] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:30:17.535 [2024-10-09 11:10:37.527179] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:17.795 [2024-10-09 11:10:37.668921] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:17.795 [2024-10-09 11:10:37.716500] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:17.795 [2024-10-09 11:10:37.740308] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:17.795 [2024-10-09 11:10:37.740349] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:17.795 [2024-10-09 11:10:37.740355] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:17.795 [2024-10-09 11:10:37.740361] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:17.795 [2024-10-09 11:10:37.740366] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
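The startup sequence traced above is nvmfappstart: nvmf_tgt is launched inside the cvl_0_0_ns_spdk namespace (hence the accumulated ip netns exec prefixes in NVMF_APP), nvmfpid is recorded, and waitforlisten blocks until the target answers on its RPC socket. A minimal sketch of that launch-and-wait pattern, simplified from the real helpers in nvmf/common.sh and autotest_common.sh (the paths and retry counts here are illustrative, not the exact upstream values):

# Start the target inside the prepared network namespace and capture its pid.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!

# Poll until the app is still alive AND answers on its UNIX-domain RPC socket.
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1    # app died during startup
        if ./scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &>/dev/null; then
            return 0                              # socket is up and answering
        fi
        sleep 0.5
    done
    return 1                                      # timed out waiting for the socket
}
waitforlisten "$nvmfpid"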
00:30:17.795 [2024-10-09 11:10:37.742252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:17.795 [2024-10-09 11:10:37.742414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:17.795 [2024-10-09 11:10:37.742575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:17.795 [2024-10-09 11:10:37.742577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:18.366 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:18.366 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:30:18.366 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:30:18.366 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:18.366 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:18.625 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:18.625 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:18.625 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:18.625 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:18.626 [2024-10-09 11:10:38.388048] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:18.626 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:18.626 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:30:18.626 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:30:18.626 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:18.626 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:18.626 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:18.626 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:18.626 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:18.626 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:18.626 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:18.626 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:18.626 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:18.626 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:30:18.626 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:18.626 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:18.626 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:18.626 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:18.626 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:18.626 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:18.626 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:18.626 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:18.626 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:18.626 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:18.626 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:18.626 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:18.626 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:18.626 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:30:18.626 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:18.626 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:18.626 Malloc1 00:30:18.626 [2024-10-09 11:10:38.496391] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:18.626 Malloc2 00:30:18.626 Malloc3 00:30:18.626 Malloc4 00:30:18.626 Malloc5 00:30:18.886 Malloc6 00:30:18.886 Malloc7 00:30:18.886 Malloc8 00:30:18.886 Malloc9 00:30:18.886 Malloc10 00:30:18.886 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:18.886 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:30:18.886 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:18.886 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:19.147 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=1997743 00:30:19.147 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 1997743 /var/tmp/bdevperf.sock 00:30:19.147 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 1997743 ']' 00:30:19.147 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:19.147 11:10:38 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:19.147 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:19.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:19.147 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:30:19.147 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:19.147 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:30:19.147 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:19.147 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # config=() 00:30:19.147 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # local subsystem config 00:30:19.147 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:19.147 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:19.147 { 00:30:19.147 "params": { 00:30:19.147 "name": "Nvme$subsystem", 00:30:19.147 "trtype": "$TEST_TRANSPORT", 00:30:19.147 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:19.147 "adrfam": "ipv4", 00:30:19.147 "trsvcid": "$NVMF_PORT", 00:30:19.147 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:19.147 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:19.147 "hdgst": ${hdgst:-false}, 00:30:19.147 "ddgst": ${ddgst:-false} 00:30:19.147 }, 00:30:19.147 "method": "bdev_nvme_attach_controller" 00:30:19.147 } 00:30:19.147 EOF 00:30:19.147 )") 00:30:19.147 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:30:19.147 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:19.147 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:19.147 { 00:30:19.147 "params": { 00:30:19.147 "name": "Nvme$subsystem", 00:30:19.147 "trtype": "$TEST_TRANSPORT", 00:30:19.147 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:19.147 "adrfam": "ipv4", 00:30:19.147 "trsvcid": "$NVMF_PORT", 00:30:19.147 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:19.147 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:19.147 "hdgst": ${hdgst:-false}, 00:30:19.147 "ddgst": ${ddgst:-false} 00:30:19.147 }, 00:30:19.147 "method": "bdev_nvme_attach_controller" 00:30:19.147 } 00:30:19.147 EOF 00:30:19.147 )") 00:30:19.147 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:30:19.147 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:19.147 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:19.148 { 00:30:19.148 "params": { 00:30:19.148 
"name": "Nvme$subsystem", 00:30:19.148 "trtype": "$TEST_TRANSPORT", 00:30:19.148 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:19.148 "adrfam": "ipv4", 00:30:19.148 "trsvcid": "$NVMF_PORT", 00:30:19.148 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:19.148 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:19.148 "hdgst": ${hdgst:-false}, 00:30:19.148 "ddgst": ${ddgst:-false} 00:30:19.148 }, 00:30:19.148 "method": "bdev_nvme_attach_controller" 00:30:19.148 } 00:30:19.148 EOF 00:30:19.148 )") 00:30:19.148 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:30:19.148 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:19.148 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:19.148 { 00:30:19.148 "params": { 00:30:19.148 "name": "Nvme$subsystem", 00:30:19.148 "trtype": "$TEST_TRANSPORT", 00:30:19.148 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:19.148 "adrfam": "ipv4", 00:30:19.148 "trsvcid": "$NVMF_PORT", 00:30:19.148 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:19.148 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:19.148 "hdgst": ${hdgst:-false}, 00:30:19.148 "ddgst": ${ddgst:-false} 00:30:19.148 }, 00:30:19.148 "method": "bdev_nvme_attach_controller" 00:30:19.148 } 00:30:19.148 EOF 00:30:19.148 )") 00:30:19.148 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:30:19.148 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:19.148 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:19.148 { 00:30:19.148 "params": { 00:30:19.148 "name": "Nvme$subsystem", 00:30:19.148 "trtype": "$TEST_TRANSPORT", 00:30:19.148 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:19.148 "adrfam": "ipv4", 00:30:19.148 "trsvcid": "$NVMF_PORT", 00:30:19.148 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:19.148 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:19.148 "hdgst": ${hdgst:-false}, 00:30:19.148 "ddgst": ${ddgst:-false} 00:30:19.148 }, 00:30:19.148 "method": "bdev_nvme_attach_controller" 00:30:19.148 } 00:30:19.148 EOF 00:30:19.148 )") 00:30:19.148 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:30:19.148 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:19.148 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:19.148 { 00:30:19.148 "params": { 00:30:19.148 "name": "Nvme$subsystem", 00:30:19.148 "trtype": "$TEST_TRANSPORT", 00:30:19.148 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:19.148 "adrfam": "ipv4", 00:30:19.148 "trsvcid": "$NVMF_PORT", 00:30:19.148 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:19.148 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:19.148 "hdgst": ${hdgst:-false}, 00:30:19.148 "ddgst": ${ddgst:-false} 00:30:19.148 }, 00:30:19.148 "method": "bdev_nvme_attach_controller" 00:30:19.148 } 00:30:19.148 EOF 00:30:19.148 )") 00:30:19.148 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:30:19.148 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in 
"${@:-1}" 00:30:19.148 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:19.148 { 00:30:19.148 "params": { 00:30:19.148 "name": "Nvme$subsystem", 00:30:19.148 "trtype": "$TEST_TRANSPORT", 00:30:19.148 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:19.148 "adrfam": "ipv4", 00:30:19.148 "trsvcid": "$NVMF_PORT", 00:30:19.148 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:19.148 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:19.148 "hdgst": ${hdgst:-false}, 00:30:19.148 "ddgst": ${ddgst:-false} 00:30:19.148 }, 00:30:19.148 "method": "bdev_nvme_attach_controller" 00:30:19.148 } 00:30:19.148 EOF 00:30:19.148 )") 00:30:19.148 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:30:19.148 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:19.148 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:19.148 { 00:30:19.148 "params": { 00:30:19.148 "name": "Nvme$subsystem", 00:30:19.148 "trtype": "$TEST_TRANSPORT", 00:30:19.148 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:19.148 "adrfam": "ipv4", 00:30:19.148 "trsvcid": "$NVMF_PORT", 00:30:19.148 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:19.148 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:19.148 "hdgst": ${hdgst:-false}, 00:30:19.148 "ddgst": ${ddgst:-false} 00:30:19.148 }, 00:30:19.148 "method": "bdev_nvme_attach_controller" 00:30:19.148 } 00:30:19.148 EOF 00:30:19.148 )") 00:30:19.148 [2024-10-09 11:10:38.950634] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:30:19.148 [2024-10-09 11:10:38.950691] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1997743 ] 00:30:19.148 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:30:19.148 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:19.148 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:19.148 { 00:30:19.148 "params": { 00:30:19.148 "name": "Nvme$subsystem", 00:30:19.148 "trtype": "$TEST_TRANSPORT", 00:30:19.148 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:19.148 "adrfam": "ipv4", 00:30:19.148 "trsvcid": "$NVMF_PORT", 00:30:19.148 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:19.148 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:19.148 "hdgst": ${hdgst:-false}, 00:30:19.148 "ddgst": ${ddgst:-false} 00:30:19.148 }, 00:30:19.148 "method": "bdev_nvme_attach_controller" 00:30:19.148 } 00:30:19.148 EOF 00:30:19.148 )") 00:30:19.148 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:30:19.148 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:19.148 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:19.148 { 00:30:19.148 "params": { 00:30:19.148 "name": "Nvme$subsystem", 00:30:19.148 "trtype": "$TEST_TRANSPORT", 00:30:19.148 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:19.148 
"adrfam": "ipv4", 00:30:19.148 "trsvcid": "$NVMF_PORT", 00:30:19.148 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:19.148 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:19.148 "hdgst": ${hdgst:-false}, 00:30:19.148 "ddgst": ${ddgst:-false} 00:30:19.148 }, 00:30:19.148 "method": "bdev_nvme_attach_controller" 00:30:19.148 } 00:30:19.148 EOF 00:30:19.148 )") 00:30:19.148 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:30:19.148 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # jq . 00:30:19.148 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@583 -- # IFS=, 00:30:19.148 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:30:19.148 "params": { 00:30:19.148 "name": "Nvme1", 00:30:19.148 "trtype": "tcp", 00:30:19.148 "traddr": "10.0.0.2", 00:30:19.148 "adrfam": "ipv4", 00:30:19.148 "trsvcid": "4420", 00:30:19.148 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:19.148 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:19.148 "hdgst": false, 00:30:19.148 "ddgst": false 00:30:19.148 }, 00:30:19.148 "method": "bdev_nvme_attach_controller" 00:30:19.148 },{ 00:30:19.148 "params": { 00:30:19.148 "name": "Nvme2", 00:30:19.148 "trtype": "tcp", 00:30:19.148 "traddr": "10.0.0.2", 00:30:19.148 "adrfam": "ipv4", 00:30:19.148 "trsvcid": "4420", 00:30:19.148 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:19.148 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:19.148 "hdgst": false, 00:30:19.148 "ddgst": false 00:30:19.148 }, 00:30:19.148 "method": "bdev_nvme_attach_controller" 00:30:19.148 },{ 00:30:19.148 "params": { 00:30:19.148 "name": "Nvme3", 00:30:19.148 "trtype": "tcp", 00:30:19.148 "traddr": "10.0.0.2", 00:30:19.148 "adrfam": "ipv4", 00:30:19.148 "trsvcid": "4420", 00:30:19.148 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:30:19.148 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:30:19.148 "hdgst": false, 00:30:19.148 "ddgst": false 00:30:19.148 }, 00:30:19.148 "method": "bdev_nvme_attach_controller" 00:30:19.148 },{ 00:30:19.148 "params": { 00:30:19.148 "name": "Nvme4", 00:30:19.148 "trtype": "tcp", 00:30:19.148 "traddr": "10.0.0.2", 00:30:19.148 "adrfam": "ipv4", 00:30:19.148 "trsvcid": "4420", 00:30:19.148 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:30:19.148 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:30:19.148 "hdgst": false, 00:30:19.148 "ddgst": false 00:30:19.148 }, 00:30:19.148 "method": "bdev_nvme_attach_controller" 00:30:19.148 },{ 00:30:19.148 "params": { 00:30:19.148 "name": "Nvme5", 00:30:19.148 "trtype": "tcp", 00:30:19.148 "traddr": "10.0.0.2", 00:30:19.148 "adrfam": "ipv4", 00:30:19.148 "trsvcid": "4420", 00:30:19.148 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:30:19.148 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:30:19.148 "hdgst": false, 00:30:19.148 "ddgst": false 00:30:19.148 }, 00:30:19.148 "method": "bdev_nvme_attach_controller" 00:30:19.148 },{ 00:30:19.148 "params": { 00:30:19.148 "name": "Nvme6", 00:30:19.148 "trtype": "tcp", 00:30:19.148 "traddr": "10.0.0.2", 00:30:19.148 "adrfam": "ipv4", 00:30:19.148 "trsvcid": "4420", 00:30:19.148 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:30:19.148 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:30:19.148 "hdgst": false, 00:30:19.148 "ddgst": false 00:30:19.148 }, 00:30:19.148 "method": "bdev_nvme_attach_controller" 00:30:19.148 },{ 00:30:19.148 "params": { 00:30:19.148 "name": "Nvme7", 00:30:19.148 "trtype": "tcp", 00:30:19.148 "traddr": "10.0.0.2", 
00:30:19.148 "adrfam": "ipv4", 00:30:19.148 "trsvcid": "4420", 00:30:19.148 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:30:19.148 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:30:19.148 "hdgst": false, 00:30:19.148 "ddgst": false 00:30:19.148 }, 00:30:19.148 "method": "bdev_nvme_attach_controller" 00:30:19.148 },{ 00:30:19.148 "params": { 00:30:19.148 "name": "Nvme8", 00:30:19.148 "trtype": "tcp", 00:30:19.148 "traddr": "10.0.0.2", 00:30:19.148 "adrfam": "ipv4", 00:30:19.148 "trsvcid": "4420", 00:30:19.148 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:30:19.148 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:30:19.148 "hdgst": false, 00:30:19.148 "ddgst": false 00:30:19.148 }, 00:30:19.148 "method": "bdev_nvme_attach_controller" 00:30:19.148 },{ 00:30:19.148 "params": { 00:30:19.148 "name": "Nvme9", 00:30:19.148 "trtype": "tcp", 00:30:19.148 "traddr": "10.0.0.2", 00:30:19.148 "adrfam": "ipv4", 00:30:19.148 "trsvcid": "4420", 00:30:19.148 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:30:19.148 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:30:19.148 "hdgst": false, 00:30:19.148 "ddgst": false 00:30:19.148 }, 00:30:19.148 "method": "bdev_nvme_attach_controller" 00:30:19.148 },{ 00:30:19.148 "params": { 00:30:19.148 "name": "Nvme10", 00:30:19.148 "trtype": "tcp", 00:30:19.148 "traddr": "10.0.0.2", 00:30:19.148 "adrfam": "ipv4", 00:30:19.148 "trsvcid": "4420", 00:30:19.148 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:30:19.148 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:30:19.148 "hdgst": false, 00:30:19.148 "ddgst": false 00:30:19.148 }, 00:30:19.148 "method": "bdev_nvme_attach_controller" 00:30:19.148 }' 00:30:19.148 [2024-10-09 11:10:39.081831] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:19.148 [2024-10-09 11:10:39.113265] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:19.148 [2024-10-09 11:10:39.131492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:21.060 Running I/O for 10 seconds... 
00:30:21.060 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:30:21.060 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0
00:30:21.060 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init
00:30:21.060 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:21.060 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:30:21.060 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:21.060 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:30:21.060 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1
00:30:21.060 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']'
00:30:21.060 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']'
00:30:21.060 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1
00:30:21.060 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i
00:30:21.060 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 ))
00:30:21.060 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 ))
00:30:21.060 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:30:21.060 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:21.060 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:30:21.060 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops'
00:30:21.060 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:21.060 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3
00:30:21.060 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']'
00:30:21.060 11:10:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25
00:30:21.320 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- ))
00:30:21.320 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 ))
00:30:21.320 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:30:21.320 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops'
00:30:21.320 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:21.320 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:30:21.320 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:21.320 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67
00:30:21.320 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']'
00:30:21.320 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25
00:30:21.585 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- ))
00:30:21.585 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 ))
00:30:21.585 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:30:21.585 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops'
00:30:21.585 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:21.586 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:30:21.586 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:21.586 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131
00:30:21.586 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']'
00:30:21.586 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0
00:30:21.586 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break
00:30:21.586 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0
00:30:21.586 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 1997360
00:30:21.586 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 1997360 ']'
00:30:21.586 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 1997360
00:30:21.586 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # uname
00:30:21.586 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:30:21.586 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1997360
00:30:21.586 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:30:21.586 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
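The xtrace above is shutdown.sh's waitforio helper: it polls bdevperf's iostat until the Nvme1n1 bdev has completed at least 100 reads, which here succeeds on the third 0.25-second poll (3, then 67, then 131 read ops). Reconstructed from the trace as a sketch (not a verbatim copy of shutdown.sh; rpc_cmd is the autotest_common.sh wrapper around rpc.py):

  # Poll bdev_get_iostat until the bdev has served >= 100 reads, 10 tries max.
  waitforio() {
      local sock=$1 bdev=$2
      local ret=1 i read_io_count
      for ((i = 10; i != 0; i--)); do
          # num_read_ops only grows if bdevperf is actually driving I/O.
          read_io_count=$(rpc_cmd -s "$sock" bdev_get_iostat -b "$bdev" |
              jq -r '.bdevs[0].num_read_ops')
          if [ "$read_io_count" -ge 100 ]; then
              ret=0
              break
          fi
          sleep 0.25
      done
      return $ret
  }

Only after this returns 0 does the test move on to killing the target, so the shutdown is guaranteed to happen while I/O is genuinely in flight.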
00:30:21.586 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1997360'
killing process with pid 1997360
11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@969 -- # kill 1997360
11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@974 -- # wait 1997360
00:30:21.586 [2024-10-09 11:10:41.551119] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2592df0 is same with the state(6) to be set
00:30:21.586 [2024-10-09 11:10:41.553704] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241fce0 is same with the state(6) to be set
00:30:21.587 [2024-10-09 11:10:41.554929] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24201d0 is same with the state(6) to be set
00:30:21.588 [2024-10-09 11:10:41.555861] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24206a0 is same with the state(6) to be set
00:30:21.589 [2024-10-09 11:10:41.556972] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2420b70 is same with the state(6) to be set
00:30:21.589 [2024-10-09 11:10:41.558302] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2421530 is same with the state(6) to be set
with the state(6) to be set 00:30:21.590 [2024-10-09 11:10:41.558559] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2421530 is same with the state(6) to be set 00:30:21.590 [2024-10-09 11:10:41.558564] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2421530 is same with the state(6) to be set 00:30:21.590 [2024-10-09 11:10:41.558569] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2421530 is same with the state(6) to be set 00:30:21.590 [2024-10-09 11:10:41.558573] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2421530 is same with the state(6) to be set 00:30:21.590 [2024-10-09 11:10:41.558578] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2421530 is same with the state(6) to be set 00:30:21.590 [2024-10-09 11:10:41.558583] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2421530 is same with the state(6) to be set 00:30:21.590 [2024-10-09 11:10:41.558588] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2421530 is same with the state(6) to be set 00:30:21.590 [2024-10-09 11:10:41.558592] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2421530 is same with the state(6) to be set 00:30:21.590 [2024-10-09 11:10:41.558597] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2421530 is same with the state(6) to be set 00:30:21.590 [2024-10-09 11:10:41.558602] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2421530 is same with the state(6) to be set 00:30:21.590 [2024-10-09 11:10:41.558607] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2421530 is same with the state(6) to be set 00:30:21.590 [2024-10-09 11:10:41.558614] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2421530 is same with the state(6) to be set 00:30:21.590 [2024-10-09 11:10:41.558618] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2421530 is same with the state(6) to be set 00:30:21.590 [2024-10-09 11:10:41.559072] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2592920 is same with the state(6) to be set 00:30:21.590 [2024-10-09 11:10:41.559087] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2592920 is same with the state(6) to be set 00:30:21.590 [2024-10-09 11:10:41.559092] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2592920 is same with the state(6) to be set 00:30:21.590 [2024-10-09 11:10:41.559097] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2592920 is same with the state(6) to be set 00:30:21.590 [2024-10-09 11:10:41.559102] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2592920 is same with the state(6) to be set 00:30:21.590 [2024-10-09 11:10:41.559106] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2592920 is same with the state(6) to be set 00:30:21.590 [2024-10-09 11:10:41.559112] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2592920 is same with the state(6) to be set 00:30:21.590 [2024-10-09 11:10:41.559117] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2592920 is same with the state(6) to be set 00:30:21.590 [2024-10-09 11:10:41.559122] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2592920 is same with the state(6) to be set 00:30:21.590 [2024-10-09 11:10:41.559127] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2592920 is same with the state(6) to be set 00:30:21.590 [2024-10-09 11:10:41.559131] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2592920 is same with the state(6) to be set 00:30:21.590 [2024-10-09 11:10:41.559136] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2592920 is same with the state(6) to be set 00:30:21.590 [2024-10-09 11:10:41.559141] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2592920 is same with the state(6) to be set 00:30:21.590 [2024-10-09 11:10:41.559146] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2592920 is same with the state(6) to be set 00:30:21.590 [2024-10-09 11:10:41.559150] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2592920 is same with the state(6) to be set 00:30:21.590 [2024-10-09 11:10:41.559155] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2592920 is same with the state(6) to be set 00:30:21.590 [2024-10-09 11:10:41.559159] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2592920 is same with the state(6) to be set 00:30:21.590 [2024-10-09 11:10:41.559164] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2592920 is same with the state(6) to be set 00:30:21.590 [2024-10-09 11:10:41.559169] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2592920 is same with the state(6) to be set 00:30:21.590 [2024-10-09 11:10:41.559174] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2592920 is same with the state(6) to be set 00:30:21.590 [2024-10-09 11:10:41.559178] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2592920 is same with the state(6) to be set 00:30:21.590 [2024-10-09 11:10:41.559183] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2592920 is same with the state(6) to be set 00:30:21.590 [2024-10-09 11:10:41.559187] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2592920 is same with the state(6) to be set 00:30:21.590 [2024-10-09 11:10:41.559198] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2592920 is same with the state(6) to be set 00:30:21.590 [2024-10-09 11:10:41.559203] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2592920 is same with the state(6) to be set 00:30:21.590 [2024-10-09 11:10:41.559207] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2592920 is same with the state(6) to be set 00:30:21.590 [2024-10-09 11:10:41.559212] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2592920 is same with the state(6) to be set 00:30:21.590 [2024-10-09 11:10:41.559216] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2592920 is same with the state(6) to be set 00:30:21.590 [2024-10-09 11:10:41.559221] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2592920 is same with the state(6) to be set 00:30:21.591 [2024-10-09 11:10:41.559226] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2592920 is same with the 
state(6) to be set 00:30:21.591 [2024-10-09 11:10:41.559231] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2592920 is same with the state(6) to be set 00:30:21.591 [2024-10-09 11:10:41.559235] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2592920 is same with the state(6) to be set 00:30:21.591 [2024-10-09 11:10:41.559240] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2592920 is same with the state(6) to be set 00:30:21.591 [2024-10-09 11:10:41.559244] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2592920 is same with the state(6) to be set 00:30:21.591 [2024-10-09 11:10:41.559249] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2592920 is same with the state(6) to be set 00:30:21.591 [2024-10-09 11:10:41.559254] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2592920 is same with the state(6) to be set 00:30:21.591 [2024-10-09 11:10:41.559258] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2592920 is same with the state(6) to be set 00:30:21.591 [2024-10-09 11:10:41.559263] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2592920 is same with the state(6) to be set 00:30:21.591 [2024-10-09 11:10:41.559267] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2592920 is same with the state(6) to be set 00:30:21.591 [2024-10-09 11:10:41.559272] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2592920 is same with the state(6) to be set 00:30:21.591 [2024-10-09 11:10:41.559278] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2592920 is same with the state(6) to be set 00:30:21.591 [2024-10-09 11:10:41.560846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.591 [2024-10-09 11:10:41.560884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.591 [2024-10-09 11:10:41.560902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.591 [2024-10-09 11:10:41.560910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.591 [2024-10-09 11:10:41.560920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.591 [2024-10-09 11:10:41.560929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.591 [2024-10-09 11:10:41.560938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.591 [2024-10-09 11:10:41.560946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.591 [2024-10-09 11:10:41.560956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.591 [2024-10-09 11:10:41.560967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.591 [2024-10-09 11:10:41.560977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.591 [2024-10-09 11:10:41.560984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.591 [2024-10-09 11:10:41.560994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.591 [2024-10-09 11:10:41.561001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.591 [2024-10-09 11:10:41.561011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.591 [2024-10-09 11:10:41.561018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.591 [2024-10-09 11:10:41.561027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.591 [2024-10-09 11:10:41.561035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.591 [2024-10-09 11:10:41.561044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.591 [2024-10-09 11:10:41.561051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.591 [2024-10-09 11:10:41.561061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.591 [2024-10-09 11:10:41.561068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.591 [2024-10-09 11:10:41.561078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.591 [2024-10-09 11:10:41.561085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.591 [2024-10-09 11:10:41.561094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.591 [2024-10-09 11:10:41.561101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.591 [2024-10-09 11:10:41.561110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.591 [2024-10-09 11:10:41.561118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.591 [2024-10-09 11:10:41.561127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.591 [2024-10-09 11:10:41.561134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.591 [2024-10-09 11:10:41.561144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.591 [2024-10-09 11:10:41.561151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.591 [2024-10-09 11:10:41.561161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.591 [2024-10-09 11:10:41.561168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.591 [2024-10-09 11:10:41.561179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.591 [2024-10-09 11:10:41.561186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.591 [2024-10-09 11:10:41.561195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.591 [2024-10-09 11:10:41.561203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.591 [2024-10-09 11:10:41.561212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.591 [2024-10-09 11:10:41.561219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.591 [2024-10-09 11:10:41.561229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.591 [2024-10-09 11:10:41.561236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.591 [2024-10-09 11:10:41.561245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.591 [2024-10-09 11:10:41.561252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.591 [2024-10-09 11:10:41.561262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.591 [2024-10-09 11:10:41.561269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.591 [2024-10-09 11:10:41.561279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.591 [2024-10-09 11:10:41.561286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.591 [2024-10-09 11:10:41.561296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.591 [2024-10-09 11:10:41.561303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:30:21.591 [2024-10-09 11:10:41.561313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.591 [2024-10-09 11:10:41.561320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.591 [2024-10-09 11:10:41.561330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.591 [2024-10-09 11:10:41.561338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.591 [2024-10-09 11:10:41.561348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.591 [2024-10-09 11:10:41.561355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.591 [2024-10-09 11:10:41.561364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.591 [2024-10-09 11:10:41.561372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.591 [2024-10-09 11:10:41.561381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.591 [2024-10-09 11:10:41.561390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.591 [2024-10-09 11:10:41.561399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.591 [2024-10-09 11:10:41.561406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.591 [2024-10-09 11:10:41.561415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.591 [2024-10-09 11:10:41.561423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.591 [2024-10-09 11:10:41.561432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.591 [2024-10-09 11:10:41.561440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.591 [2024-10-09 11:10:41.561449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.591 [2024-10-09 11:10:41.561456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.591 [2024-10-09 11:10:41.561472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.591 [2024-10-09 11:10:41.561480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.592 
[2024-10-09 11:10:41.561489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.592 [2024-10-09 11:10:41.561496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.592 [2024-10-09 11:10:41.561506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.592 [2024-10-09 11:10:41.561513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.592 [2024-10-09 11:10:41.561522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.592 [2024-10-09 11:10:41.561530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.592 [2024-10-09 11:10:41.561539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.592 [2024-10-09 11:10:41.561547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.592 [2024-10-09 11:10:41.561556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.592 [2024-10-09 11:10:41.561563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.592 [2024-10-09 11:10:41.561573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.592 [2024-10-09 11:10:41.561580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.592 [2024-10-09 11:10:41.561589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.592 [2024-10-09 11:10:41.561597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.592 [2024-10-09 11:10:41.561611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.592 [2024-10-09 11:10:41.561618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.592 [2024-10-09 11:10:41.561627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.592 [2024-10-09 11:10:41.561635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.592 [2024-10-09 11:10:41.561645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.592 [2024-10-09 11:10:41.561652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.592 [2024-10-09 
11:10:41.561661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.592 [2024-10-09 11:10:41.561669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.592 [2024-10-09 11:10:41.561678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.592 [2024-10-09 11:10:41.561686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.592 [2024-10-09 11:10:41.561695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.592 [2024-10-09 11:10:41.561703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.592 [2024-10-09 11:10:41.561712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.592 [2024-10-09 11:10:41.561719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.592 [2024-10-09 11:10:41.561728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.592 [2024-10-09 11:10:41.561736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.592 [2024-10-09 11:10:41.561745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.592 [2024-10-09 11:10:41.561753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.592 [2024-10-09 11:10:41.561762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.592 [2024-10-09 11:10:41.561769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.592 [2024-10-09 11:10:41.561779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.592 [2024-10-09 11:10:41.561785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.592 [2024-10-09 11:10:41.561795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.592 [2024-10-09 11:10:41.561802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.592 [2024-10-09 11:10:41.561812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.592 [2024-10-09 11:10:41.561821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.592 [2024-10-09 11:10:41.561831] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.592 [2024-10-09 11:10:41.561838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.592 [2024-10-09 11:10:41.561847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.592 [2024-10-09 11:10:41.561854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.592 [2024-10-09 11:10:41.561864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.592 [2024-10-09 11:10:41.561871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.592 [2024-10-09 11:10:41.561881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.592 [2024-10-09 11:10:41.561888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.592 [2024-10-09 11:10:41.561897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.592 [2024-10-09 11:10:41.561904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.592 [2024-10-09 11:10:41.561915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.592 [2024-10-09 11:10:41.561922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.592 [2024-10-09 11:10:41.561931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.592 [2024-10-09 11:10:41.561939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.592 [2024-10-09 11:10:41.561948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.592 [2024-10-09 11:10:41.561956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.592 [2024-10-09 11:10:41.561965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.592 [2024-10-09 11:10:41.561972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.592 [2024-10-09 11:10:41.562002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:21.592 [2024-10-09 11:10:41.562045] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2769f40 was disconnected and freed. reset controller. 
00:30:21.592 [2024-10-09 11:10:41.569479 - 11:10:41.569601] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2592920 is same with the state(6) to be set (identical message repeated between these timestamps)
00:30:21.864 [2024-10-09 11:10:41.583009 - 11:10:41.583950] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0-3 nsid:0 cdw10:00000000 cdw11:00000000; each followed by nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0; this four-command pattern recurs once per admin qpair, each block closed by nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x... is same with the state(6) to be set, for tqpairs 0x2562190, 0x2564490, 0x29be8f0, 0x298eca0, 0x246e610, 0x2562c00, 0x25648f0, 0x29d9fb0, 0x29a1a10 and 0x2985690
00:30:21.865 [2024-10-09 11:10:41.584002 - 11:10:41.584646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0-35 nsid:1 lba:24576-29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 (one command per cid, lba stepping by 128); each followed by nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.866 [2024-10-09
11:10:41.584656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.866 [2024-10-09 11:10:41.584664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.866 [2024-10-09 11:10:41.584674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.866 [2024-10-09 11:10:41.584681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.866 [2024-10-09 11:10:41.584691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.866 [2024-10-09 11:10:41.584699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.866 [2024-10-09 11:10:41.584708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.866 [2024-10-09 11:10:41.584716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.866 [2024-10-09 11:10:41.584726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.866 [2024-10-09 11:10:41.584733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.866 [2024-10-09 11:10:41.584743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.866 [2024-10-09 11:10:41.584751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.866 [2024-10-09 11:10:41.584760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.866 [2024-10-09 11:10:41.584767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.866 [2024-10-09 11:10:41.584778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.866 [2024-10-09 11:10:41.584786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.866 [2024-10-09 11:10:41.584795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.866 [2024-10-09 11:10:41.584803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.866 [2024-10-09 11:10:41.584813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.866 [2024-10-09 11:10:41.584820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.866 [2024-10-09 
11:10:41.584831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.866 [2024-10-09 11:10:41.584838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.866 [2024-10-09 11:10:41.584848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.866 [2024-10-09 11:10:41.584857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.866 [2024-10-09 11:10:41.584867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.866 [2024-10-09 11:10:41.584874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.866 [2024-10-09 11:10:41.584884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.866 [2024-10-09 11:10:41.584891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.866 [2024-10-09 11:10:41.584901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.866 [2024-10-09 11:10:41.584908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.866 [2024-10-09 11:10:41.584919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.866 [2024-10-09 11:10:41.584926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.866 [2024-10-09 11:10:41.584936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.866 [2024-10-09 11:10:41.584943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.866 [2024-10-09 11:10:41.584955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.866 [2024-10-09 11:10:41.584963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.867 [2024-10-09 11:10:41.584973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.867 [2024-10-09 11:10:41.584980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.867 [2024-10-09 11:10:41.584991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.867 [2024-10-09 11:10:41.584998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.867 [2024-10-09 
11:10:41.585008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.867 [2024-10-09 11:10:41.585015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.867 [2024-10-09 11:10:41.585025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.867 [2024-10-09 11:10:41.585033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.867 [2024-10-09 11:10:41.585043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.867 [2024-10-09 11:10:41.585050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.867 [2024-10-09 11:10:41.585060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.867 [2024-10-09 11:10:41.585067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.867 [2024-10-09 11:10:41.585079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.867 [2024-10-09 11:10:41.585088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.867 [2024-10-09 11:10:41.585097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.867 [2024-10-09 11:10:41.585105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.867 [2024-10-09 11:10:41.585114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.867 [2024-10-09 11:10:41.585122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.867 [2024-10-09 11:10:41.585131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.867 [2024-10-09 11:10:41.585139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.867 [2024-10-09 11:10:41.585147] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2768cc0 is same with the state(6) to be set 00:30:21.867 [2024-10-09 11:10:41.585194] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2768cc0 was disconnected and freed. reset controller. 
00:30:21.867 [2024-10-09 11:10:41.586617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.867 [2024-10-09 11:10:41.586640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.867 [2024-10-09 11:10:41.586655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.867 [2024-10-09 11:10:41.586665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.867 [2024-10-09 11:10:41.586677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.867 [2024-10-09 11:10:41.586687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.867 [2024-10-09 11:10:41.586699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.867 [2024-10-09 11:10:41.586707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.867 [2024-10-09 11:10:41.586717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.867 [2024-10-09 11:10:41.586725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.867 [2024-10-09 11:10:41.586735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.867 [2024-10-09 11:10:41.586743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.867 [2024-10-09 11:10:41.586753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.867 [2024-10-09 11:10:41.586761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.867 [2024-10-09 11:10:41.586771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.867 [2024-10-09 11:10:41.586778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.867 [2024-10-09 11:10:41.586793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.867 [2024-10-09 11:10:41.586801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.867 [2024-10-09 11:10:41.586811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.867 [2024-10-09 11:10:41.586819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.867 [2024-10-09 
11:10:41.586829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.867 [2024-10-09 11:10:41.586837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.867 [2024-10-09 11:10:41.586847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.867 [2024-10-09 11:10:41.586854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.867 [2024-10-09 11:10:41.586864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.867 [2024-10-09 11:10:41.586872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.867 [2024-10-09 11:10:41.586882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.867 [2024-10-09 11:10:41.586890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.867 [2024-10-09 11:10:41.586900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.867 [2024-10-09 11:10:41.586908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.867 [2024-10-09 11:10:41.586917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.867 [2024-10-09 11:10:41.586925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.867 [2024-10-09 11:10:41.586934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.867 [2024-10-09 11:10:41.586942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.867 [2024-10-09 11:10:41.586952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.867 [2024-10-09 11:10:41.586960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.867 [2024-10-09 11:10:41.586969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.867 [2024-10-09 11:10:41.586977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.867 [2024-10-09 11:10:41.586987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.867 [2024-10-09 11:10:41.586994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.867 [2024-10-09 
11:10:41.587004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.867 [2024-10-09 11:10:41.587014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.867 [2024-10-09 11:10:41.587024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.867 [2024-10-09 11:10:41.587031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.867 [2024-10-09 11:10:41.587042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.867 [2024-10-09 11:10:41.587050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.867 [2024-10-09 11:10:41.587060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.867 [2024-10-09 11:10:41.587068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.867 [2024-10-09 11:10:41.587077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.867 [2024-10-09 11:10:41.587085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.867 [2024-10-09 11:10:41.587095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.867 [2024-10-09 11:10:41.587102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.867 [2024-10-09 11:10:41.587112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.867 [2024-10-09 11:10:41.587120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.867 [2024-10-09 11:10:41.587130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.867 [2024-10-09 11:10:41.587138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.867 [2024-10-09 11:10:41.587148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.867 [2024-10-09 11:10:41.587156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.867 [2024-10-09 11:10:41.587166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.867 [2024-10-09 11:10:41.587174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.867 [2024-10-09 
11:10:41.587183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.868 [2024-10-09 11:10:41.587191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.868 [2024-10-09 11:10:41.587200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.868 [2024-10-09 11:10:41.587208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.868 [2024-10-09 11:10:41.587219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.868 [2024-10-09 11:10:41.587227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.868 [2024-10-09 11:10:41.587240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.868 [2024-10-09 11:10:41.587247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.868 [2024-10-09 11:10:41.587259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.868 [2024-10-09 11:10:41.587266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.868 [2024-10-09 11:10:41.587276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.868 [2024-10-09 11:10:41.587284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.868 [2024-10-09 11:10:41.587293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.868 [2024-10-09 11:10:41.587301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.868 [2024-10-09 11:10:41.587311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.868 [2024-10-09 11:10:41.587319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.868 [2024-10-09 11:10:41.587329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.868 [2024-10-09 11:10:41.587337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.868 [2024-10-09 11:10:41.587347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.868 [2024-10-09 11:10:41.587354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.868 [2024-10-09 
11:10:41.587363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.868 [2024-10-09 11:10:41.587371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.868 [2024-10-09 11:10:41.587381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.868 [2024-10-09 11:10:41.587389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.868 [2024-10-09 11:10:41.587399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.868 [2024-10-09 11:10:41.587406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.868 [2024-10-09 11:10:41.587416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.868 [2024-10-09 11:10:41.587424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.868 [2024-10-09 11:10:41.587434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.868 [2024-10-09 11:10:41.587441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.868 [2024-10-09 11:10:41.587451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.868 [2024-10-09 11:10:41.587463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.868 [2024-10-09 11:10:41.587479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.868 [2024-10-09 11:10:41.587487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.868 [2024-10-09 11:10:41.587497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.868 [2024-10-09 11:10:41.587504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.868 [2024-10-09 11:10:41.587513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.868 [2024-10-09 11:10:41.587522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.868 [2024-10-09 11:10:41.587531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.868 [2024-10-09 11:10:41.587539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.868 [2024-10-09 
11:10:41.587549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.868 [2024-10-09 11:10:41.587556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.868 [2024-10-09 11:10:41.587566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.868 [2024-10-09 11:10:41.587574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.868 [2024-10-09 11:10:41.587584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.868 [2024-10-09 11:10:41.587592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.868 [2024-10-09 11:10:41.587601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.868 [2024-10-09 11:10:41.587609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.868 [2024-10-09 11:10:41.587618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.868 [2024-10-09 11:10:41.587626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.868 [2024-10-09 11:10:41.587637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.868 [2024-10-09 11:10:41.587644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.868 [2024-10-09 11:10:41.587654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.868 [2024-10-09 11:10:41.587662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.868 [2024-10-09 11:10:41.587671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.868 [2024-10-09 11:10:41.587678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.868 [2024-10-09 11:10:41.587690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.868 [2024-10-09 11:10:41.587697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.868 [2024-10-09 11:10:41.587707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.868 [2024-10-09 11:10:41.587714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.868 [2024-10-09 
11:10:41.587724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.868 [2024-10-09 11:10:41.587731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.868 [2024-10-09 11:10:41.587741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.868 [2024-10-09 11:10:41.587749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.868 [2024-10-09 11:10:41.587758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.868 [2024-10-09 11:10:41.587766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.868 [2024-10-09 11:10:41.587775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.868 [2024-10-09 11:10:41.587783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.868 [2024-10-09 11:10:41.587791] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2965760 is same with the state(6) to be set 00:30:21.868 [2024-10-09 11:10:41.587838] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2965760 was disconnected and freed. reset controller. 00:30:21.868 [2024-10-09 11:10:41.587987] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:30:21.868 [2024-10-09 11:10:41.588012] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2564490 (9): Bad file descriptor 00:30:21.869 [2024-10-09 11:10:41.590647] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:21.869 [2024-10-09 11:10:41.590677] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25648f0 (9): Bad file descriptor 00:30:21.869 [2024-10-09 11:10:41.591263] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:30:21.869 [2024-10-09 11:10:41.591289] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x246e610 (9): Bad file descriptor 00:30:21.869 [2024-10-09 11:10:41.591773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.869 [2024-10-09 11:10:41.591814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2564490 with addr=10.0.0.2, port=4420 00:30:21.869 [2024-10-09 11:10:41.591827] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2564490 is same with the state(6) to be set 00:30:21.869 [2024-10-09 11:10:41.592179] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:30:21.869 [2024-10-09 11:10:41.592225] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:30:21.869 [2024-10-09 11:10:41.592264] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:30:21.869 [2024-10-09 11:10:41.592303] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 
00:30:21.869 [2024-10-09 11:10:41.592603] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:30:21.869 [2024-10-09 11:10:41.592651] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:30:21.869 [2024-10-09 11:10:41.592689] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:30:21.869 [2024-10-09 11:10:41.593082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.869 [2024-10-09 11:10:41.593098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x25648f0 with addr=10.0.0.2, port=4420 00:30:21.869 [2024-10-09 11:10:41.593107] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25648f0 is same with the state(6) to be set 00:30:21.869 [2024-10-09 11:10:41.593129] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2564490 (9): Bad file descriptor 00:30:21.869 [2024-10-09 11:10:41.593503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.869 [2024-10-09 11:10:41.593530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x246e610 with addr=10.0.0.2, port=4420 00:30:21.869 [2024-10-09 11:10:41.593539] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x246e610 is same with the state(6) to be set 00:30:21.869 [2024-10-09 11:10:41.593552] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25648f0 (9): Bad file descriptor 00:30:21.869 [2024-10-09 11:10:41.593561] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:30:21.869 [2024-10-09 11:10:41.593569] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:30:21.869 [2024-10-09 11:10:41.593578] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:30:21.869 [2024-10-09 11:10:41.593597] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2562190 (9): Bad file descriptor 00:30:21.869 [2024-10-09 11:10:41.593617] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x29be8f0 (9): Bad file descriptor 00:30:21.869 [2024-10-09 11:10:41.593636] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x298eca0 (9): Bad file descriptor 00:30:21.869 [2024-10-09 11:10:41.593657] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2562c00 (9): Bad file descriptor 00:30:21.869 [2024-10-09 11:10:41.593675] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x29d9fb0 (9): Bad file descriptor 00:30:21.869 [2024-10-09 11:10:41.593694] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x29a1a10 (9): Bad file descriptor 00:30:21.869 [2024-10-09 11:10:41.593715] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2985690 (9): Bad file descriptor 00:30:21.869 [2024-10-09 11:10:41.593799] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:21.869 [2024-10-09 11:10:41.593815] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x246e610 (9): Bad file descriptor 00:30:21.869 [2024-10-09 11:10:41.593824] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:21.869 [2024-10-09 11:10:41.593831] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:21.869 [2024-10-09 11:10:41.593838] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:21.869 [2024-10-09 11:10:41.593879] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:21.869 [2024-10-09 11:10:41.593887] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:30:21.869 [2024-10-09 11:10:41.593893] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:30:21.869 [2024-10-09 11:10:41.593900] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:30:21.869 [2024-10-09 11:10:41.593942] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:21.869 [2024-10-09 11:10:41.600786] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:30:21.869 [2024-10-09 11:10:41.601213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.869 [2024-10-09 11:10:41.601226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2564490 with addr=10.0.0.2, port=4420 00:30:21.869 [2024-10-09 11:10:41.601234] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2564490 is same with the state(6) to be set 00:30:21.869 [2024-10-09 11:10:41.601274] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2564490 (9): Bad file descriptor 00:30:21.869 [2024-10-09 11:10:41.601312] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:30:21.869 [2024-10-09 11:10:41.601319] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:30:21.869 [2024-10-09 11:10:41.601326] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:30:21.869 [2024-10-09 11:10:41.601367] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:21.869 [2024-10-09 11:10:41.601951] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:21.869 [2024-10-09 11:10:41.602333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.869 [2024-10-09 11:10:41.602346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x25648f0 with addr=10.0.0.2, port=4420 00:30:21.869 [2024-10-09 11:10:41.602354] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25648f0 is same with the state(6) to be set 00:30:21.869 [2024-10-09 11:10:41.602393] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25648f0 (9): Bad file descriptor 00:30:21.869 [2024-10-09 11:10:41.602431] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:21.869 [2024-10-09 11:10:41.602438] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:21.869 [2024-10-09 11:10:41.602445] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:21.869 [2024-10-09 11:10:41.602491] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:21.869 [2024-10-09 11:10:41.603213] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:30:21.869 [2024-10-09 11:10:41.603749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.869 [2024-10-09 11:10:41.603789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x246e610 with addr=10.0.0.2, port=4420 00:30:21.869 [2024-10-09 11:10:41.603801] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x246e610 is same with the state(6) to be set 00:30:21.869 [2024-10-09 11:10:41.603930] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x246e610 (9): Bad file descriptor 00:30:21.869 [2024-10-09 11:10:41.603985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.869 [2024-10-09 11:10:41.603998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.869 [2024-10-09 11:10:41.604017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.869 [2024-10-09 11:10:41.604026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.869 [2024-10-09 11:10:41.604036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.869 [2024-10-09 11:10:41.604044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.869 [2024-10-09 11:10:41.604059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.869 [2024-10-09 11:10:41.604069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.869 [2024-10-09 11:10:41.604079] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.869 [2024-10-09 11:10:41.604086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 59 further READ commands (cid:5-63, lba:25216-32640, len:128 each) follow, each paired with an identical "ABORTED - SQ DELETION (00/08)" completion ...]
00:30:21.871 [2024-10-09 11:10:41.605127] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2954370 is same with the state(6) to be set
00:30:21.871 [2024-10-09 11:10:41.606430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.871 [2024-10-09 11:10:41.606446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... WRITE cid:1-4 (lba:32896-33280) and READ cid:5-63 (lba:25216-32640) follow interleaved, each paired with an identical "ABORTED - SQ DELETION (00/08)" completion ...]
00:30:21.872 [2024-10-09 11:10:41.607589] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2961740 is same with the state(6) to be set
00:30:21.872 [2024-10-09 11:10:41.608857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.872 [2024-10-09 11:10:41.608871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... READ cid:1-63 (lba:16512-24448, len:128 each) follow, each paired with an identical "ABORTED - SQ DELETION (00/08)" completion ...]
00:30:21.874 [2024-10-09 11:10:41.610014] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2962c70 is same with the state(6) to be set
00:30:21.874 [2024-10-09 11:10:41.611284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:21.874 [2024-10-09 11:10:41.611297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... READ cid:1-10 (lba:24704-25856) follow, each paired with an identical "ABORTED - SQ DELETION (00/08)" completion ...]
00:30:21.874 [2024-10-09 11:10:41.611492] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.874 [2024-10-09 11:10:41.611500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.874 [2024-10-09 11:10:41.611510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.874 [2024-10-09 11:10:41.611517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.874 [2024-10-09 11:10:41.611527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.874 [2024-10-09 11:10:41.611535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.874 [2024-10-09 11:10:41.611545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.874 [2024-10-09 11:10:41.611552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.874 [2024-10-09 11:10:41.611562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.874 [2024-10-09 11:10:41.611570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.874 [2024-10-09 11:10:41.611580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.874 [2024-10-09 11:10:41.611587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.874 [2024-10-09 11:10:41.611596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.874 [2024-10-09 11:10:41.611604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.874 [2024-10-09 11:10:41.611614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.874 [2024-10-09 11:10:41.611621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.874 [2024-10-09 11:10:41.611630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.874 [2024-10-09 11:10:41.611638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.874 [2024-10-09 11:10:41.611647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.874 [2024-10-09 11:10:41.611654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.874 [2024-10-09 11:10:41.611668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.874 [2024-10-09 11:10:41.611675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.874 [2024-10-09 11:10:41.611685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.874 [2024-10-09 11:10:41.611694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.874 [2024-10-09 11:10:41.611703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.874 [2024-10-09 11:10:41.611711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.874 [2024-10-09 11:10:41.611720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.874 [2024-10-09 11:10:41.611728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.874 [2024-10-09 11:10:41.611737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.874 [2024-10-09 11:10:41.611744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.874 [2024-10-09 11:10:41.611754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.875 [2024-10-09 11:10:41.611761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.875 [2024-10-09 11:10:41.611772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.875 [2024-10-09 11:10:41.611779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.875 [2024-10-09 11:10:41.611789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.875 [2024-10-09 11:10:41.611796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.875 [2024-10-09 11:10:41.611806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.875 [2024-10-09 11:10:41.611813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.875 [2024-10-09 11:10:41.611823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.875 [2024-10-09 11:10:41.611830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.875 [2024-10-09 11:10:41.611839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.875 [2024-10-09 11:10:41.611847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.875 [2024-10-09 11:10:41.611856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.875 [2024-10-09 11:10:41.611863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.875 [2024-10-09 11:10:41.611873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.875 [2024-10-09 11:10:41.611881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.875 [2024-10-09 11:10:41.611890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.875 [2024-10-09 11:10:41.611897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.875 [2024-10-09 11:10:41.611909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.875 [2024-10-09 11:10:41.611916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.875 [2024-10-09 11:10:41.611925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.875 [2024-10-09 11:10:41.611932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.875 [2024-10-09 11:10:41.611942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.875 [2024-10-09 11:10:41.611949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.875 [2024-10-09 11:10:41.611958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.875 [2024-10-09 11:10:41.611965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.875 [2024-10-09 11:10:41.611975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.875 [2024-10-09 11:10:41.611982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.875 [2024-10-09 11:10:41.611993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.875 [2024-10-09 11:10:41.612001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.875 [2024-10-09 11:10:41.612011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:30:21.875 [2024-10-09 11:10:41.612019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.875 [2024-10-09 11:10:41.612028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.875 [2024-10-09 11:10:41.612037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.875 [2024-10-09 11:10:41.612047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.875 [2024-10-09 11:10:41.612055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.875 [2024-10-09 11:10:41.612064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.875 [2024-10-09 11:10:41.612071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.875 [2024-10-09 11:10:41.612081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.875 [2024-10-09 11:10:41.612089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.875 [2024-10-09 11:10:41.612099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.875 [2024-10-09 11:10:41.612106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.875 [2024-10-09 11:10:41.612115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.875 [2024-10-09 11:10:41.612125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.875 [2024-10-09 11:10:41.612134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.875 [2024-10-09 11:10:41.612142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.875 [2024-10-09 11:10:41.612152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.875 [2024-10-09 11:10:41.612159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.875 [2024-10-09 11:10:41.612168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.875 [2024-10-09 11:10:41.612176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.875 [2024-10-09 11:10:41.612186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:21.875 [2024-10-09 11:10:41.612193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.875 [2024-10-09 11:10:41.612203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.875 [2024-10-09 11:10:41.612210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.875 [2024-10-09 11:10:41.612221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.875 [2024-10-09 11:10:41.612229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.875 [2024-10-09 11:10:41.612239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.875 [2024-10-09 11:10:41.612246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.875 [2024-10-09 11:10:41.612256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.875 [2024-10-09 11:10:41.612262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.875 [2024-10-09 11:10:41.612272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.875 [2024-10-09 11:10:41.612280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.875 [2024-10-09 11:10:41.612289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.875 [2024-10-09 11:10:41.612296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.875 [2024-10-09 11:10:41.612306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.875 [2024-10-09 11:10:41.612313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.875 [2024-10-09 11:10:41.612323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.875 [2024-10-09 11:10:41.612330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.875 [2024-10-09 11:10:41.612341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.875 [2024-10-09 11:10:41.612349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.875 [2024-10-09 11:10:41.612358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.875 [2024-10-09 
11:10:41.612366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.875 [2024-10-09 11:10:41.612376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.875 [2024-10-09 11:10:41.612383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.875 [2024-10-09 11:10:41.612392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.875 [2024-10-09 11:10:41.612399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.875 [2024-10-09 11:10:41.612408] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x29641a0 is same with the state(6) to be set 00:30:21.875 [2024-10-09 11:10:41.613690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.875 [2024-10-09 11:10:41.613704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.875 [2024-10-09 11:10:41.613716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.875 [2024-10-09 11:10:41.613723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.875 [2024-10-09 11:10:41.613733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.875 [2024-10-09 11:10:41.613741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.875 [2024-10-09 11:10:41.613751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.875 [2024-10-09 11:10:41.613759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.876 [2024-10-09 11:10:41.613769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.876 [2024-10-09 11:10:41.613777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.876 [2024-10-09 11:10:41.613787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.876 [2024-10-09 11:10:41.613794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.876 [2024-10-09 11:10:41.613803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.876 [2024-10-09 11:10:41.613812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.876 [2024-10-09 11:10:41.613822] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.876 [2024-10-09 11:10:41.613830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.876 [2024-10-09 11:10:41.613841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.876 [2024-10-09 11:10:41.613849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.876 [2024-10-09 11:10:41.613859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.876 [2024-10-09 11:10:41.613867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.876 [2024-10-09 11:10:41.613877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.876 [2024-10-09 11:10:41.613885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.876 [2024-10-09 11:10:41.613894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.876 [2024-10-09 11:10:41.613902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.876 [2024-10-09 11:10:41.613911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.876 [2024-10-09 11:10:41.613919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.876 [2024-10-09 11:10:41.613929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.876 [2024-10-09 11:10:41.613936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.876 [2024-10-09 11:10:41.613946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.876 [2024-10-09 11:10:41.613953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.876 [2024-10-09 11:10:41.613962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.876 [2024-10-09 11:10:41.613969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.876 [2024-10-09 11:10:41.613979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.876 [2024-10-09 11:10:41.613986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.876 [2024-10-09 11:10:41.614000] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.876 [2024-10-09 11:10:41.614008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.876 [2024-10-09 11:10:41.614017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.876 [2024-10-09 11:10:41.614025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.876 [2024-10-09 11:10:41.614034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.876 [2024-10-09 11:10:41.614041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.876 [2024-10-09 11:10:41.614051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.876 [2024-10-09 11:10:41.614060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.876 [2024-10-09 11:10:41.614070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.876 [2024-10-09 11:10:41.614077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.876 [2024-10-09 11:10:41.614088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.876 [2024-10-09 11:10:41.614095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.876 [2024-10-09 11:10:41.614105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.876 [2024-10-09 11:10:41.614112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.876 [2024-10-09 11:10:41.614122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.876 [2024-10-09 11:10:41.614130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.876 [2024-10-09 11:10:41.614139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.876 [2024-10-09 11:10:41.614146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.876 [2024-10-09 11:10:41.614156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.876 [2024-10-09 11:10:41.614164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.876 [2024-10-09 11:10:41.614173] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.876 [2024-10-09 11:10:41.614181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.876 [2024-10-09 11:10:41.614191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.876 [2024-10-09 11:10:41.614198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.876 [2024-10-09 11:10:41.614208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.876 [2024-10-09 11:10:41.614215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.876 [2024-10-09 11:10:41.614224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.876 [2024-10-09 11:10:41.614232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.876 [2024-10-09 11:10:41.614241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.876 [2024-10-09 11:10:41.614248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.876 [2024-10-09 11:10:41.614257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.876 [2024-10-09 11:10:41.614265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.876 [2024-10-09 11:10:41.614274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.876 [2024-10-09 11:10:41.614283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.876 [2024-10-09 11:10:41.614292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.876 [2024-10-09 11:10:41.614300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.876 [2024-10-09 11:10:41.614310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.876 [2024-10-09 11:10:41.620721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.876 [2024-10-09 11:10:41.620772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.876 [2024-10-09 11:10:41.620783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.876 [2024-10-09 11:10:41.620793] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.876 [2024-10-09 11:10:41.620801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.876 [2024-10-09 11:10:41.620812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.876 [2024-10-09 11:10:41.620819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.876 [2024-10-09 11:10:41.620831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.876 [2024-10-09 11:10:41.620839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.876 [2024-10-09 11:10:41.620849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.876 [2024-10-09 11:10:41.620857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.876 [2024-10-09 11:10:41.620866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.876 [2024-10-09 11:10:41.620874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.876 [2024-10-09 11:10:41.620884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.876 [2024-10-09 11:10:41.620891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.876 [2024-10-09 11:10:41.620901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.876 [2024-10-09 11:10:41.620908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.876 [2024-10-09 11:10:41.620918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.876 [2024-10-09 11:10:41.620925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.876 [2024-10-09 11:10:41.620935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.876 [2024-10-09 11:10:41.620943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.877 [2024-10-09 11:10:41.620958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.877 [2024-10-09 11:10:41.620966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.877 [2024-10-09 11:10:41.620975] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.877 [2024-10-09 11:10:41.620983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.877 [2024-10-09 11:10:41.620993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.877 [2024-10-09 11:10:41.621000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.877 [2024-10-09 11:10:41.621011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.877 [2024-10-09 11:10:41.621018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.877 [2024-10-09 11:10:41.621029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.877 [2024-10-09 11:10:41.621036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.877 [2024-10-09 11:10:41.621045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.877 [2024-10-09 11:10:41.621053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.877 [2024-10-09 11:10:41.621063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.877 [2024-10-09 11:10:41.621070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.877 [2024-10-09 11:10:41.621080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.877 [2024-10-09 11:10:41.621088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.877 [2024-10-09 11:10:41.621097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.877 [2024-10-09 11:10:41.621105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.877 [2024-10-09 11:10:41.621115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.877 [2024-10-09 11:10:41.621123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.877 [2024-10-09 11:10:41.621133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.877 [2024-10-09 11:10:41.621140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.877 [2024-10-09 11:10:41.621150] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.877 [2024-10-09 11:10:41.621158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.877 [2024-10-09 11:10:41.621167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.877 [2024-10-09 11:10:41.621176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.877 [2024-10-09 11:10:41.621186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.877 [2024-10-09 11:10:41.621193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.877 [2024-10-09 11:10:41.621204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.877 [2024-10-09 11:10:41.621211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.877 [2024-10-09 11:10:41.621220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.877 [2024-10-09 11:10:41.621227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.877 [2024-10-09 11:10:41.621238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.877 [2024-10-09 11:10:41.621245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.877 [2024-10-09 11:10:41.621255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.877 [2024-10-09 11:10:41.621262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.877 [2024-10-09 11:10:41.621271] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2966ce0 is same with the state(6) to be set 00:30:21.877 [2024-10-09 11:10:41.622617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.877 [2024-10-09 11:10:41.622635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.877 [2024-10-09 11:10:41.622652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.877 [2024-10-09 11:10:41.622662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.877 [2024-10-09 11:10:41.622674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.877 [2024-10-09 11:10:41.622683] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.877 [2024-10-09 11:10:41.622695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.877 [2024-10-09 11:10:41.622704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.877 [2024-10-09 11:10:41.622714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.877 [2024-10-09 11:10:41.622721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.877 [2024-10-09 11:10:41.622731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.877 [2024-10-09 11:10:41.622739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.877 [2024-10-09 11:10:41.622749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.877 [2024-10-09 11:10:41.622762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.877 [2024-10-09 11:10:41.622772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.877 [2024-10-09 11:10:41.622780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.877 [2024-10-09 11:10:41.622790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.877 [2024-10-09 11:10:41.622797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.877 [2024-10-09 11:10:41.622807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.877 [2024-10-09 11:10:41.622815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.877 [2024-10-09 11:10:41.622825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.877 [2024-10-09 11:10:41.622832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.877 [2024-10-09 11:10:41.622842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.877 [2024-10-09 11:10:41.622849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.877 [2024-10-09 11:10:41.622860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.877 [2024-10-09 11:10:41.622868] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.877 [2024-10-09 11:10:41.622877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.877 [2024-10-09 11:10:41.622886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.877 [2024-10-09 11:10:41.622895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.877 [2024-10-09 11:10:41.622903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.877 [2024-10-09 11:10:41.622913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.877 [2024-10-09 11:10:41.622921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.877 [2024-10-09 11:10:41.622931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.877 [2024-10-09 11:10:41.622938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.877 [2024-10-09 11:10:41.622948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.877 [2024-10-09 11:10:41.622957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.877 [2024-10-09 11:10:41.622966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.877 [2024-10-09 11:10:41.622974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.878 [2024-10-09 11:10:41.622986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.878 [2024-10-09 11:10:41.622996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.878 [2024-10-09 11:10:41.623006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.878 [2024-10-09 11:10:41.623014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.878 [2024-10-09 11:10:41.623024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.878 [2024-10-09 11:10:41.623032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.878 [2024-10-09 11:10:41.623041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.878 [2024-10-09 11:10:41.623049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:21.878 [2024-10-09 11:10:41.623058-623766] [... 41 repeated NOTICE pairs elided: nvme_qpair.c: 243:nvme_io_qpair_print_command READ sqid:1 cid:23 through cid:63 nsid:1, lba:19328 through lba:24448 in steps of 128, len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each followed by nvme_qpair.c: 474:spdk_nvme_print_completion ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:30:21.878 [2024-10-09 11:10:41.623774] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2968120 is same with the state(6) to be set
00:30:21.879 [2024-10-09 11:10:41.625056-626207] [... 64 repeated NOTICE pairs elided: nvme_qpair.c: 243:nvme_io_qpair_print_command READ sqid:1 cid:0 through cid:63 nsid:1, lba:16384 through lba:24448 in steps of 128, len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each followed by nvme_qpair.c: 474:spdk_nvme_print_completion ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:30:21.880 [2024-10-09 11:10:41.626215] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x29696a0 is same with the state(6) to be set
00:30:21.880 [2024-10-09 11:10:41.629124] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:30:21.880 [2024-10-09 11:10:41.629163] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:30:21.880 [2024-10-09 11:10:41.629176] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:30:21.880 [2024-10-09 11:10:41.629186] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:30:21.880 [2024-10-09 11:10:41.629230] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state
00:30:21.880 [2024-10-09 11:10:41.629239] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed
00:30:21.880 [2024-10-09 11:10:41.629249] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state.
00:30:21.880 [2024-10-09 11:10:41.629312] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:30:21.880 [2024-10-09 11:10:41.629330] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:30:21.880 [2024-10-09 11:10:41.629344] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:30:21.880 [2024-10-09 11:10:41.629356] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:30:21.880 [2024-10-09 11:10:41.646185] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:30:21.880 [2024-10-09 11:10:41.646213] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:30:21.880 task offset: 29824 on job bdev=Nvme2n1 fails
00:30:21.880
00:30:21.880 Latency(us)
00:30:21.880 [2024-10-09T09:10:41.882Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:21.880 (all jobs: Core Mask 0x1, workload: verify, depth: 64, IO size: 65536; Verification LBA range: start 0x0 length 0x400; each job ended in about the runtime shown, with error)
00:30:21.880 Nvme1n1  : 0.97 198.49 12.41 66.16 0.00 239149.16 19049.89 246991.67
00:30:21.880 Nvme2n1  : 0.96 199.07 12.44 66.36 0.00 233570.28 17079.21 253998.53
00:30:21.880 Nvme3n1  : 0.98 195.04 12.19 65.01 0.00 233666.29 14999.05 227722.82
00:30:21.880 Nvme4n1  : 0.99 199.62 12.48 64.85 0.00 225072.81 16531.80 250495.10
00:30:21.880 Nvme5n1  : 0.99 129.38  8.09 64.69 0.00 300394.15 19925.75 264508.81
00:30:21.880 Nvme6n1  : 0.99 193.60 12.10 64.53 0.00 220920.82 20473.16 248743.39
00:30:21.880 Nvme7n1  : 0.97 198.22 12.39 66.07 0.00 210337.56  6924.74 253998.53
00:30:21.880 Nvme8n1  : 1.00 191.88 11.99 63.96 0.00 213451.65 12754.67 248743.39
00:30:21.880 Nvme9n1  : 1.00 127.61  7.98 63.80 0.00 279010.06 16312.84 271515.67
00:30:21.881 Nvme10n1 : 1.01 127.30  7.96 63.65 0.00 273490.34 19597.30 248743.39
00:30:21.881 [2024-10-09T09:10:41.883Z] ===================================================================================================================
00:30:21.881 [2024-10-09T09:10:41.883Z] Total    : 1760.21 110.01 649.10 0.00 239519.71 6924.74 271515.67
00:30:21.881 [2024-10-09 11:10:41.670852] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:30:21.881 [2024-10-09 11:10:41.670882] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:30:21.881 [2024-10-09 11:10:41.670896] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:21.881 [2024-10-09 11:10:41.671238-672170] [... 4 repeated connect sequences elided: posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error; nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: recv state is same with the state(6) to be set; for tqpair=0x2562190, 0x298eca0, 0x2562c00, 0x2985690, all with addr=10.0.0.2, port=4420 ...]
00:30:21.881 [2024-10-09 11:10:41.672193] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:30:21.881 [2024-10-09 11:10:41.672205] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
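The MiB/s column above is implied by the IOPS column and the fixed 64 KiB (65536-byte) IO size shown in the job headers. A minimal awk check, assuming only those two figures from the Total row:

  # Throughput = IOPS x IO size; 1760.21 IOPS at 64 KiB per IO
  awk 'BEGIN { iops = 1760.21; io = 65536; printf "%.2f MiB/s\n", iops * io / (1024 * 1024) }'
  # prints: 110.01 MiB/s, matching the Total row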
00:30:21.881 [2024-10-09 11:10:41.672232] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2985690 (9): Bad file descriptor
00:30:21.881 [2024-10-09 11:10:41.672250] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2562c00 (9): Bad file descriptor
00:30:21.881 [2024-10-09 11:10:41.672263] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x298eca0 (9): Bad file descriptor
00:30:21.881 [2024-10-09 11:10:41.672276] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2562190 (9): Bad file descriptor
00:30:21.881 1760.21 IOPS, 110.01 MiB/s [2024-10-09T09:10:41.883Z]
00:30:21.881 [2024-10-09 11:10:41.674113] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:30:21.881 [2024-10-09 11:10:41.674455-675169] [... 3 repeated connect sequences elided: connect() failed, errno = 111; sock connection error; recv state is same with the state(6) to be set; for tqpair=0x29a1a10, 0x29be8f0, 0x29d9fb0, all with addr=10.0.0.2, port=4420 ...]
00:30:21.881 [2024-10-09 11:10:41.675193-675249] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. (repeated 6 times)
00:30:21.881 [2024-10-09 11:10:41.675316] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:21.881 [2024-10-09 11:10:41.675327] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:30:21.881 [2024-10-09 11:10:41.675689-675711] [... connect sequence elided: connect() failed, errno = 111; sock connection error of tqpair=0x2564490 with addr=10.0.0.2, port=4420; recv state is same with the state(6) to be set ...]
00:30:21.881 [2024-10-09 11:10:41.675721] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x29a1a10 (9): Bad file descriptor
00:30:21.881 [2024-10-09 11:10:41.675732] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x29be8f0 (9): Bad file descriptor
00:30:21.881 [2024-10-09 11:10:41.675741] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x29d9fb0 (9): Bad file descriptor
00:30:21.881 [2024-10-09 11:10:41.675750-675838] [... 4 repeated error sequences elided: nvme_ctrlr.c:4193:nvme_ctrlr_process_init Ctrlr is in error state; nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async controller reinitialization failed; nvme_ctrlr.c:1106:nvme_ctrlr_fail in failed state; for nqn.2016-06.io.spdk:cnode3, cnode4, cnode5, cnode6 ...]
00:30:21.881 [2024-10-09 11:10:41.675911-675937] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. (repeated 4 times)
00:30:21.881 [2024-10-09 11:10:41.676246-676519] [... 2 repeated connect sequences elided: connect() failed, errno = 111; sock connection error; recv state is same with the state(6) to be set; for tqpair=0x25648f0, 0x246e610, both with addr=10.0.0.2, port=4420 ...]
00:30:21.881 [2024-10-09 11:10:41.676529] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2564490 (9): Bad file descriptor
00:30:21.881 [2024-10-09 11:10:41.676537-676598] [... 3 repeated error sequences elided: Ctrlr is in error state; controller reinitialization failed; in failed state; for nqn.2016-06.io.spdk:cnode8, cnode9, cnode10 ...]
00:30:21.881 [2024-10-09 11:10:41.676628-676642] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. (repeated 3 times)
00:30:21.881 [2024-10-09 11:10:41.676650] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25648f0 (9): Bad file descriptor
00:30:21.882 [2024-10-09 11:10:41.676660] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x246e610 (9): Bad file descriptor
00:30:21.882 [2024-10-09 11:10:41.676669-676682] [... error sequence elided: Ctrlr is in error state; controller reinitialization failed; in failed state; for nqn.2016-06.io.spdk:cnode2 ...]
00:30:21.882 [2024-10-09 11:10:41.676712] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:21.882 [2024-10-09 11:10:41.676719-676760] [... 2 repeated error sequences elided: Ctrlr is in error state; controller reinitialization failed; in failed state; for nqn.2016-06.io.spdk:cnode1, cnode7 ...]
00:30:21.882 [2024-10-09 11:10:41.676787-676795] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. (repeated 2 times)
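Errno 111 in the connect() failures above is ECONNREFUSED: once the target has been shut down, nothing is listening on 10.0.0.2:4420, so every reconnect attempt is refused and each controller ends in the failed state. A quick shell probe of the same condition (the address and port are taken from the log; run from the initiator side):

  # Succeeds while the NVMe-oF TCP listener is up; fails after shutdown
  if timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
      echo "listener is up"
  else
      echo "connect refused or timed out (the errno = 111 path in the log)"
  fi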
00:30:21.882 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1
00:30:22.822 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 1997743
11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@650 -- # local es=0
11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1997743
11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@638 -- # local arg=wait
11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # type -t wait
11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # wait 1997743
11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # es=255
11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@661 -- # (( es > 128 ))
11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@662 -- # es=127
11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # case "$es" in
11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@670 -- # es=1
11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@677 -- # (( !es == 0 ))
11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget
11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:30:23.083 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini
11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@514 -- # nvmfcleanup
11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync
11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e
11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20}
11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:30:23.083 rmmod nvme_tcp
00:30:23.083 rmmod nvme_fabrics
00:30:23.083 rmmod nvme_keyring
00:30:23.083 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e
11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0
11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@515 -- # '[' -n 1997360 ']'
11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # killprocess 1997360
11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 1997360 ']'
11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 1997360
00:30:23.083 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1997360) - No such process
00:30:23.083 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@977 -- # echo 'Process with pid 1997360 is not found'
00:30:23.083 Process with pid 1997360 is not found
00:30:23.083 11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # '[' '' == iso ']'
11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@522 -- # nvmf_tcp_fini
11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr
11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@789 -- # iptables-save
11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@789 -- # iptables-restore
11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns
11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
11:10:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:30:24.992 11:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:30:24.992
00:30:24.992 real 0m7.895s
00:30:24.992 user 0m19.173s
00:30:24.992 sys 0m1.251s
00:30:24.992 11:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable
00:30:24.992 11:10:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:30:24.992 ************************************
00:30:24.992 END TEST nvmf_shutdown_tc3
00:30:24.992 ************************************
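The trace above is autotest_common.sh's NOT helper inverting the exit status of "wait 1997743": bdevperf exits non-zero (255), statuses above 128 are folded to 127 and then to a generic 1 by the case statement, and NOT succeeds precisely because the command failed, so the expected failure does not trip set -e. A standalone sketch of that logic, simplified from the trace (the real helper also validates the command with valid_exec_arg):

  # NOT <cmd...> succeeds only if <cmd> fails
  NOT() {
      local es=0
      "$@" || es=$?
      if (( es > 128 )); then es=127; fi   # signal-style statuses (e.g. 255) fold to 127
      case "$es" in 0) ;; *) es=1 ;; esac  # any remaining failure folds to 1
      (( !es == 0 ))                       # return 0 exactly when the command failed
  }

  # usage: succeeds, since false fails
  NOT false && echo "command failed, as the test expects"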
00:30:25.253 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]]
11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]]
11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4
11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable
11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:30:25.253 ************************************
00:30:25.253 START TEST nvmf_shutdown_tc4
00:30:25.253 ************************************
11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc4
11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget
11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit
11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@467 -- # '[' -z tcp ']'
11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT
11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # prepare_net_devs
11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@436 -- # local -g is_hw=no
11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # remove_spdk_ns
11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # [[ phy != virt ]]
11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs
11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable
11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:30:25.253 [... xtrace lines elided, nvmf/common.sh@313-@344: declare pci_devs, pci_net_devs, pci_drivers, net_devs and populate the e810 (0x1592, 0x159b), x722 (0x37d2) and mlx (0xa2dc, 0x1021, 0xa2d6, 0x101d, 0x101b, 0x1017, 0x1019, 0x1015, 0x1013) device-ID arrays ...]
11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 ))
11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)'
Found 0000:31:00.0 (0x8086 - 0x159b)
[... xtrace lines elided, nvmf/common.sh@368-@378: driver/ID checks for 0000:31:00.0 (ice, 0x159b, tcp) ...]
11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)'
Found 0000:31:00.1 (0x8086 - 0x159b)
[... xtrace lines elided, nvmf/common.sh@368-@378: driver/ID checks for 0000:31:00.1 (ice, 0x159b, tcp) ...]
11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 ))
11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}"
11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]]
11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}"
11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ up == up ]]
11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@420 -- # (( 1 == 0 ))
11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
11:10:45
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:25.254 Found net devices under 0000:31:00.0: cvl_0_0 00:30:25.254 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:25.254 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:25.254 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:25.254 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:25.254 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:25.254 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:25.254 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:25.254 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:25.254 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:25.254 Found net devices under 0000:31:00.1: cvl_0_1 00:30:25.254 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:25.254 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:30:25.254 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # is_hw=yes 00:30:25.254 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:30:25.254 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:30:25.254 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:30:25.254 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:25.254 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:25.254 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:25.254 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:25.254 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:25.254 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:25.254 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:25.254 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:25.254 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:25.254 11:10:45 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:25.254 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:25.254 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:25.254 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:25.254 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:25.254 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:25.514 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:25.514 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:25.514 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:25.514 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:25.514 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:25.514 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:25.514 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:25.514 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:25.514 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:25.514 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.707 ms 00:30:25.514 00:30:25.514 --- 10.0.0.2 ping statistics --- 00:30:25.514 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:25.514 rtt min/avg/max/mdev = 0.707/0.707/0.707/0.000 ms 00:30:25.514 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:25.514 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:25.514 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.325 ms 00:30:25.514 00:30:25.514 --- 10.0.0.1 ping statistics --- 00:30:25.514 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:25.514 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:30:25.514 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:25.514 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@448 -- # return 0 00:30:25.514 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:30:25.514 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:25.514 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:30:25.514 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:30:25.514 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:25.514 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:30:25.514 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:30:25.514 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:30:25.514 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:30:25.514 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:25.514 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:30:25.514 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # nvmfpid=1999163 00:30:25.514 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # waitforlisten 1999163 00:30:25.514 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:30:25.514 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@831 -- # '[' -z 1999163 ']' 00:30:25.514 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:25.514 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:25.514 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:25.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
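The two pings above are the sign-off on the topology nvmf_tcp_init just built: the first e810 port (cvl_0_0, 10.0.0.2) was moved into the cvl_0_0_ns_spdk namespace to host the target, the second port (cvl_0_1, 10.0.0.1) stayed in the root namespace as the initiator, and TCP port 4420 was opened in iptables. A minimal standalone sketch of the same plumbing, assuming root privileges and a hypothetical veth pair in place of the physical ports:

  ip netns add spdk_tgt_ns                       # hypothetical namespace name
  ip link add veth0 type veth peer name veth1    # stand-ins for cvl_0_0 / cvl_0_1
  ip link set veth0 netns spdk_tgt_ns            # target-side port into the namespace
  ip addr add 10.0.0.1/24 dev veth1              # initiator side, root namespace
  ip netns exec spdk_tgt_ns ip addr add 10.0.0.2/24 dev veth0
  ip link set veth1 up
  ip netns exec spdk_tgt_ns ip link set veth0 up
  ip netns exec spdk_tgt_ns ip link set lo up
  iptables -I INPUT 1 -i veth1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port, as ipts does above
  ping -c 1 10.0.0.2                             # initiator -> target
  ip netns exec spdk_tgt_ns ping -c 1 10.0.0.1   # target -> initiator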
00:30:25.514 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:25.514 11:10:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:30:25.774 [2024-10-09 11:10:45.521380] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:30:25.774 [2024-10-09 11:10:45.521432] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:25.774 [2024-10-09 11:10:45.660410] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:25.774 [2024-10-09 11:10:45.706418] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:25.774 [2024-10-09 11:10:45.729635] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:25.774 [2024-10-09 11:10:45.729674] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:25.774 [2024-10-09 11:10:45.729680] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:25.774 [2024-10-09 11:10:45.729685] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:25.774 [2024-10-09 11:10:45.729689] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:25.774 [2024-10-09 11:10:45.731209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:25.774 [2024-10-09 11:10:45.731368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:25.774 [2024-10-09 11:10:45.731571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:25.774 [2024-10-09 11:10:45.731754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:26.372 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:26.372 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # return 0 00:30:26.372 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:30:26.372 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:26.372 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:30:26.372 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:26.372 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:26.372 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:26.372 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:30:26.372 [2024-10-09 11:10:46.371362] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:26.632 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
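shutdown.sh@21 has just created the TCP transport (-t tcp -o -u 8192, per the xtrace above); the shutdown.sh@27-36 sequence below then batches the per-subsystem RPCs into rpcs.txt and replays them in a single rpc_cmd call. A standalone equivalent, with illustrative bdev sizes, serial numbers, and NQNs (this excerpt does not show the exact values the harness writes into the batch):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192    # flags exactly as logged above
  for i in $(seq 1 10); do                        # num_subsystems={1..10}
      $RPC bdev_malloc_create -b Malloc$i 64 512                          # 64 MiB / 512 B blocks (assumed sizes)
      $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i # assumed NQN/serial scheme
      $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
      $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
  done

Because the app's RPC endpoint is the Unix socket /var/tmp/spdk.sock, these calls work from the root namespace even though nvmf_tgt itself runs inside cvl_0_0_ns_spdk.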
00:30:26.632 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:30:26.632 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:30:26.632 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:26.632 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:30:26.632 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:26.632 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:26.632 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:26.632 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:26.632 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:26.632 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:26.632 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:26.632 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:26.632 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:26.632 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:26.632 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:26.632 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:26.632 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:26.632 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:26.632 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:26.632 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:26.632 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:26.632 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:26.632 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:26.632 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:26.632 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:26.632 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:30:26.632 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:30:26.632 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:30:26.632 Malloc1 00:30:26.632 [2024-10-09 11:10:46.481872] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:26.632 Malloc2 00:30:26.632 Malloc3 00:30:26.632 Malloc4 00:30:26.632 Malloc5 00:30:26.892 Malloc6 00:30:26.892 Malloc7 00:30:26.892 Malloc8 00:30:26.892 Malloc9 00:30:26.892 Malloc10 00:30:26.892 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:26.892 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:30:26.892 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:26.892 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:30:26.892 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=1999376 00:30:26.892 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:30:26.892 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:30:27.153 [2024-10-09 11:10:47.038555] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
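At this point the target is fully provisioned (ten subsystems backed by Malloc1-Malloc10, one TCP listener on 10.0.0.2:4420) and spdk_nvme_perf is connecting as perfpid=1999376. What tc4 actually exercises is shutdown under load: shutdown.sh@150 sleeps 5 seconds while perf ramps up, then the target is killed with I/O still in flight. A sketch of that sequence, assuming $nvmfpid holds the target pid (1999163 in this run):

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
      -q 128 -o 45056 -O 4096 -w randwrite -t 20 \
      -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 &
  perfpid=$!                 # 1999376 in this run
  sleep 5                    # shutdown.sh@150: let the workload reach steady state
  kill "$nvmfpid"            # shutdown.sh@155 via killprocess: SIGTERM the target mid-I/O
  # shutdown.sh@152 installs a trap that runs 'kill -9 $perfpid || true', so a
  # wedged perf process cannot hang the test if anything fails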
00:30:32.442 11:10:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:32.442 11:10:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 1999163 00:30:32.442 11:10:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@950 -- # '[' -z 1999163 ']' 00:30:32.442 11:10:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # kill -0 1999163 00:30:32.442 11:10:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # uname 00:30:32.442 11:10:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:32.442 11:10:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1999163 00:30:32.442 11:10:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:30:32.442 11:10:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:30:32.442 11:10:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1999163' 00:30:32.442 killing process with pid 1999163 00:30:32.442 11:10:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@969 -- # kill 1999163 00:30:32.442 11:10:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@974 -- # wait 1999163 00:30:32.442 [2024-10-09 11:10:51.957356] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19751e0 is same with the state(6) to be set 00:30:32.442 [2024-10-09 11:10:51.957408] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19751e0 is same with the state(6) to be set 00:30:32.442 [2024-10-09 11:10:51.957414] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19751e0 is same with the state(6) to be set 00:30:32.442 [2024-10-09 11:10:51.957420] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19751e0 is same with the state(6) to be set 00:30:32.442 [2024-10-09 11:10:51.957426] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19751e0 is same with the state(6) to be set 00:30:32.442 [2024-10-09 11:10:51.957431] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19751e0 is same with the state(6) to be set 00:30:32.442 [2024-10-09 11:10:51.957436] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19751e0 is same with the state(6) to be set 00:30:32.442 [2024-10-09 11:10:51.957442] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19751e0 is same with the state(6) to be set 00:30:32.442 [2024-10-09 11:10:51.957842] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1975b80 is same with the state(6) to be set 00:30:32.442 [2024-10-09 11:10:51.957868] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1975b80 is same with the state(6) to be set 00:30:32.442 [2024-10-09 11:10:51.957875] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1975b80 is same with the state(6) to be set 00:30:32.442 [2024-10-09 11:10:51.957881] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1975b80 is same with the state(6) to be set 00:30:32.442 [2024-10-09 11:10:51.958372] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1974d10 is same with the state(6) to be set 00:30:32.442 [2024-10-09 11:10:51.958404] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1974d10 is same with the state(6) to be set 00:30:32.442 [2024-10-09 11:10:51.958410] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1974d10 is same with the state(6) to be set 00:30:32.442 [2024-10-09 11:10:51.958416] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1974d10 is same with the state(6) to be set 00:30:32.442 [2024-10-09 11:10:51.960312] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e1220 is same with the state(6) to be set 00:30:32.442 [2024-10-09 11:10:51.960331] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e1220 is same with the state(6) to be set 00:30:32.442 [2024-10-09 11:10:51.960336] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e1220 is same with the state(6) to be set 00:30:32.442 [2024-10-09 11:10:51.960342] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e1220 is same with the state(6) to be set 00:30:32.442 [2024-10-09 11:10:51.960347] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e1220 is same with the state(6) to be set 00:30:32.442 [2024-10-09 11:10:51.960353] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e1220 is same with the state(6) to be set 00:30:32.442 [2024-10-09 11:10:51.960358] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e1220 is same with the state(6) to be set 00:30:32.442 [2024-10-09 11:10:51.960362] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e1220 is same with the state(6) to be set 00:30:32.442 [2024-10-09 11:10:51.960368] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e1220 is same with the state(6) to be set 00:30:32.442 [2024-10-09 11:10:51.960373] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e1220 is same with the state(6) to be set 00:30:32.442 [2024-10-09 11:10:51.960378] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e1220 is same with the state(6) to be set 00:30:32.442 [2024-10-09 11:10:51.960806] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1702350 is same with Write completed with error (sct=0, sc=8) 00:30:32.442 the state(6) to be set 00:30:32.442 [2024-10-09 11:10:51.960829] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1702350 is same with the state(6) to be set 00:30:32.442 [2024-10-09 11:10:51.960835] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1702350 is same with the state(6) to be set 00:30:32.442 Write completed with error (sct=0, sc=8) 00:30:32.442 [2024-10-09 11:10:51.960840] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1702350 is same with the state(6) to be set 00:30:32.442 [2024-10-09 11:10:51.960845] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1702350 is same with the state(6) to be set 00:30:32.442 [2024-10-09 11:10:51.960851] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1702350 is same with the state(6) to be set 00:30:32.442 Write completed with error (sct=0, sc=8) 00:30:32.442 [2024-10-09 11:10:51.960856] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1702350 is same with the state(6) to be set 00:30:32.442 [2024-10-09 11:10:51.960862] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1702350 is same with the state(6) to be set 00:30:32.442 Write completed with error (sct=0, sc=8) 00:30:32.442 starting I/O failed: -6 00:30:32.442 Write completed with error (sct=0, sc=8) 00:30:32.442 [2024-10-09 11:10:51.960897] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e0d50 is same with the state(6) to be set 00:30:32.442 Write completed with error (sct=0, sc=8) 00:30:32.442 [2024-10-09 11:10:51.960914] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e0d50 is same with the state(6) to be set 00:30:32.442 [2024-10-09 11:10:51.960920] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e0d50 is same with the state(6) to be set 00:30:32.442 Write completed with error (sct=0, sc=8) 00:30:32.442 Write completed with error (sct=0, sc=8) 00:30:32.442 starting I/O failed: -6 00:30:32.442 Write completed with error (sct=0, sc=8) 00:30:32.442 Write completed with error (sct=0, sc=8) 00:30:32.442 Write completed with error (sct=0, sc=8) 00:30:32.442 Write completed with error (sct=0, sc=8) 00:30:32.442 starting I/O failed: -6 00:30:32.442 Write completed with error (sct=0, sc=8) 00:30:32.442 Write completed with error (sct=0, sc=8) 00:30:32.442 Write completed with error (sct=0, sc=8) 00:30:32.442 Write completed with error (sct=0, sc=8) 00:30:32.442 starting I/O failed: -6 00:30:32.442 Write completed with error (sct=0, sc=8) 00:30:32.442 Write completed with error (sct=0, sc=8) 00:30:32.442 Write completed with error (sct=0, sc=8) 00:30:32.442 Write completed with error (sct=0, sc=8) 00:30:32.442 [2024-10-09 11:10:51.961145] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17010e0 is same with the state(6) to be set 00:30:32.442 starting I/O failed: -6 00:30:32.442 [2024-10-09 11:10:51.961158] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17010e0 is same with the state(6) to be set 00:30:32.442 Write completed with error (sct=0, sc=8) 00:30:32.442 [2024-10-09 11:10:51.961163] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17010e0 is same with the state(6) to be set 00:30:32.442 [2024-10-09 11:10:51.961169] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17010e0 is same with the state(6) to be set 00:30:32.442 [2024-10-09 11:10:51.961174] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17010e0 is same with the state(6) to be set 00:30:32.442 Write completed with error (sct=0, sc=8) 00:30:32.442 [2024-10-09 11:10:51.961180] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17010e0 is same with the state(6) to be set 00:30:32.442 [2024-10-09 11:10:51.961185] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17010e0 is same with the state(6) to be set 00:30:32.442 [2024-10-09 11:10:51.961191] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x17010e0 is same with the state(6) to be set 00:30:32.442 [2024-10-09 11:10:51.961196] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17010e0 is same with Write completed with error (sct=0, sc=8) 00:30:32.442 the state(6) to be set 00:30:32.442 [2024-10-09 11:10:51.961202] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17010e0 is same with the state(6) to be set 00:30:32.442 [2024-10-09 11:10:51.961206] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17010e0 is same with the state(6) to be set 00:30:32.442 [2024-10-09 11:10:51.961211] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17010e0 is same with the state(6) to be set 00:30:32.442 Write completed with error (sct=0, sc=8) 00:30:32.442 [2024-10-09 11:10:51.961216] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17010e0 is same with the state(6) to be set 00:30:32.442 starting I/O failed: -6 00:30:32.442 [2024-10-09 11:10:51.961221] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17010e0 is same with the state(6) to be set 00:30:32.442 [2024-10-09 11:10:51.961227] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17010e0 is same with the state(6) to be set 00:30:32.442 Write completed with error (sct=0, sc=8) 00:30:32.442 [2024-10-09 11:10:51.961232] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17010e0 is same with the state(6) to be set 00:30:32.442 [2024-10-09 11:10:51.961237] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17010e0 is same with the state(6) to be set 00:30:32.442 [2024-10-09 11:10:51.961242] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17010e0 is same with the state(6) to be set 00:30:32.442 Write completed with error (sct=0, sc=8) 00:30:32.442 [2024-10-09 11:10:51.961247] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17010e0 is same with the state(6) to be set 00:30:32.443 Write completed with error (sct=0, sc=8) 00:30:32.443 Write completed with error (sct=0, sc=8) 00:30:32.443 starting I/O failed: -6 00:30:32.443 Write completed with error (sct=0, sc=8) 00:30:32.443 Write completed with error (sct=0, sc=8) 00:30:32.443 Write completed with error (sct=0, sc=8) 00:30:32.443 Write completed with error (sct=0, sc=8) 00:30:32.443 starting I/O failed: -6 00:30:32.443 [2024-10-09 11:10:51.961392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:32.443 [2024-10-09 11:10:51.961483] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17015d0 is same with the state(6) to be set 00:30:32.443 [2024-10-09 11:10:51.961507] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17015d0 is same with the state(6) to be set 00:30:32.443 [2024-10-09 11:10:51.961515] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17015d0 is same with the state(6) to be set 00:30:32.443 [2024-10-09 11:10:51.961520] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17015d0 is same with the state(6) to be set 00:30:32.443 [2024-10-09 11:10:51.961525] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17015d0 is same with the state(6) to be set 00:30:32.443 [2024-10-09 11:10:51.961530] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17015d0 is same with the state(6) to be set 00:30:32.443 [2024-10-09 11:10:51.961535] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17015d0 is same with the state(6) to be set 00:30:32.443 [2024-10-09 11:10:51.961540] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17015d0 is same with the state(6) to be set 00:30:32.443 [2024-10-09 11:10:51.961751] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1701ac0 is same with the state(6) to be set 00:30:32.443 [2024-10-09 11:10:51.961763] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1701ac0 is same with the state(6) to be set 00:30:32.443 [2024-10-09 11:10:51.961770] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1701ac0 is same with the state(6) to be set 00:30:32.443 [2024-10-09 11:10:51.961776] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1701ac0 is same with the state(6) to be set 00:30:32.443 [2024-10-09 11:10:51.961781] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1701ac0 is same with the state(6) to be set 00:30:32.443 [2024-10-09 11:10:51.961786] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1701ac0 is same with the state(6) to be set 00:30:32.443 [2024-10-09 11:10:51.961791] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1701ac0 is same with the state(6) to be set 00:30:32.443 [2024-10-09 11:10:51.961796] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1701ac0 is same with the state(6) to be set 00:30:32.443 starting I/O failed: -6 00:30:32.443 starting I/O failed: -6 00:30:32.443 [2024-10-09 11:10:51.962028] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1700bf0 is same with the state(6) to be set 00:30:32.443 [2024-10-09 11:10:51.962042] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1700bf0 is same with the state(6) to be set 00:30:32.443 [2024-10-09 11:10:51.962047] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1700bf0 is same with the state(6) to be set 00:30:32.443 [2024-10-09 11:10:51.962052] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1700bf0 is same with the state(6) to be set 00:30:32.443 [2024-10-09 11:10:51.962057] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1700bf0 is same with the state(6) to be set 00:30:32.443 [2024-10-09 11:10:51.962062] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1700bf0 is same with the state(6) to be set 00:30:32.443 [2024-10-09 11:10:51.962067] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1700bf0 is same with the state(6) to be set 00:30:32.443 [2024-10-09 11:10:51.962072] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1700bf0 is same with the state(6) to be set 00:30:32.443 [2024-10-09 11:10:51.962077] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1700bf0 is same with the state(6) to be set 00:30:32.443 Write completed with error (sct=0, sc=8) 00:30:32.443 Write completed with error (sct=0, sc=8) 00:30:32.443 starting I/O failed: -6 00:30:32.443 Write completed with error (sct=0, sc=8) 00:30:32.443 Write completed with error 
(sct=0, sc=8) 00:30:32.443 starting I/O failed: -6 00:30:32.443 Write completed with error (sct=0, sc=8) 00:30:32.443 starting I/O failed: -6 00:30:32.443 Write completed with error (sct=0, sc=8) 00:30:32.444 starting I/O failed: -6 00:30:32.444 Write completed with error (sct=0, sc=8) 00:30:32.444 [2024-10-09 11:10:51.964656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:32.444 NVMe io qpair process completion error 00:30:32.444 Write completed with error (sct=0, sc=8) 00:30:32.444 starting I/O failed: -6 00:30:32.444 Write completed with error (sct=0, sc=8) 00:30:32.444 [2024-10-09 11:10:51.965973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.444 starting I/O failed: -6 00:30:32.444 Write completed with error (sct=0, sc=8) 00:30:32.444 [2024-10-09 11:10:51.966909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:32.444 Write completed with error (sct=0, sc=8) 00:30:32.445 starting I/O failed: -6 00:30:32.445 Write completed with error (sct=0, sc=8) 00:30:32.445 [2024-10-09 11:10:51.967852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:32.445 Write completed with error (sct=0, sc=8) 00:30:32.445 starting I/O failed: -6 00:30:32.445
Write completed with error (sct=0, sc=8) 00:30:32.445 starting I/O failed: -6 00:30:32.445 Write completed with error (sct=0, sc=8) 00:30:32.445 starting I/O failed: -6 00:30:32.445 Write completed with error (sct=0, sc=8) 00:30:32.445 starting I/O failed: -6 00:30:32.445 Write completed with error (sct=0, sc=8) 00:30:32.445 starting I/O failed: -6 00:30:32.445 Write completed with error (sct=0, sc=8) 00:30:32.445 starting I/O failed: -6 00:30:32.445 Write completed with error (sct=0, sc=8) 00:30:32.445 starting I/O failed: -6 00:30:32.445 Write completed with error (sct=0, sc=8) 00:30:32.445 starting I/O failed: -6 00:30:32.445 Write completed with error (sct=0, sc=8) 00:30:32.445 starting I/O failed: -6 00:30:32.445 Write completed with error (sct=0, sc=8) 00:30:32.445 starting I/O failed: -6 00:30:32.445 Write completed with error (sct=0, sc=8) 00:30:32.445 starting I/O failed: -6 00:30:32.445 Write completed with error (sct=0, sc=8) 00:30:32.445 starting I/O failed: -6 00:30:32.445 Write completed with error (sct=0, sc=8) 00:30:32.445 starting I/O failed: -6 00:30:32.445 Write completed with error (sct=0, sc=8) 00:30:32.445 starting I/O failed: -6 00:30:32.445 Write completed with error (sct=0, sc=8) 00:30:32.445 starting I/O failed: -6 00:30:32.445 Write completed with error (sct=0, sc=8) 00:30:32.445 starting I/O failed: -6 00:30:32.445 Write completed with error (sct=0, sc=8) 00:30:32.445 starting I/O failed: -6 00:30:32.445 Write completed with error (sct=0, sc=8) 00:30:32.445 starting I/O failed: -6 00:30:32.445 Write completed with error (sct=0, sc=8) 00:30:32.445 starting I/O failed: -6 00:30:32.445 Write completed with error (sct=0, sc=8) 00:30:32.445 starting I/O failed: -6 00:30:32.445 Write completed with error (sct=0, sc=8) 00:30:32.445 starting I/O failed: -6 00:30:32.445 Write completed with error (sct=0, sc=8) 00:30:32.445 starting I/O failed: -6 00:30:32.445 Write completed with error (sct=0, sc=8) 00:30:32.445 starting I/O failed: -6 00:30:32.445 Write completed with error (sct=0, sc=8) 00:30:32.445 starting I/O failed: -6 00:30:32.445 Write completed with error (sct=0, sc=8) 00:30:32.445 starting I/O failed: -6 00:30:32.445 Write completed with error (sct=0, sc=8) 00:30:32.445 starting I/O failed: -6 00:30:32.445 Write completed with error (sct=0, sc=8) 00:30:32.445 starting I/O failed: -6 00:30:32.445 Write completed with error (sct=0, sc=8) 00:30:32.445 starting I/O failed: -6 00:30:32.445 Write completed with error (sct=0, sc=8) 00:30:32.445 starting I/O failed: -6 00:30:32.445 Write completed with error (sct=0, sc=8) 00:30:32.445 starting I/O failed: -6 00:30:32.445 Write completed with error (sct=0, sc=8) 00:30:32.445 starting I/O failed: -6 00:30:32.445 Write completed with error (sct=0, sc=8) 00:30:32.445 starting I/O failed: -6 00:30:32.445 Write completed with error (sct=0, sc=8) 00:30:32.445 starting I/O failed: -6 00:30:32.445 Write completed with error (sct=0, sc=8) 00:30:32.445 starting I/O failed: -6 00:30:32.445 Write completed with error (sct=0, sc=8) 00:30:32.445 starting I/O failed: -6 00:30:32.445 Write completed with error (sct=0, sc=8) 00:30:32.445 starting I/O failed: -6 00:30:32.445 Write completed with error (sct=0, sc=8) 00:30:32.445 starting I/O failed: -6 00:30:32.445 Write completed with error (sct=0, sc=8) 00:30:32.445 starting I/O failed: -6 00:30:32.445 Write completed with error (sct=0, sc=8) 00:30:32.445 starting I/O failed: -6 00:30:32.445 Write completed with error (sct=0, sc=8) 00:30:32.445 starting I/O failed: -6 00:30:32.445 Write 
completed with error (sct=0, sc=8) 00:30:32.445 starting I/O failed: -6 00:30:32.445 Write completed with error (sct=0, sc=8) 00:30:32.445 starting I/O failed: -6 00:30:32.445 Write completed with error (sct=0, sc=8) 00:30:32.445 starting I/O failed: -6 00:30:32.445 Write completed with error (sct=0, sc=8) 00:30:32.445 starting I/O failed: -6 00:30:32.445 Write completed with error (sct=0, sc=8) 00:30:32.445 starting I/O failed: -6 00:30:32.445 Write completed with error (sct=0, sc=8) 00:30:32.445 starting I/O failed: -6 00:30:32.445 Write completed with error (sct=0, sc=8) 00:30:32.445 starting I/O failed: -6 00:30:32.445 Write completed with error (sct=0, sc=8) 00:30:32.445 starting I/O failed: -6 00:30:32.445 Write completed with error (sct=0, sc=8) 00:30:32.445 starting I/O failed: -6 00:30:32.445 Write completed with error (sct=0, sc=8) 00:30:32.445 starting I/O failed: -6 00:30:32.445 Write completed with error (sct=0, sc=8) 00:30:32.445 starting I/O failed: -6 00:30:32.445 Write completed with error (sct=0, sc=8) 00:30:32.445 starting I/O failed: -6 00:30:32.445 Write completed with error (sct=0, sc=8) 00:30:32.445 starting I/O failed: -6 00:30:32.445 Write completed with error (sct=0, sc=8) 00:30:32.445 starting I/O failed: -6 00:30:32.445 Write completed with error (sct=0, sc=8) 00:30:32.445 starting I/O failed: -6 00:30:32.445 Write completed with error (sct=0, sc=8) 00:30:32.445 starting I/O failed: -6 00:30:32.445 Write completed with error (sct=0, sc=8) 00:30:32.445 starting I/O failed: -6 00:30:32.445 [2024-10-09 11:10:51.969475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:32.445 NVMe io qpair process completion error 00:30:32.445 Write completed with error (sct=0, sc=8) 00:30:32.445 Write completed with error (sct=0, sc=8) 00:30:32.445 Write completed with error (sct=0, sc=8) 00:30:32.445 starting I/O failed: -6 00:30:32.445 Write completed with error (sct=0, sc=8) 00:30:32.445 Write completed with error (sct=0, sc=8) 00:30:32.445 Write completed with error (sct=0, sc=8) 00:30:32.445 Write completed with error (sct=0, sc=8) 00:30:32.445 starting I/O failed: -6 00:30:32.445 Write completed with error (sct=0, sc=8) 00:30:32.445 Write completed with error (sct=0, sc=8) 00:30:32.445 Write completed with error (sct=0, sc=8) 00:30:32.445 Write completed with error (sct=0, sc=8) 00:30:32.445 starting I/O failed: -6 00:30:32.445 Write completed with error (sct=0, sc=8) 00:30:32.445 Write completed with error (sct=0, sc=8) 00:30:32.445 Write completed with error (sct=0, sc=8) 00:30:32.445 Write completed with error (sct=0, sc=8) 00:30:32.445 starting I/O failed: -6 00:30:32.445 Write completed with error (sct=0, sc=8) 00:30:32.445 Write completed with error (sct=0, sc=8) 00:30:32.445 Write completed with error (sct=0, sc=8) 00:30:32.445 Write completed with error (sct=0, sc=8) 00:30:32.445 starting I/O failed: -6 00:30:32.445 Write completed with error (sct=0, sc=8) 00:30:32.445 Write completed with error (sct=0, sc=8) 00:30:32.445 Write completed with error (sct=0, sc=8) 00:30:32.445 Write completed with error (sct=0, sc=8) 00:30:32.445 starting I/O failed: -6 00:30:32.445 Write completed with error (sct=0, sc=8) 00:30:32.445 Write completed with error (sct=0, sc=8) 00:30:32.445 Write completed with error (sct=0, sc=8) 00:30:32.445 Write completed with error (sct=0, sc=8) 00:30:32.445 starting I/O failed: -6 00:30:32.445 Write completed with error (sct=0, sc=8) 00:30:32.445 Write completed with 
error (sct=0, sc=8) 00:30:32.445 Write completed with error (sct=0, sc=8) 00:30:32.445 Write completed with error (sct=0, sc=8) 00:30:32.445 starting I/O failed: -6 00:30:32.445 Write completed with error (sct=0, sc=8) 00:30:32.445 Write completed with error (sct=0, sc=8) 00:30:32.445 Write completed with error (sct=0, sc=8) 00:30:32.445 Write completed with error (sct=0, sc=8) 00:30:32.445 starting I/O failed: -6 00:30:32.445 Write completed with error (sct=0, sc=8) 00:30:32.445 Write completed with error (sct=0, sc=8) 00:30:32.445 Write completed with error (sct=0, sc=8) 00:30:32.445 Write completed with error (sct=0, sc=8) 00:30:32.445 starting I/O failed: -6 00:30:32.445 Write completed with error (sct=0, sc=8) 00:30:32.445 Write completed with error (sct=0, sc=8) 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 [2024-10-09 11:10:51.970758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:32.446 starting I/O failed: -6 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 starting I/O failed: -6 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 starting I/O failed: -6 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 starting I/O failed: -6 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 starting I/O failed: -6 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 starting I/O failed: -6 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 starting I/O failed: -6 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 starting I/O failed: -6 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 starting I/O failed: -6 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 starting I/O failed: -6 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 starting I/O failed: -6 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 starting I/O failed: -6 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 starting I/O failed: -6 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 starting I/O failed: -6 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 starting I/O failed: -6 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 starting I/O failed: -6 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 starting I/O failed: -6 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 starting I/O failed: -6 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 Write completed with error (sct=0, 
sc=8) 00:30:32.446 starting I/O failed: -6 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 starting I/O failed: -6 00:30:32.446 [2024-10-09 11:10:51.971557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 starting I/O failed: -6 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 starting I/O failed: -6 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 starting I/O failed: -6 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 starting I/O failed: -6 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 starting I/O failed: -6 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 starting I/O failed: -6 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 starting I/O failed: -6 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 starting I/O failed: -6 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 starting I/O failed: -6 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 starting I/O failed: -6 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 starting I/O failed: -6 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 starting I/O failed: -6 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 starting I/O failed: -6 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 starting I/O failed: -6 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 starting I/O failed: -6 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 starting I/O failed: -6 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 starting I/O failed: -6 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 starting I/O failed: -6 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 starting I/O failed: -6 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 starting I/O failed: -6 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 starting I/O failed: -6 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 starting I/O failed: -6 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 starting I/O failed: -6 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 starting I/O failed: -6 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 starting I/O failed: -6 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 starting I/O failed: -6 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 starting I/O failed: -6 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 starting I/O failed: -6 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 starting I/O failed: -6 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 
Write completed with error (sct=0, sc=8) 00:30:32.446 starting I/O failed: -6 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 starting I/O failed: -6 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 starting I/O failed: -6 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 starting I/O failed: -6 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 starting I/O failed: -6 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 starting I/O failed: -6 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 starting I/O failed: -6 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 starting I/O failed: -6 00:30:32.446 [2024-10-09 11:10:51.972478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 starting I/O failed: -6 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 starting I/O failed: -6 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 starting I/O failed: -6 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 starting I/O failed: -6 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 starting I/O failed: -6 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 starting I/O failed: -6 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 starting I/O failed: -6 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 starting I/O failed: -6 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 starting I/O failed: -6 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 starting I/O failed: -6 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 starting I/O failed: -6 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 starting I/O failed: -6 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 starting I/O failed: -6 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 starting I/O failed: -6 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 starting I/O failed: -6 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 starting I/O failed: -6 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 starting I/O failed: -6 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 starting I/O failed: -6 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 starting I/O failed: -6 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 starting I/O failed: -6 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 starting I/O failed: -6 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 starting I/O failed: -6 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 starting I/O failed: -6 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 starting I/O failed: -6 00:30:32.446 Write completed with error (sct=0, sc=8) 00:30:32.446 starting I/O failed: -6 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 starting I/O failed: -6 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 starting I/O failed: -6 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 starting I/O failed: -6 00:30:32.447 
Write completed with error (sct=0, sc=8) 00:30:32.447 starting I/O failed: -6 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 starting I/O failed: -6 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 starting I/O failed: -6 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 starting I/O failed: -6 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 starting I/O failed: -6 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 starting I/O failed: -6 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 starting I/O failed: -6 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 starting I/O failed: -6 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 starting I/O failed: -6 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 starting I/O failed: -6 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 starting I/O failed: -6 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 starting I/O failed: -6 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 starting I/O failed: -6 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 starting I/O failed: -6 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 starting I/O failed: -6 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 starting I/O failed: -6 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 starting I/O failed: -6 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 starting I/O failed: -6 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 starting I/O failed: -6 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 starting I/O failed: -6 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 starting I/O failed: -6 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 starting I/O failed: -6 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 starting I/O failed: -6 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 starting I/O failed: -6 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 starting I/O failed: -6 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 starting I/O failed: -6 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 starting I/O failed: -6 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 starting I/O failed: -6 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 starting I/O failed: -6 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 starting I/O failed: -6 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 starting I/O failed: -6 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 starting I/O failed: -6 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 starting I/O failed: -6 00:30:32.447 [2024-10-09 11:10:51.977998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:32.447 NVMe io qpair process completion error 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 starting I/O failed: -6 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 Write completed with 
error (sct=0, sc=8) 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 starting I/O failed: -6 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 starting I/O failed: -6 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 starting I/O failed: -6 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 starting I/O failed: -6 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 starting I/O failed: -6 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 starting I/O failed: -6 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 starting I/O failed: -6 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 starting I/O failed: -6 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 starting I/O failed: -6 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 [2024-10-09 11:10:51.979222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:32.447 starting I/O failed: -6 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 starting I/O failed: -6 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 starting I/O failed: -6 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 starting I/O failed: -6 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 starting I/O failed: -6 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 starting I/O failed: -6 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 starting I/O failed: -6 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 starting I/O failed: -6 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 Write completed with 
error (sct=0, sc=8) 00:30:32.447 starting I/O failed: -6 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 starting I/O failed: -6 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 starting I/O failed: -6 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 starting I/O failed: -6 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 starting I/O failed: -6 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 starting I/O failed: -6 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 starting I/O failed: -6 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 starting I/O failed: -6 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 starting I/O failed: -6 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 starting I/O failed: -6 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 starting I/O failed: -6 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 starting I/O failed: -6 00:30:32.447 [2024-10-09 11:10:51.980021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 starting I/O failed: -6 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 starting I/O failed: -6 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 starting I/O failed: -6 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 starting I/O failed: -6 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 starting I/O failed: -6 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 starting I/O failed: -6 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 starting I/O failed: -6 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 starting I/O failed: -6 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 starting I/O failed: -6 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 starting I/O failed: -6 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 starting I/O failed: -6 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 starting I/O failed: -6 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 starting I/O failed: -6 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 starting I/O failed: -6 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 starting I/O failed: -6 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 starting I/O failed: -6 
00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 starting I/O failed: -6 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.447 starting I/O failed: -6 00:30:32.447 Write completed with error (sct=0, sc=8) 00:30:32.448 starting I/O failed: -6 00:30:32.448 Write completed with error (sct=0, sc=8) 00:30:32.448 starting I/O failed: -6 00:30:32.448 Write completed with error (sct=0, sc=8) 00:30:32.448 Write completed with error (sct=0, sc=8) 00:30:32.448 starting I/O failed: -6 00:30:32.448 Write completed with error (sct=0, sc=8) 00:30:32.448 starting I/O failed: -6 00:30:32.448 Write completed with error (sct=0, sc=8) 00:30:32.448 starting I/O failed: -6 00:30:32.448 Write completed with error (sct=0, sc=8) 00:30:32.448 Write completed with error (sct=0, sc=8) 00:30:32.448 starting I/O failed: -6 00:30:32.448 Write completed with error (sct=0, sc=8) 00:30:32.448 starting I/O failed: -6 00:30:32.448 Write completed with error (sct=0, sc=8) 00:30:32.448 starting I/O failed: -6 00:30:32.448 Write completed with error (sct=0, sc=8) 00:30:32.448 Write completed with error (sct=0, sc=8) 00:30:32.448 starting I/O failed: -6 00:30:32.448 Write completed with error (sct=0, sc=8) 00:30:32.448 starting I/O failed: -6 00:30:32.448 Write completed with error (sct=0, sc=8) 00:30:32.448 starting I/O failed: -6 00:30:32.448 Write completed with error (sct=0, sc=8) 00:30:32.448 Write completed with error (sct=0, sc=8) 00:30:32.448 starting I/O failed: -6 00:30:32.448 Write completed with error (sct=0, sc=8) 00:30:32.448 starting I/O failed: -6 00:30:32.448 Write completed with error (sct=0, sc=8) 00:30:32.448 starting I/O failed: -6 00:30:32.448 Write completed with error (sct=0, sc=8) 00:30:32.448 Write completed with error (sct=0, sc=8) 00:30:32.448 starting I/O failed: -6 00:30:32.448 Write completed with error (sct=0, sc=8) 00:30:32.448 starting I/O failed: -6 00:30:32.448 Write completed with error (sct=0, sc=8) 00:30:32.448 starting I/O failed: -6 00:30:32.448 Write completed with error (sct=0, sc=8) 00:30:32.448 Write completed with error (sct=0, sc=8) 00:30:32.448 starting I/O failed: -6 00:30:32.448 Write completed with error (sct=0, sc=8) 00:30:32.448 starting I/O failed: -6 00:30:32.448 [2024-10-09 11:10:51.980947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.448 starting I/O failed: -6 00:30:32.448 starting I/O failed: -6 00:30:32.448 starting I/O failed: -6 00:30:32.448 starting I/O failed: -6 00:30:32.448 starting I/O failed: -6 00:30:32.448 starting I/O failed: -6 00:30:32.448 starting I/O failed: -6 00:30:32.448 Write completed with error (sct=0, sc=8) 00:30:32.448 starting I/O failed: -6 00:30:32.448 Write completed with error (sct=0, sc=8) 00:30:32.448 starting I/O failed: -6 00:30:32.448 Write completed with error (sct=0, sc=8) 00:30:32.448 starting I/O failed: -6 00:30:32.448 Write completed with error (sct=0, sc=8) 00:30:32.448 starting I/O failed: -6 00:30:32.448 Write completed with error (sct=0, sc=8) 00:30:32.448 starting I/O failed: -6 00:30:32.448 Write completed with error (sct=0, sc=8) 00:30:32.448 starting I/O failed: -6 00:30:32.448 Write completed with error (sct=0, sc=8) 00:30:32.448 starting I/O failed: -6 00:30:32.448 Write completed with error (sct=0, sc=8) 00:30:32.448 starting I/O failed: -6 00:30:32.448 Write completed with error (sct=0, sc=8) 00:30:32.448 starting I/O failed: -6 00:30:32.448 
Write completed with error (sct=0, sc=8) 00:30:32.448 starting I/O failed: -6 00:30:32.448 Write completed with error (sct=0, sc=8) 00:30:32.448 starting I/O failed: -6 00:30:32.448 Write completed with error (sct=0, sc=8) 00:30:32.448 starting I/O failed: -6 00:30:32.448 Write completed with error (sct=0, sc=8) 00:30:32.448 starting I/O failed: -6 00:30:32.448 Write completed with error (sct=0, sc=8) 00:30:32.448 starting I/O failed: -6 00:30:32.448 Write completed with error (sct=0, sc=8) 00:30:32.448 starting I/O failed: -6 00:30:32.448 Write completed with error (sct=0, sc=8) 00:30:32.448 starting I/O failed: -6 00:30:32.448 Write completed with error (sct=0, sc=8) 00:30:32.448 starting I/O failed: -6 00:30:32.448 Write completed with error (sct=0, sc=8) 00:30:32.448 starting I/O failed: -6 00:30:32.448 Write completed with error (sct=0, sc=8) 00:30:32.448 starting I/O failed: -6 00:30:32.448 Write completed with error (sct=0, sc=8) 00:30:32.448 starting I/O failed: -6 00:30:32.448 Write completed with error (sct=0, sc=8) 00:30:32.448 starting I/O failed: -6 00:30:32.448 Write completed with error (sct=0, sc=8) 00:30:32.448 starting I/O failed: -6 00:30:32.448 Write completed with error (sct=0, sc=8) 00:30:32.448 starting I/O failed: -6 00:30:32.448 Write completed with error (sct=0, sc=8) 00:30:32.448 starting I/O failed: -6 00:30:32.448 Write completed with error (sct=0, sc=8) 00:30:32.448 starting I/O failed: -6 00:30:32.448 Write completed with error (sct=0, sc=8) 00:30:32.448 starting I/O failed: -6 00:30:32.448 Write completed with error (sct=0, sc=8) 00:30:32.448 starting I/O failed: -6 00:30:32.448 Write completed with error (sct=0, sc=8) 00:30:32.448 starting I/O failed: -6 00:30:32.448 Write completed with error (sct=0, sc=8) 00:30:32.448 starting I/O failed: -6 00:30:32.448 Write completed with error (sct=0, sc=8) 00:30:32.448 starting I/O failed: -6 00:30:32.448 Write completed with error (sct=0, sc=8) 00:30:32.448 starting I/O failed: -6 00:30:32.448 Write completed with error (sct=0, sc=8) 00:30:32.448 starting I/O failed: -6 00:30:32.448 Write completed with error (sct=0, sc=8) 00:30:32.448 starting I/O failed: -6 00:30:32.448 Write completed with error (sct=0, sc=8) 00:30:32.448 starting I/O failed: -6 00:30:32.448 Write completed with error (sct=0, sc=8) 00:30:32.448 starting I/O failed: -6 00:30:32.448 Write completed with error (sct=0, sc=8) 00:30:32.448 starting I/O failed: -6 00:30:32.448 Write completed with error (sct=0, sc=8) 00:30:32.448 starting I/O failed: -6 00:30:32.448 Write completed with error (sct=0, sc=8) 00:30:32.448 starting I/O failed: -6 00:30:32.448 Write completed with error (sct=0, sc=8) 00:30:32.448 starting I/O failed: -6 00:30:32.448 Write completed with error (sct=0, sc=8) 00:30:32.448 starting I/O failed: -6 00:30:32.448 Write completed with error (sct=0, sc=8) 00:30:32.448 starting I/O failed: -6 00:30:32.448 Write completed with error (sct=0, sc=8) 00:30:32.448 starting I/O failed: -6 00:30:32.448 Write completed with error (sct=0, sc=8) 00:30:32.448 starting I/O failed: -6 00:30:32.448 Write completed with error (sct=0, sc=8) 00:30:32.448 starting I/O failed: -6 00:30:32.448 Write completed with error (sct=0, sc=8) 00:30:32.448 starting I/O failed: -6 00:30:32.448 Write completed with error (sct=0, sc=8) 00:30:32.448 starting I/O failed: -6 00:30:32.448 Write completed with error (sct=0, sc=8) 00:30:32.448 starting I/O failed: -6 00:30:32.448 Write completed with error (sct=0, sc=8) 00:30:32.448 starting I/O failed: -6 00:30:32.448 Write 
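Every burst above follows one pattern: once the TCP connection to the target drops, new submissions fail with -6 (ENXIO, "No such device or address"), in-flight writes complete with sct=0, sc=0x8 (generic status, command aborted due to SQ deletion), and the poller reports a CQ transport error per qpair. A minimal sketch of where these messages surface, assuming only the public SPDK initiator API (spdk_nvme_qpair_process_completions and spdk_nvme_cpl_is_error are real SPDK calls; the harness around them is hypothetical, not part of this test):

#include "spdk/nvme.h"

/* Hypothetical per-I/O completion callback; a test app would print its
 * "Write completed with error (sct=..., sc=...)" line from a callback
 * shaped like this. */
static void
write_done(void *ctx, const struct spdk_nvme_cpl *cpl)
{
	(void)ctx;
	if (spdk_nvme_cpl_is_error(cpl)) {
		/* sct=0, sc=0x8: generic status, command aborted due to
		 * SQ deletion -- the qpair is being torn down. */
	}
}

/* Hypothetical reactor-side poll; the negative return here is what the
 * nvme_qpair.c "CQ transport error -6" log lines above correspond to. */
static void
poll_qpair(struct spdk_nvme_qpair *qpair)
{
	int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0 /* no cap */);
	if (rc < 0) {
		/* -6 == -ENXIO: the connection is gone; recover by
		 * reconnecting or failing over, not by resubmitting. */
	}
}

At this layer the only sensible recovery is a reconnect or failover; resubmitting on the dead qpair would just re-queue the same failures, which is why the log keeps cycling through qpair ids 1-4 below.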
00:30:32.448 Write completed with error (sct=0, sc=8)
00:30:32.448 starting I/O failed: -6
(repeated write-failure messages omitted)
00:30:32.448 [2024-10-09 11:10:51.983057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:32.448 NVMe io qpair process completion error
(repeated write-failure messages omitted)
00:30:32.448 [2024-10-09 11:10:51.984073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
(repeated write-failure messages omitted)
00:30:32.449 [2024-10-09 11:10:51.984923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
(repeated write-failure messages omitted)
00:30:32.449 [2024-10-09 11:10:51.985860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
(repeated write-failure messages omitted)
00:30:32.450 [2024-10-09 11:10:51.990057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:32.450 NVMe io qpair process completion error
(repeated write-failure messages omitted)
00:30:32.450 [2024-10-09 11:10:51.991363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
(repeated write-failure messages omitted)
00:30:32.450 [2024-10-09 11:10:51.992200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:32.450 Write completed with error (sct=0, sc=8)
00:30:32.450 starting I/O failed: -6
00:30:32.450 Write completed with
error (sct=0, sc=8) 00:30:32.450 starting I/O failed: -6 00:30:32.450 Write completed with error (sct=0, sc=8) 00:30:32.450 Write completed with error (sct=0, sc=8) 00:30:32.450 starting I/O failed: -6 00:30:32.450 Write completed with error (sct=0, sc=8) 00:30:32.450 starting I/O failed: -6 00:30:32.450 Write completed with error (sct=0, sc=8) 00:30:32.450 starting I/O failed: -6 00:30:32.450 Write completed with error (sct=0, sc=8) 00:30:32.450 Write completed with error (sct=0, sc=8) 00:30:32.450 starting I/O failed: -6 00:30:32.450 Write completed with error (sct=0, sc=8) 00:30:32.450 starting I/O failed: -6 00:30:32.450 Write completed with error (sct=0, sc=8) 00:30:32.450 starting I/O failed: -6 00:30:32.450 Write completed with error (sct=0, sc=8) 00:30:32.450 Write completed with error (sct=0, sc=8) 00:30:32.450 starting I/O failed: -6 00:30:32.450 [2024-10-09 11:10:51.993150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.450 Write completed with error (sct=0, sc=8) 00:30:32.450 starting I/O failed: -6 00:30:32.450 Write completed with error (sct=0, sc=8) 00:30:32.450 starting I/O failed: -6 00:30:32.450 Write completed with error (sct=0, sc=8) 00:30:32.450 starting I/O failed: -6 00:30:32.450 Write completed with error (sct=0, sc=8) 00:30:32.450 starting I/O failed: -6 00:30:32.450 Write completed with error (sct=0, sc=8) 00:30:32.450 starting I/O failed: -6 00:30:32.450 Write completed with error (sct=0, sc=8) 00:30:32.450 starting I/O failed: -6 00:30:32.450 Write completed with error (sct=0, sc=8) 00:30:32.450 starting I/O failed: -6 00:30:32.450 Write completed with error (sct=0, sc=8) 00:30:32.450 starting I/O failed: -6 00:30:32.450 Write completed with error (sct=0, sc=8) 00:30:32.450 starting I/O failed: -6 00:30:32.450 Write completed with error (sct=0, sc=8) 00:30:32.450 starting I/O failed: -6 00:30:32.450 Write completed with error (sct=0, sc=8) 00:30:32.450 starting I/O failed: -6 00:30:32.450 Write completed with error (sct=0, sc=8) 00:30:32.451 starting I/O failed: -6 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 starting I/O failed: -6 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 starting I/O failed: -6 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 starting I/O failed: -6 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 starting I/O failed: -6 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 starting I/O failed: -6 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 starting I/O failed: -6 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 starting I/O failed: -6 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 starting I/O failed: -6 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 starting I/O failed: -6 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 starting I/O failed: -6 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 starting I/O failed: -6 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 starting I/O failed: -6 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 starting I/O failed: -6 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 starting I/O failed: -6 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 starting I/O failed: -6 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 
starting I/O failed: -6 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 starting I/O failed: -6 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 starting I/O failed: -6 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 starting I/O failed: -6 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 starting I/O failed: -6 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 starting I/O failed: -6 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 starting I/O failed: -6 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 starting I/O failed: -6 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 starting I/O failed: -6 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 starting I/O failed: -6 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 starting I/O failed: -6 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 starting I/O failed: -6 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 starting I/O failed: -6 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 starting I/O failed: -6 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 starting I/O failed: -6 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 starting I/O failed: -6 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 starting I/O failed: -6 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 starting I/O failed: -6 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 starting I/O failed: -6 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 starting I/O failed: -6 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 starting I/O failed: -6 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 starting I/O failed: -6 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 starting I/O failed: -6 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 starting I/O failed: -6 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 starting I/O failed: -6 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 starting I/O failed: -6 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 starting I/O failed: -6 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 starting I/O failed: -6 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 starting I/O failed: -6 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 starting I/O failed: -6 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 starting I/O failed: -6 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 starting I/O failed: -6 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 starting I/O failed: -6 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 starting I/O failed: -6 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 starting I/O failed: -6 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 starting I/O failed: -6 00:30:32.451 [2024-10-09 11:10:51.994841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:32.451 NVMe io qpair process completion error 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 
starting I/O failed: -6 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 starting I/O failed: -6 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 starting I/O failed: -6 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 starting I/O failed: -6 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 starting I/O failed: -6 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 starting I/O failed: -6 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 starting I/O failed: -6 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 starting I/O failed: -6 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 starting I/O failed: -6 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 starting I/O failed: -6 00:30:32.451 [2024-10-09 11:10:51.995877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 starting I/O failed: -6 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 starting I/O failed: -6 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 starting I/O failed: -6 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 starting I/O failed: -6 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 starting I/O failed: -6 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 starting I/O failed: -6 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 starting I/O failed: -6 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 
Write completed with error (sct=0, sc=8) 00:30:32.451 starting I/O failed: -6 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 starting I/O failed: -6 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 starting I/O failed: -6 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 starting I/O failed: -6 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 starting I/O failed: -6 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 starting I/O failed: -6 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 starting I/O failed: -6 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 starting I/O failed: -6 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 starting I/O failed: -6 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 starting I/O failed: -6 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.451 Write completed with error (sct=0, sc=8) 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 starting I/O failed: -6 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 starting I/O failed: -6 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 starting I/O failed: -6 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 starting I/O failed: -6 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 starting I/O failed: -6 00:30:32.452 [2024-10-09 11:10:51.996752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 starting I/O failed: -6 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 starting I/O failed: -6 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 starting I/O failed: -6 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 starting I/O failed: -6 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 starting I/O failed: -6 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 starting I/O failed: -6 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 starting I/O failed: -6 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 starting I/O failed: -6 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 starting I/O failed: -6 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 starting I/O failed: -6 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 starting I/O failed: -6 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 Write completed with error (sct=0, sc=8) 
00:30:32.452 starting I/O failed: -6 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 starting I/O failed: -6 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 starting I/O failed: -6 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 starting I/O failed: -6 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 starting I/O failed: -6 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 starting I/O failed: -6 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 starting I/O failed: -6 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 starting I/O failed: -6 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 starting I/O failed: -6 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 starting I/O failed: -6 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 starting I/O failed: -6 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 starting I/O failed: -6 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 starting I/O failed: -6 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 starting I/O failed: -6 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 starting I/O failed: -6 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 starting I/O failed: -6 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 starting I/O failed: -6 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 starting I/O failed: -6 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 starting I/O failed: -6 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 starting I/O failed: -6 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 starting I/O failed: -6 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 starting I/O failed: -6 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 starting I/O failed: -6 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 starting I/O failed: -6 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 starting I/O failed: -6 00:30:32.452 [2024-10-09 11:10:51.997662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 starting I/O failed: -6 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 starting I/O failed: -6 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 starting I/O failed: -6 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 starting I/O failed: -6 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 starting I/O failed: -6 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 starting I/O failed: -6 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 starting I/O failed: -6 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 starting I/O failed: -6 
00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 starting I/O failed: -6 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 starting I/O failed: -6 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 starting I/O failed: -6 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 starting I/O failed: -6 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 starting I/O failed: -6 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 starting I/O failed: -6 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 starting I/O failed: -6 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 starting I/O failed: -6 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 starting I/O failed: -6 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 starting I/O failed: -6 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 starting I/O failed: -6 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 starting I/O failed: -6 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 starting I/O failed: -6 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 starting I/O failed: -6 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 starting I/O failed: -6 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 starting I/O failed: -6 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 starting I/O failed: -6 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 starting I/O failed: -6 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 starting I/O failed: -6 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 starting I/O failed: -6 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 starting I/O failed: -6 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 starting I/O failed: -6 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 starting I/O failed: -6 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 starting I/O failed: -6 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 starting I/O failed: -6 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 starting I/O failed: -6 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 starting I/O failed: -6 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 starting I/O failed: -6 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 starting I/O failed: -6 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 starting I/O failed: -6 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 starting I/O failed: -6 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 starting I/O failed: -6 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 starting I/O failed: -6 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 starting I/O failed: -6 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 starting I/O failed: -6 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 starting I/O failed: -6 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 starting I/O failed: -6 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 starting I/O failed: -6 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 starting I/O failed: -6 
00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 starting I/O failed: -6 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 starting I/O failed: -6 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 starting I/O failed: -6 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 starting I/O failed: -6 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 starting I/O failed: -6 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 starting I/O failed: -6 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 starting I/O failed: -6 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 starting I/O failed: -6 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 starting I/O failed: -6 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 starting I/O failed: -6 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 starting I/O failed: -6 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 starting I/O failed: -6 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 starting I/O failed: -6 00:30:32.452 [2024-10-09 11:10:52.000932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:32.452 NVMe io qpair process completion error 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 starting I/O failed: -6 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 starting I/O failed: -6 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.452 Write completed with error (sct=0, sc=8) 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 starting I/O failed: -6 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 starting I/O failed: -6 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 starting I/O failed: -6 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 starting I/O failed: -6 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 starting I/O failed: -6 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 starting I/O failed: -6 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 
Write completed with error (sct=0, sc=8) 00:30:32.453 starting I/O failed: -6 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 starting I/O failed: -6 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 starting I/O failed: -6 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 [2024-10-09 11:10:52.002206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:32.453 starting I/O failed: -6 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 starting I/O failed: -6 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 starting I/O failed: -6 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 starting I/O failed: -6 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 starting I/O failed: -6 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 starting I/O failed: -6 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 starting I/O failed: -6 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 starting I/O failed: -6 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 starting I/O failed: -6 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 starting I/O failed: -6 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 starting I/O failed: -6 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 starting I/O failed: -6 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 starting I/O failed: -6 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 starting I/O failed: -6 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 starting I/O failed: -6 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 starting I/O failed: -6 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 starting I/O failed: -6 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 starting I/O failed: -6 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 starting I/O failed: -6 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 [2024-10-09 11:10:52.003009] 
nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 starting I/O failed: -6 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 starting I/O failed: -6 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 starting I/O failed: -6 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 starting I/O failed: -6 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 starting I/O failed: -6 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 starting I/O failed: -6 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 starting I/O failed: -6 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 starting I/O failed: -6 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 starting I/O failed: -6 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 starting I/O failed: -6 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 starting I/O failed: -6 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 starting I/O failed: -6 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 starting I/O failed: -6 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 starting I/O failed: -6 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 starting I/O failed: -6 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 starting I/O failed: -6 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 starting I/O failed: -6 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 starting I/O failed: -6 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 starting I/O failed: -6 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 starting I/O failed: -6 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 starting I/O failed: -6 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 starting I/O failed: -6 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 starting I/O failed: -6 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 starting I/O failed: -6 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 starting I/O failed: -6 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 starting I/O failed: -6 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 starting I/O failed: -6 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 starting I/O failed: -6 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 starting I/O failed: -6 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 starting I/O failed: -6 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 starting I/O failed: -6 00:30:32.453 Write completed with error (sct=0, sc=8) 
00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 starting I/O failed: -6 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 starting I/O failed: -6 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 starting I/O failed: -6 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 starting I/O failed: -6 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 starting I/O failed: -6 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 starting I/O failed: -6 00:30:32.453 [2024-10-09 11:10:52.003952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 starting I/O failed: -6 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 starting I/O failed: -6 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 starting I/O failed: -6 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 starting I/O failed: -6 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 starting I/O failed: -6 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 starting I/O failed: -6 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 starting I/O failed: -6 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 starting I/O failed: -6 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 starting I/O failed: -6 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 starting I/O failed: -6 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 starting I/O failed: -6 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 starting I/O failed: -6 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 starting I/O failed: -6 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 starting I/O failed: -6 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 starting I/O failed: -6 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 starting I/O failed: -6 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 starting I/O failed: -6 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 starting I/O failed: -6 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 starting I/O failed: -6 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.453 starting I/O failed: -6 00:30:32.453 Write completed with error (sct=0, sc=8) 00:30:32.454 starting I/O failed: -6 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 starting I/O failed: -6 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 starting I/O failed: -6 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 starting I/O failed: -6 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 starting I/O failed: -6 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 starting I/O failed: -6 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 starting I/O failed: -6 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 starting I/O failed: -6 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 starting I/O failed: -6 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 starting I/O failed: -6 00:30:32.454 Write completed with error (sct=0, sc=8) 
00:30:32.454 starting I/O failed: -6 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 starting I/O failed: -6 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 starting I/O failed: -6 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 starting I/O failed: -6 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 starting I/O failed: -6 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 starting I/O failed: -6 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 starting I/O failed: -6 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 starting I/O failed: -6 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 starting I/O failed: -6 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 starting I/O failed: -6 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 starting I/O failed: -6 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 starting I/O failed: -6 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 starting I/O failed: -6 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 starting I/O failed: -6 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 starting I/O failed: -6 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 starting I/O failed: -6 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 starting I/O failed: -6 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 starting I/O failed: -6 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 starting I/O failed: -6 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 starting I/O failed: -6 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 starting I/O failed: -6 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 starting I/O failed: -6 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 starting I/O failed: -6 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 starting I/O failed: -6 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 starting I/O failed: -6 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 starting I/O failed: -6 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 starting I/O failed: -6 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 starting I/O failed: -6 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 starting I/O failed: -6 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 starting I/O failed: -6 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 starting I/O failed: -6 00:30:32.454 [2024-10-09 11:10:52.005851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:32.454 NVMe io qpair process completion error 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 starting I/O failed: -6 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 starting I/O failed: -6 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 Write completed with error (sct=0, sc=8) 
00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 starting I/O failed: -6 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 starting I/O failed: -6 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 starting I/O failed: -6 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 starting I/O failed: -6 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 starting I/O failed: -6 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 starting I/O failed: -6 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 starting I/O failed: -6 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 starting I/O failed: -6 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 [2024-10-09 11:10:52.006833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.454 starting I/O failed: -6 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 starting I/O failed: -6 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 starting I/O failed: -6 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 starting I/O failed: -6 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 starting I/O failed: -6 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 starting I/O failed: -6 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 starting I/O failed: -6 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 starting I/O failed: -6 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 starting I/O failed: -6 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 starting I/O failed: -6 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 Write 
completed with error (sct=0, sc=8) 00:30:32.454 starting I/O failed: -6 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 starting I/O failed: -6 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 starting I/O failed: -6 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 starting I/O failed: -6 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 starting I/O failed: -6 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 starting I/O failed: -6 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 starting I/O failed: -6 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 starting I/O failed: -6 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 starting I/O failed: -6 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 starting I/O failed: -6 00:30:32.454 [2024-10-09 11:10:52.007675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 starting I/O failed: -6 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 starting I/O failed: -6 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 starting I/O failed: -6 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 starting I/O failed: -6 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 starting I/O failed: -6 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 starting I/O failed: -6 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 starting I/O failed: -6 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 starting I/O failed: -6 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 starting I/O failed: -6 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 starting I/O failed: -6 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 starting I/O failed: -6 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 starting I/O failed: -6 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 starting I/O failed: -6 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 starting I/O failed: -6 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 starting I/O failed: -6 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.454 starting I/O failed: -6 00:30:32.454 Write completed with error (sct=0, sc=8) 00:30:32.455 starting I/O failed: -6 00:30:32.455 Write completed with error (sct=0, sc=8) 00:30:32.455 starting I/O failed: -6 00:30:32.455 Write completed with error (sct=0, sc=8) 00:30:32.455 Write completed with error 
(sct=0, sc=8) 00:30:32.455 starting I/O failed: -6
00:30:32.455 Write completed with error (sct=0, sc=8)
00:30:32.455 starting I/O failed: -6
[... the same two records repeat for every outstanding write ...]
00:30:32.455 [2024-10-09 11:10:52.008608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
[... further aborted writes ...]
00:30:32.455 [2024-10-09 11:10:52.011377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:32.455 NVMe io qpair process completion error
[... further aborted writes ...]
00:30:32.455 [2024-10-09 11:10:52.012861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
[... further aborted writes ...]
00:30:32.456 [2024-10-09 11:10:52.013675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
[... further aborted writes ...]
00:30:32.456 [2024-10-09 11:10:52.014622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
[... further aborted writes ...]
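Each pair of records above is one write command failed back to the initiator: sct=0 selects the NVMe generic command status type, in which sc=8 decodes (per the NVMe spec) to "command aborted due to SQ deletion", and the -6 surfaced by spdk_nvme_qpair_process_completions() is -ENXIO, matching the "No such device or address" text. That is exactly what a shutdown test that kills the target mid-I/O should provoke. A small triage sketch for a saved copy of this console output; the log filename and the script itself are hypothetical, not part of the SPDK tree:

#!/usr/bin/env bash
# Summarize the abort burst in a captured copy of this log.
log="${1:-shutdown_tc4.log}"        # hypothetical capture of the output above

# total writes completed with the generic abort status (sct=0, sc=8)
grep -o 'Write completed with error (sct=0, sc=8)' "$log" | wc -l

# which qpairs saw the CQ transport error, and how many times each
grep -o 'CQ transport error -6 ([^)]*) on qpair id [0-9]*' "$log" | sort | uniq -c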
00:30:32.457 [2024-10-09 11:10:52.016264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:32.457 NVMe io qpair process completion error
00:30:32.457 Initializing NVMe Controllers
00:30:32.457 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:30:32.457 Controller IO queue size 128, less than required.
00:30:32.457 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
[... identical "Attached to ..." and queue-size notices for cnode5, cnode4, cnode3, cnode10, cnode1, cnode7, cnode6, cnode8 and cnode2 ...]
00:30:32.457 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
[... matching "Associating TCP ..." records for the other nine subsystems, all NSID 1 on lcore 0 ...]
00:30:32.457 Initialization complete. Launching workers.
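Every one of the ten controllers comes up with the same pair of notices because the perf workload asks for more outstanding I/O than the target's 128-entry I/O queues can hold, so the excess waits inside the NVMe driver. Following the log's own advice, a re-run with a shallower queue would look like the sketch below; the flag values are illustrative, while -q (queue depth), -o (I/O size in bytes), -w (I/O pattern), -t (run time in seconds) and -r (transport ID) are real spdk_nvme_perf options:

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
    -q 32 -o 4096 -w randwrite -t 10 \
    -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'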
00:30:32.457 ========================================================
00:30:32.457                                                               Latency(us)
00:30:32.457 Device Information                                                      :     IOPS   MiB/s   Average      min        max
00:30:32.457 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9)  NSID 1 from core 0:  1896.85   81.51  67496.45   843.47  125270.97
00:30:32.457 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5)  NSID 1 from core 0:  1873.79   80.51  68370.97   632.28  151933.70
00:30:32.457 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4)  NSID 1 from core 0:  1858.78   79.87  68107.41   605.15  123735.90
00:30:32.457 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3)  NSID 1 from core 0:  1853.77   79.65  68314.42   561.65  121527.24
00:30:32.457 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0:  1847.46   79.38  68572.03   817.40  121250.03
00:30:32.457 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1)  NSID 1 from core 0:  1902.07   81.73  66684.50   820.21  121260.83
00:30:32.457 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7)  NSID 1 from core 0:  1902.29   81.74  66701.10   651.45  119502.28
00:30:32.457 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6)  NSID 1 from core 0:  1893.81   81.37  67062.55   580.10  120465.73
00:30:32.457 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8)  NSID 1 from core 0:  1858.34   79.85  68363.60   925.62  119813.90
00:30:32.457 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2)  NSID 1 from core 0:  1865.09   80.14  68164.62   805.04  137789.47
00:30:32.457 ========================================================
00:30:32.457 Total                                                                   : 18752.25  805.76  67776.75   561.65  151933.70
00:30:32.457 [2024-10-09 11:10:52.019016] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc50cd0 is same with the state(6) to be set
00:30:32.457 [2024-10-09 11:10:52.019067] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a590 is same with the state(6) to be set
00:30:32.457 [2024-10-09 11:10:52.019099] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a3b0 is same with the state(6) to be set
00:30:32.457 [2024-10-09 11:10:52.019129] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33a30 is same with the state(6) to be set
00:30:32.457 [2024-10-09 11:10:52.019159] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc37fb0 is same with the state(6) to be set
00:30:32.457 [2024-10-09 11:10:52.019190] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc37dd0 is same with the state(6) to be set
00:30:32.457 [2024-10-09 11:10:52.019220] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc35060 is same with the state(6) to be set
00:30:32.457 [2024-10-09 11:10:52.019249] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc34d30 is same with the state(6) to be set
00:30:32.457 [2024-10-09 11:10:52.019277] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc50af0 is same with the state(6) to be set
00:30:32.457 [2024-10-09 11:10:52.019306] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc33850 is same with the state(6) to be set
00:30:32.457 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:30:32.457 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
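The summary is internally consistent, which is worth verifying when a run ends in errors. Two back-of-envelope checks on the Total row, computed here rather than output by the tool:

awk 'BEGIN {
    iops = 18752.25; mibps = 805.76; avg_us = 67776.75
    # implied I/O size: (MiB/s) / IOPS, in bytes -> roughly 45056 B (44 KiB per write)
    printf "io size   ~ %.0f bytes\n", mibps / iops * 1048576
    # Little'"'"'s law: IOPS * average latency ~ commands in flight; about 1271 here,
    # close to the 10 controllers * 128-deep queues the run was driving
    printf "in-flight ~ %.0f commands\n", iops * avg_us / 1e6
}'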
00:30:33.396 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 1999376
00:30:33.396 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@650 -- # local es=0
00:30:33.396 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1999376
00:30:33.396 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@638 -- # local arg=wait
00:30:33.396 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:30:33.396 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # type -t wait
00:30:33.397 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:30:33.397 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # wait 1999376
00:30:33.397 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # es=1
00:30:33.397 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:30:33.397 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:30:33.397 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:30:33.397 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget
00:30:33.397 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:30:33.397 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:30:33.397 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:30:33.397 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini
00:30:33.397 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@514 -- # nvmfcleanup
00:30:33.397 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync
00:30:33.397 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:30:33.397 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e
00:30:33.397 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20}
00:30:33.397 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:30:33.397 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:30:33.397 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e
00:30:33.397 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@129 -- # return 0 00:30:33.397 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@515 -- # '[' -n 1999163 ']' 00:30:33.397 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # killprocess 1999163 00:30:33.397 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@950 -- # '[' -z 1999163 ']' 00:30:33.397 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # kill -0 1999163 00:30:33.397 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1999163) - No such process 00:30:33.397 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@977 -- # echo 'Process with pid 1999163 is not found' 00:30:33.397 Process with pid 1999163 is not found 00:30:33.397 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:30:33.397 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:30:33.397 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:30:33.397 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:30:33.397 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@789 -- # iptables-save 00:30:33.397 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:30:33.397 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@789 -- # iptables-restore 00:30:33.397 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:33.397 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:33.397 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:33.397 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:33.397 11:10:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:35.943 11:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:35.943 00:30:35.943 real 0m10.285s 00:30:35.943 user 0m27.493s 00:30:35.943 sys 0m4.008s 00:30:35.943 11:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:35.943 11:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:30:35.943 ************************************ 00:30:35.943 END TEST nvmf_shutdown_tc4 00:30:35.943 ************************************ 00:30:35.943 11:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:30:35.943 00:30:35.943 real 0m43.717s 00:30:35.943 user 1m45.789s 00:30:35.943 sys 0m13.525s 00:30:35.943 11:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:35.943 11:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
common/autotest_common.sh@10 -- # set +x 00:30:35.943 ************************************ 00:30:35.943 END TEST nvmf_shutdown 00:30:35.943 ************************************ 00:30:35.943 11:10:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:30:35.943 00:30:35.943 real 19m49.331s 00:30:35.943 user 52m1.450s 00:30:35.943 sys 4m45.296s 00:30:35.943 11:10:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:35.943 11:10:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:30:35.943 ************************************ 00:30:35.943 END TEST nvmf_target_extra 00:30:35.943 ************************************ 00:30:35.943 11:10:55 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:30:35.943 11:10:55 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:30:35.943 11:10:55 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:35.943 11:10:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:35.943 ************************************ 00:30:35.943 START TEST nvmf_host 00:30:35.943 ************************************ 00:30:35.943 11:10:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:30:35.943 * Looking for test storage... 00:30:35.943 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:30:35.943 11:10:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:35.943 11:10:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lcov --version 00:30:35.943 11:10:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:35.943 11:10:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:35.943 11:10:55 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:35.943 11:10:55 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:35.943 11:10:55 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:35.943 11:10:55 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:30:35.943 11:10:55 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:30:35.943 11:10:55 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:30:35.943 11:10:55 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:30:35.943 11:10:55 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:30:35.943 11:10:55 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:30:35.943 11:10:55 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:30:35.943 11:10:55 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:35.943 11:10:55 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:30:35.943 11:10:55 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:30:35.943 11:10:55 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:35.943 11:10:55 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:35.943 11:10:55 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:30:35.943 11:10:55 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:30:35.943 11:10:55 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:35.943 11:10:55 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:30:35.943 11:10:55 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:30:35.943 11:10:55 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:30:35.943 11:10:55 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:30:35.943 11:10:55 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:35.943 11:10:55 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:30:35.943 11:10:55 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:30:35.943 11:10:55 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:35.943 11:10:55 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:35.943 11:10:55 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:30:35.943 11:10:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:35.943 11:10:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:35.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:35.943 --rc genhtml_branch_coverage=1 00:30:35.943 --rc genhtml_function_coverage=1 00:30:35.943 --rc genhtml_legend=1 00:30:35.943 --rc geninfo_all_blocks=1 00:30:35.943 --rc geninfo_unexecuted_blocks=1 00:30:35.943 00:30:35.943 ' 00:30:35.943 11:10:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:35.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:35.943 --rc genhtml_branch_coverage=1 00:30:35.943 --rc genhtml_function_coverage=1 00:30:35.943 --rc genhtml_legend=1 00:30:35.943 --rc geninfo_all_blocks=1 00:30:35.943 --rc geninfo_unexecuted_blocks=1 00:30:35.943 00:30:35.943 ' 00:30:35.943 11:10:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:35.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:35.943 --rc genhtml_branch_coverage=1 00:30:35.943 --rc genhtml_function_coverage=1 00:30:35.943 --rc genhtml_legend=1 00:30:35.943 --rc geninfo_all_blocks=1 00:30:35.943 --rc geninfo_unexecuted_blocks=1 00:30:35.943 00:30:35.943 ' 00:30:35.943 11:10:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:35.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:35.943 --rc genhtml_branch_coverage=1 00:30:35.943 --rc genhtml_function_coverage=1 00:30:35.943 --rc genhtml_legend=1 00:30:35.943 --rc geninfo_all_blocks=1 00:30:35.943 --rc geninfo_unexecuted_blocks=1 00:30:35.943 00:30:35.943 ' 00:30:35.943 11:10:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:35.943 11:10:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:30:35.943 11:10:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:35.943 11:10:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:35.943 11:10:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:35.943 11:10:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:35.943 11:10:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:30:35.943 11:10:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:35.943 11:10:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:35.943 11:10:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:35.943 11:10:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:35.943 11:10:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:35.943 11:10:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:35.943 11:10:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:35.943 11:10:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:35.943 11:10:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:35.943 11:10:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:35.943 11:10:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:35.943 11:10:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:35.943 11:10:55 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:30:35.943 11:10:55 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:35.943 11:10:55 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:35.943 11:10:55 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:35.943 11:10:55 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:35.944 11:10:55 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:35.944 11:10:55 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:35.944 11:10:55 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:30:35.944 11:10:55 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:35.944 11:10:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:30:35.944 11:10:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:35.944 11:10:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:35.944 11:10:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:35.944 11:10:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:35.944 11:10:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:35.944 11:10:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:35.944 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:35.944 11:10:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:35.944 11:10:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:35.944 11:10:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:35.944 11:10:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:30:35.944 11:10:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:30:35.944 11:10:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:30:35.944 11:10:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:30:35.944 11:10:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:30:35.944 11:10:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:35.944 11:10:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:35.944 ************************************ 00:30:35.944 START TEST nvmf_multicontroller 00:30:35.944 ************************************ 00:30:35.944 11:10:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:30:35.944 * Looking for test storage... 
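The "[: : integer expression expected" complaint from nvmf/common.sh line 33, visible in the trace as '[' '' -eq 1 ']', is the usual POSIX test pitfall: -eq requires integer operands, and the variable being tested expands to an empty string. A hedged reproduction and guard; the variable name below is illustrative, not the harness's:

flag=''                              # stands in for the empty harness variable
[ "$flag" -eq 1 ] 2>/dev/null || echo 'bare -eq errors out on an empty operand'
[ "${flag:-0}" -eq 1 ] || echo 'defaulting the empty value to 0 keeps the test well-formed'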
00:30:35.944 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:35.944 11:10:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:35.944 11:10:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:35.944 11:10:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lcov --version 00:30:36.205 11:10:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:36.205 11:10:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:36.205 11:10:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:36.205 11:10:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:36.205 11:10:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:30:36.205 11:10:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:30:36.205 11:10:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:30:36.205 11:10:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:30:36.205 11:10:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:30:36.205 11:10:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:30:36.205 11:10:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:30:36.205 11:10:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:36.205 11:10:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:30:36.205 11:10:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:30:36.205 11:10:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:36.205 11:10:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:36.205 11:10:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:30:36.205 11:10:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:30:36.205 11:10:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:36.205 11:10:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:30:36.205 11:10:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:30:36.205 11:10:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:30:36.205 11:10:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:30:36.205 11:10:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:36.205 11:10:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:30:36.205 11:10:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:30:36.205 11:10:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:36.205 11:10:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:36.205 11:10:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:30:36.205 11:10:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:36.205 11:10:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:36.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:36.205 --rc genhtml_branch_coverage=1 00:30:36.205 --rc genhtml_function_coverage=1 00:30:36.205 --rc genhtml_legend=1 00:30:36.205 --rc geninfo_all_blocks=1 00:30:36.205 --rc geninfo_unexecuted_blocks=1 00:30:36.205 00:30:36.205 ' 00:30:36.205 11:10:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:36.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:36.205 --rc genhtml_branch_coverage=1 00:30:36.205 --rc genhtml_function_coverage=1 00:30:36.205 --rc genhtml_legend=1 00:30:36.205 --rc geninfo_all_blocks=1 00:30:36.205 --rc geninfo_unexecuted_blocks=1 00:30:36.205 00:30:36.205 ' 00:30:36.205 11:10:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:36.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:36.205 --rc genhtml_branch_coverage=1 00:30:36.205 --rc genhtml_function_coverage=1 00:30:36.205 --rc genhtml_legend=1 00:30:36.205 --rc geninfo_all_blocks=1 00:30:36.205 --rc geninfo_unexecuted_blocks=1 00:30:36.206 00:30:36.206 ' 00:30:36.206 11:10:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:36.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:36.206 --rc genhtml_branch_coverage=1 00:30:36.206 --rc genhtml_function_coverage=1 00:30:36.206 --rc genhtml_legend=1 00:30:36.206 --rc geninfo_all_blocks=1 00:30:36.206 --rc geninfo_unexecuted_blocks=1 00:30:36.206 00:30:36.206 ' 00:30:36.206 11:10:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:36.206 11:10:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:30:36.206 11:10:56 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:36.206 11:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:36.206 11:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:36.206 11:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:36.206 11:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:36.206 11:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:36.206 11:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:36.206 11:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:36.206 11:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:36.206 11:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:36.206 11:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:36.206 11:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:36.206 11:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:36.206 11:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:36.206 11:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:36.206 11:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:36.206 11:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:36.206 11:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:30:36.206 11:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:36.206 11:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:36.206 11:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:36.206 11:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:36.206 11:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:36.206 11:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:36.206 11:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:30:36.206 11:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:36.206 11:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:30:36.206 11:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:36.206 11:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:36.206 11:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:36.206 11:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:36.206 11:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:36.206 11:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:36.206 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:36.206 11:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:36.206 11:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:36.206 11:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:36.206 11:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:36.206 11:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:36.206 11:10:56 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:30:36.206 11:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:30:36.206 11:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:36.206 11:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:30:36.206 11:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:30:36.206 11:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:30:36.206 11:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:36.206 11:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # prepare_net_devs 00:30:36.206 11:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@436 -- # local -g is_hw=no 00:30:36.206 11:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # remove_spdk_ns 00:30:36.206 11:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:36.206 11:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:36.206 11:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:36.206 11:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:30:36.206 11:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:30:36.206 11:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:30:36.206 11:10:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:44.347 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:44.347 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:30:44.347 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:44.347 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:44.347 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:44.347 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:44.347 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:44.347 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:30:44.347 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:44.347 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:30:44.347 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:30:44.347 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:30:44.347 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:30:44.347 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:30:44.347 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:30:44.347 
11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:44.347 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:44.347 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:44.347 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:44.347 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:44.347 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:44.347 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:44.347 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:44.347 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:44.347 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:44.347 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:44.347 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:44.347 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:44.347 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:44.347 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:44.347 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:44.347 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:44.347 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:44.347 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:44.347 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:44.347 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:44.347 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:44.347 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:44.347 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:44.347 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:44.347 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:44.347 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:44.347 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:44.347 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:44.347 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:44.348 11:11:03 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:44.348 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:44.348 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:44.348 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:44.348 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:44.348 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:44.348 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:44.348 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:44.348 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:44.348 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:44.348 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:44.348 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:44.348 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:44.348 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:44.348 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:44.348 Found net devices under 0000:31:00.0: cvl_0_0 00:30:44.348 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:44.348 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:44.348 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:44.348 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:44.348 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:44.348 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:44.348 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:44.348 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:44.348 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:44.348 Found net devices under 0000:31:00.1: cvl_0_1 00:30:44.348 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:44.348 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:30:44.348 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # is_hw=yes 00:30:44.348 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:30:44.348 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:30:44.348 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # nvmf_tcp_init 
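A condensed sketch of the device discovery the trace just completed: each whitelisted vendor:device pair (here Intel 0x8086:0x159b, an E810 port bound to the ice driver) is resolved to its kernel net interfaces through sysfs. The BDFs and interface names below are the ones from this run; the loop itself is illustrative, not lifted verbatim from nvmf/common.sh.

    # Map each NVMf-capable PCI function to its net devices via sysfs.
    for pci in 0000:31:00.0 0000:31:00.1; do
        for path in /sys/bus/pci/devices/$pci/net/*; do
            [ -e "$path" ] || continue
            echo "Found net devices under $pci: ${path##*/}"
        done
    done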
00:30:44.348 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:44.348 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:44.348 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:44.348 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:44.348 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:44.348 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:44.348 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:44.348 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:44.348 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:44.348 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:44.348 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:44.348 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:44.348 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:44.348 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:44.348 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:44.348 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:44.348 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:44.348 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:44.348 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:44.348 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:44.348 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:44.348 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:44.348 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:44.348 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:44.348 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.541 ms 00:30:44.348 00:30:44.348 --- 10.0.0.2 ping statistics --- 00:30:44.348 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:44.348 rtt min/avg/max/mdev = 0.541/0.541/0.541/0.000 ms 00:30:44.348 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:44.348 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:44.348 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:30:44.348 00:30:44.348 --- 10.0.0.1 ping statistics --- 00:30:44.348 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:44.348 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:30:44.348 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:44.348 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@448 -- # return 0 00:30:44.348 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:30:44.348 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:44.348 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:30:44.348 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:30:44.348 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:44.348 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:30:44.348 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:30:44.348 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:30:44.348 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:30:44.348 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:44.348 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:44.348 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # nvmfpid=2004976 00:30:44.348 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # waitforlisten 2004976 00:30:44.348 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:44.348 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 2004976 ']' 00:30:44.348 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:44.348 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:44.348 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:44.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:44.348 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:44.348 11:11:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:44.348 [2024-10-09 11:11:03.591652] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 
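The nvmf_tcp_init steps traced above reduce to a small two-namespace topology: the target-side port (cvl_0_0, 10.0.0.2/24) is moved into netns cvl_0_0_ns_spdk while the initiator port (cvl_0_1, 10.0.0.1/24) stays in the default namespace, an iptables ACCEPT opens the NVMe/TCP listener port, and a cross-ping in each direction proves reachability before the target app is launched inside the namespace. Condensed from this run, with $SPDK_DIR standing in for the full jenkins workspace path logged above:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
    ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE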
00:30:44.348 [2024-10-09 11:11:03.591721] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:44.348 [2024-10-09 11:11:03.733595] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:44.348 [2024-10-09 11:11:03.782898] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:44.348 [2024-10-09 11:11:03.802392] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:44.348 [2024-10-09 11:11:03.802428] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:44.348 [2024-10-09 11:11:03.802437] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:44.348 [2024-10-09 11:11:03.802444] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:44.348 [2024-10-09 11:11:03.802450] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:44.348 [2024-10-09 11:11:03.804012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:44.348 [2024-10-09 11:11:03.804169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:44.348 [2024-10-09 11:11:03.804170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:44.608 11:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:44.608 11:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:30:44.608 11:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:30:44.608 11:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:44.608 11:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:44.608 11:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:44.608 11:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:44.608 11:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:44.608 11:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:44.608 [2024-10-09 11:11:04.451111] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:44.608 11:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:44.608 11:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:44.609 11:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:44.609 11:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:44.609 Malloc0 00:30:44.609 11:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:44.609 11:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 
00:30:44.609 11:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:44.609 11:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:44.609 11:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:44.609 11:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:44.609 11:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:44.609 11:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:44.609 11:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:44.609 11:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:44.609 11:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:44.609 11:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:44.609 [2024-10-09 11:11:04.514449] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:44.609 11:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:44.609 11:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:44.609 11:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:44.609 11:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:44.609 [2024-10-09 11:11:04.526367] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:44.609 11:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:44.609 11:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:30:44.609 11:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:44.609 11:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:44.609 Malloc1 00:30:44.609 11:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:44.609 11:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:30:44.609 11:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:44.609 11:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:44.609 11:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:44.609 11:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:30:44.609 11:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:44.609 11:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:44.609 
11:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:44.609 11:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:44.609 11:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:44.609 11:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:44.609 11:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:44.609 11:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:30:44.609 11:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:44.609 11:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:44.609 11:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:44.609 11:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=2005081 00:30:44.609 11:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:44.609 11:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:30:44.609 11:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 2005081 /var/tmp/bdevperf.sock 00:30:44.609 11:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 2005081 ']' 00:30:44.609 11:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:44.609 11:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:44.609 11:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:44.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
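bdevperf is started idle here: -z makes it wait for a perform_tests RPC on the private socket named by -r instead of running immediately, while -q 128 -o 4096 -w write -t 1 configures a 128-deep, 4 KiB write workload for one second. The shape of the harness, condensed ($SPDK_DIR stands in for the jenkins workspace path; rpc.py and bdevperf.py are the stock SPDK clients assumed to sit behind the rpc_cmd wrapper in this trace):

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK_DIR/build/examples/bdevperf" -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w write -t 1 -f &

    # attach the NVMe-oF namespace as bdev NVMe0n1 over the first path
    "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1

    # kick off the configured workload once the bdevs are in place
    "$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests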
00:30:44.609 11:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:44.609 11:11:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:45.550 11:11:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:45.550 11:11:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:30:45.550 11:11:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:30:45.550 11:11:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:45.550 11:11:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:45.811 NVMe0n1 00:30:45.811 11:11:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:45.811 11:11:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:45.811 11:11:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:30:45.811 11:11:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:45.811 11:11:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:45.811 11:11:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:45.811 1 00:30:45.811 11:11:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:30:45.811 11:11:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:30:45.811 11:11:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:30:45.811 11:11:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:30:45.811 11:11:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:45.811 11:11:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:30:45.811 11:11:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:45.811 11:11:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:30:45.811 11:11:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:45.811 11:11:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:45.811 request: 00:30:45.811 { 00:30:45.811 "name": "NVMe0", 00:30:45.811 "trtype": "tcp", 00:30:45.811 "traddr": "10.0.0.2", 00:30:45.811 "adrfam": "ipv4", 00:30:45.811 "trsvcid": "4420", 00:30:45.811 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:30:45.811 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:30:45.811 "hostaddr": "10.0.0.1", 00:30:45.811 "prchk_reftag": false, 00:30:45.811 "prchk_guard": false, 00:30:45.811 "hdgst": false, 00:30:45.811 "ddgst": false, 00:30:45.811 "allow_unrecognized_csi": false, 00:30:45.811 "method": "bdev_nvme_attach_controller", 00:30:45.811 "req_id": 1 00:30:45.811 } 00:30:45.811 Got JSON-RPC error response 00:30:45.811 response: 00:30:45.811 { 00:30:45.811 "code": -114, 00:30:45.811 "message": "A controller named NVMe0 already exists with the specified network path" 00:30:45.811 } 00:30:45.811 11:11:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:30:45.811 11:11:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:30:45.811 11:11:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:45.811 11:11:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:45.811 11:11:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:45.811 11:11:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:30:45.811 11:11:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:30:45.811 11:11:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:30:45.811 11:11:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:30:45.811 11:11:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:45.811 11:11:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:30:45.811 11:11:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:45.811 11:11:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:30:45.811 11:11:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:45.811 11:11:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:45.811 request: 00:30:45.811 { 00:30:45.811 "name": "NVMe0", 00:30:45.811 "trtype": "tcp", 00:30:45.811 "traddr": "10.0.0.2", 00:30:45.811 "adrfam": "ipv4", 00:30:45.811 "trsvcid": "4420", 00:30:45.811 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:45.811 "hostaddr": "10.0.0.1", 00:30:45.811 "prchk_reftag": false, 00:30:45.811 "prchk_guard": false, 00:30:45.811 "hdgst": false, 00:30:45.811 "ddgst": false, 00:30:45.811 "allow_unrecognized_csi": false, 00:30:45.811 "method": "bdev_nvme_attach_controller", 00:30:45.811 "req_id": 1 00:30:45.811 } 00:30:45.811 Got JSON-RPC error response 00:30:45.811 response: 00:30:45.811 { 00:30:45.811 "code": -114, 00:30:45.811 "message": "A controller named NVMe0 already exists with the specified network path" 00:30:45.811 } 00:30:45.811 11:11:05 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:30:45.811 11:11:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:30:45.811 11:11:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:45.811 11:11:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:45.811 11:11:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:45.812 11:11:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:30:45.812 11:11:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:30:45.812 11:11:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:30:45.812 11:11:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:30:45.812 11:11:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:45.812 11:11:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:30:45.812 11:11:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:45.812 11:11:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:30:45.812 11:11:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:45.812 11:11:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:45.812 request: 00:30:45.812 { 00:30:45.812 "name": "NVMe0", 00:30:45.812 "trtype": "tcp", 00:30:45.812 "traddr": "10.0.0.2", 00:30:45.812 "adrfam": "ipv4", 00:30:45.812 "trsvcid": "4420", 00:30:45.812 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:45.812 "hostaddr": "10.0.0.1", 00:30:45.812 "prchk_reftag": false, 00:30:45.812 "prchk_guard": false, 00:30:45.812 "hdgst": false, 00:30:45.812 "ddgst": false, 00:30:45.812 "multipath": "disable", 00:30:45.812 "allow_unrecognized_csi": false, 00:30:45.812 "method": "bdev_nvme_attach_controller", 00:30:45.812 "req_id": 1 00:30:45.812 } 00:30:45.812 Got JSON-RPC error response 00:30:45.812 response: 00:30:45.812 { 00:30:45.812 "code": -114, 00:30:45.812 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:30:45.812 } 00:30:45.812 11:11:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:30:45.812 11:11:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:30:45.812 11:11:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:45.812 11:11:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:45.812 11:11:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:45.812 11:11:05 
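Each rejected attach above runs under the autotest NOT wrapper, whose es bookkeeping is what the [[ 1 == 0 ]] / es=1 / (( !es == 0 )) lines trace: the wrapped command must exit non-zero for the test to pass. A simplified stand-in for the real common/autotest_common.sh helper (the original also validates the argument type and treats exit codes above 128 as signal deaths, as the (( es > 128 )) lines show):

    # NOT <cmd...>: succeed only when the wrapped command fails.
    NOT() {
        local es=0
        "$@" || es=$?
        # invert: a zero exit from the wrapped command is the failure case
        (( es != 0 ))
    }

    NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1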
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:30:45.812 11:11:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:30:45.812 11:11:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:30:45.812 11:11:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:30:45.812 11:11:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:45.812 11:11:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:30:45.812 11:11:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:45.812 11:11:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:30:45.812 11:11:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:45.812 11:11:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:45.812 request: 00:30:45.812 { 00:30:45.812 "name": "NVMe0", 00:30:45.812 "trtype": "tcp", 00:30:45.812 "traddr": "10.0.0.2", 00:30:45.812 "adrfam": "ipv4", 00:30:45.812 "trsvcid": "4420", 00:30:45.812 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:45.812 "hostaddr": "10.0.0.1", 00:30:45.812 "prchk_reftag": false, 00:30:45.812 "prchk_guard": false, 00:30:45.812 "hdgst": false, 00:30:45.812 "ddgst": false, 00:30:45.812 "multipath": "failover", 00:30:45.812 "allow_unrecognized_csi": false, 00:30:45.812 "method": "bdev_nvme_attach_controller", 00:30:45.812 "req_id": 1 00:30:45.812 } 00:30:45.812 Got JSON-RPC error response 00:30:45.812 response: 00:30:45.812 { 00:30:45.812 "code": -114, 00:30:45.812 "message": "A controller named NVMe0 already exists with the specified network path" 00:30:45.812 } 00:30:45.812 11:11:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:30:45.812 11:11:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:30:45.812 11:11:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:45.812 11:11:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:45.812 11:11:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:45.812 11:11:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:45.812 11:11:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:45.812 11:11:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:46.072 NVMe0n1 00:30:46.072 11:11:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
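Read together, the four rejections and the attach that then succeeds pin down the reuse rule this test is after: the controller name NVMe0 is accepted again only for the same subsystem and the same host identity over a genuinely new network path, which port 4421 provides. Condensed, with rpc.py standing in for the traced rpc_cmd wrapper:

    # same name + same subsystem + new port: accepted as an additional path
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # every other variation tried above (new hostnqn, cnode2, -x disable,
    # -x failover to the existing port) came back as JSON-RPC error -114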
00:30:46.072 11:11:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:46.072 11:11:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:46.072 11:11:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:46.072 11:11:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:46.072 11:11:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:30:46.072 11:11:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:46.072 11:11:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:46.072 00:30:46.072 11:11:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:46.072 11:11:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:46.072 11:11:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:30:46.072 11:11:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:46.072 11:11:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:46.072 11:11:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:46.072 11:11:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:30:46.072 11:11:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:47.457 { 00:30:47.457 "results": [ 00:30:47.457 { 00:30:47.457 "job": "NVMe0n1", 00:30:47.457 "core_mask": "0x1", 00:30:47.457 "workload": "write", 00:30:47.457 "status": "finished", 00:30:47.457 "queue_depth": 128, 00:30:47.457 "io_size": 4096, 00:30:47.457 "runtime": 1.006479, 00:30:47.457 "iops": 23098.34581744875, 00:30:47.457 "mibps": 90.22791334940918, 00:30:47.457 "io_failed": 0, 00:30:47.457 "io_timeout": 0, 00:30:47.457 "avg_latency_us": 5528.58152849882, 00:30:47.457 "min_latency_us": 3339.2048112261946, 00:30:47.457 "max_latency_us": 17079.2114934848 00:30:47.457 } 00:30:47.457 ], 00:30:47.457 "core_count": 1 00:30:47.457 } 00:30:47.457 11:11:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:30:47.457 11:11:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:47.457 11:11:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:47.457 11:11:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:47.457 11:11:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:30:47.457 11:11:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 2005081 00:30:47.457 11:11:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@950 -- # '[' -z 2005081 ']' 00:30:47.457 11:11:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 2005081 00:30:47.457 11:11:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:30:47.457 11:11:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:47.457 11:11:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2005081 00:30:47.457 11:11:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:47.457 11:11:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:47.457 11:11:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2005081' 00:30:47.457 killing process with pid 2005081 00:30:47.457 11:11:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 2005081 00:30:47.457 11:11:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 2005081 00:30:47.457 11:11:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:47.457 11:11:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:47.457 11:11:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:47.457 11:11:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:47.457 11:11:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:47.457 11:11:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:47.457 11:11:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:47.457 11:11:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:47.457 11:11:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:30:47.457 11:11:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:47.457 11:11:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:30:47.457 11:11:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:30:47.457 11:11:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # sort -u 00:30:47.457 11:11:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # cat 00:30:47.457 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:30:47.457 [2024-10-09 11:11:04.646127] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 
00:30:47.457 [2024-10-09 11:11:04.646184] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2005081 ] 00:30:47.457 [2024-10-09 11:11:04.776426] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:47.457 [2024-10-09 11:11:04.807906] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:47.457 [2024-10-09 11:11:04.826306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:47.457 [2024-10-09 11:11:05.910580] bdev.c:4701:bdev_name_add: *ERROR*: Bdev name 0be4a702-133b-4f1c-b85b-4317dc4f442b already exists 00:30:47.457 [2024-10-09 11:11:05.910610] bdev.c:7846:bdev_register: *ERROR*: Unable to add uuid:0be4a702-133b-4f1c-b85b-4317dc4f442b alias for bdev NVMe1n1 00:30:47.457 [2024-10-09 11:11:05.910619] bdev_nvme.c:4483:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:30:47.457 Running I/O for 1 seconds... 00:30:47.457 23057.00 IOPS, 90.07 MiB/s 00:30:47.457 Latency(us) 00:30:47.457 [2024-10-09T09:11:07.459Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:47.457 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:30:47.457 NVMe0n1 : 1.01 23098.35 90.23 0.00 0.00 5528.58 3339.20 17079.21 00:30:47.457 [2024-10-09T09:11:07.459Z] =================================================================================================================== 00:30:47.457 [2024-10-09T09:11:07.459Z] Total : 23098.35 90.23 0.00 0.00 5528.58 3339.20 17079.21 00:30:47.457 Received shutdown signal, test time was about 1.000000 seconds 00:30:47.457 00:30:47.457 Latency(us) 00:30:47.457 [2024-10-09T09:11:07.459Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:47.457 [2024-10-09T09:11:07.459Z] =================================================================================================================== 00:30:47.457 [2024-10-09T09:11:07.459Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:47.457 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:30:47.457 11:11:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:47.457 11:11:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:30:47.457 11:11:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:30:47.457 11:11:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@514 -- # nvmfcleanup 00:30:47.457 11:11:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:30:47.457 11:11:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:47.457 11:11:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:30:47.457 11:11:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:47.457 11:11:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:47.457 rmmod nvme_tcp 00:30:47.458 rmmod nvme_fabrics 00:30:47.458 rmmod nvme_keyring 00:30:47.458 11:11:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:47.458 
11:11:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:30:47.458 11:11:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:30:47.458 11:11:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@515 -- # '[' -n 2004976 ']' 00:30:47.458 11:11:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # killprocess 2004976 00:30:47.458 11:11:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 2004976 ']' 00:30:47.458 11:11:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 2004976 00:30:47.458 11:11:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:30:47.458 11:11:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:47.458 11:11:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2004976 00:30:47.458 11:11:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:30:47.458 11:11:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:30:47.458 11:11:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2004976' 00:30:47.458 killing process with pid 2004976 00:30:47.458 11:11:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 2004976 00:30:47.458 11:11:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 2004976 00:30:47.719 11:11:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:30:47.719 11:11:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:30:47.719 11:11:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:30:47.719 11:11:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:30:47.719 11:11:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # iptables-save 00:30:47.719 11:11:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # iptables-restore 00:30:47.719 11:11:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:30:47.719 11:11:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:47.719 11:11:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:47.719 11:11:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:47.719 11:11:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:47.719 11:11:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:49.632 11:11:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:49.632 00:30:49.632 real 0m13.841s 00:30:49.632 user 0m16.358s 00:30:49.632 sys 0m6.340s 00:30:49.632 11:11:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:49.632 11:11:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:49.632 ************************************ 00:30:49.632 END TEST nvmf_multicontroller 00:30:49.632 
************************************ 00:30:49.893 11:11:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:30:49.893 11:11:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:30:49.893 11:11:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:49.893 11:11:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:49.893 ************************************ 00:30:49.893 START TEST nvmf_aer 00:30:49.893 ************************************ 00:30:49.893 11:11:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:30:49.893 * Looking for test storage... 00:30:49.893 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:49.893 11:11:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:49.893 11:11:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lcov --version 00:30:49.893 11:11:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:50.155 11:11:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:50.155 11:11:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:50.155 11:11:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:50.155 11:11:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:50.155 11:11:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:30:50.155 11:11:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:30:50.155 11:11:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:30:50.155 11:11:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:30:50.155 11:11:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:30:50.155 11:11:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:30:50.155 11:11:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:30:50.155 11:11:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:50.155 11:11:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:30:50.155 11:11:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:30:50.155 11:11:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:50.155 11:11:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:50.155 11:11:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:30:50.155 11:11:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:30:50.155 11:11:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:50.155 11:11:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:30:50.155 11:11:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:30:50.155 11:11:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:30:50.155 11:11:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:30:50.155 11:11:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:50.155 11:11:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:30:50.155 11:11:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:30:50.155 11:11:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:50.155 11:11:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:50.155 11:11:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:30:50.155 11:11:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:50.155 11:11:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:50.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:50.155 --rc genhtml_branch_coverage=1 00:30:50.155 --rc genhtml_function_coverage=1 00:30:50.155 --rc genhtml_legend=1 00:30:50.155 --rc geninfo_all_blocks=1 00:30:50.155 --rc geninfo_unexecuted_blocks=1 00:30:50.155 00:30:50.155 ' 00:30:50.155 11:11:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:50.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:50.155 --rc genhtml_branch_coverage=1 00:30:50.155 --rc genhtml_function_coverage=1 00:30:50.155 --rc genhtml_legend=1 00:30:50.155 --rc geninfo_all_blocks=1 00:30:50.155 --rc geninfo_unexecuted_blocks=1 00:30:50.155 00:30:50.155 ' 00:30:50.155 11:11:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:50.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:50.155 --rc genhtml_branch_coverage=1 00:30:50.155 --rc genhtml_function_coverage=1 00:30:50.155 --rc genhtml_legend=1 00:30:50.155 --rc geninfo_all_blocks=1 00:30:50.155 --rc geninfo_unexecuted_blocks=1 00:30:50.155 00:30:50.155 ' 00:30:50.155 11:11:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:50.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:50.155 --rc genhtml_branch_coverage=1 00:30:50.155 --rc genhtml_function_coverage=1 00:30:50.155 --rc genhtml_legend=1 00:30:50.155 --rc geninfo_all_blocks=1 00:30:50.155 --rc geninfo_unexecuted_blocks=1 00:30:50.155 00:30:50.155 ' 00:30:50.155 11:11:09 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:50.155 11:11:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:30:50.155 11:11:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:50.155 11:11:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:50.155 11:11:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:30:50.155 11:11:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:50.155 11:11:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:50.155 11:11:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:50.155 11:11:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:50.155 11:11:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:50.155 11:11:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:50.155 11:11:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:50.155 11:11:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:50.155 11:11:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:50.155 11:11:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:50.155 11:11:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:50.155 11:11:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:50.155 11:11:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:50.155 11:11:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:50.155 11:11:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:30:50.155 11:11:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:50.155 11:11:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:50.155 11:11:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:50.155 11:11:09 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:50.155 11:11:09 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:50.155 11:11:09 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:50.155 11:11:09 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:30:50.155 11:11:09 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:50.155 11:11:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:30:50.155 11:11:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:50.155 11:11:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:50.155 11:11:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:50.155 11:11:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:50.155 11:11:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:50.155 11:11:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:50.155 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:50.155 11:11:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:50.155 11:11:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:50.155 11:11:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:50.155 11:11:09 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:30:50.155 11:11:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:30:50.155 11:11:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:50.155 11:11:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # prepare_net_devs 00:30:50.155 11:11:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@436 -- # local -g is_hw=no 00:30:50.155 11:11:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # remove_spdk_ns 00:30:50.155 11:11:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:50.155 11:11:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:50.155 11:11:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:50.155 11:11:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:30:50.155 11:11:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # 
gather_supported_nvmf_pci_devs 00:30:50.155 11:11:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:30:50.156 11:11:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:58.403 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:58.403 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:30:58.403 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:58.403 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:58.403 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:58.403 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:58.403 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:58.403 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:30:58.403 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:58.403 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:30:58.403 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:30:58.403 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:30:58.403 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:30:58.403 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:30:58.403 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:30:58.403 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:58.403 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:58.403 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:58.403 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:58.403 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:58.403 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:58.403 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:58.403 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:58.403 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:58.403 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:58.403 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:58.404 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:58.404 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:58.404 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:58.404 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:58.404 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:58.404 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:30:58.404 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:58.404 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:58.404 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:58.404 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:58.404 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:58.404 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:58.404 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:58.404 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:58.404 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:58.404 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:58.404 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:58.404 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:58.404 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:58.404 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:58.404 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:58.404 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:58.404 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:58.404 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:58.404 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:58.404 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:58.404 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:58.404 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:58.404 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:58.404 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:58.404 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:58.404 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:58.404 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:58.404 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:58.404 Found net devices under 0000:31:00.0: cvl_0_0 00:30:58.404 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:58.404 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:58.404 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:58.404 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:58.404 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:58.404 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:58.404 11:11:17 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:58.404 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:58.404 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:58.404 Found net devices under 0000:31:00.1: cvl_0_1 00:30:58.404 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:58.404 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:30:58.404 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # is_hw=yes 00:30:58.404 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:30:58.404 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:30:58.404 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:30:58.404 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:58.404 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:58.404 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:58.404 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:58.404 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:58.404 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:58.404 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:58.404 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:58.404 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:58.404 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:58.404 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:58.404 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:58.404 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:58.404 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:58.404 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:58.404 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:58.404 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:58.404 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:58.404 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:58.404 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:58.404 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:58.404 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:58.404 
11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:58.404 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:58.404 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.667 ms 00:30:58.404 00:30:58.404 --- 10.0.0.2 ping statistics --- 00:30:58.404 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:58.404 rtt min/avg/max/mdev = 0.667/0.667/0.667/0.000 ms 00:30:58.404 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:58.404 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:58.404 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.312 ms 00:30:58.404 00:30:58.404 --- 10.0.0.1 ping statistics --- 00:30:58.404 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:58.404 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:30:58.404 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:58.404 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # return 0 00:30:58.404 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:30:58.404 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:58.404 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:30:58.404 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:30:58.404 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:58.404 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:30:58.404 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:30:58.404 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:30:58.404 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:30:58.404 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:58.404 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:58.404 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # nvmfpid=2009913 00:30:58.404 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # waitforlisten 2009913 00:30:58.404 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:58.404 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 2009913 ']' 00:30:58.404 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:58.404 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:58.404 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:58.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:58.404 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:58.404 11:11:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:58.404 [2024-10-09 11:11:17.649031] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 
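For orientation: the records above show nvmfappstart bringing up the SPDK target inside the server-side namespace and then blocking in waitforlisten until the app's RPC socket answers. A minimal sketch of the equivalent shell, assuming the repo path from this run and the default /var/tmp/spdk.sock RPC socket (the polling loop approximates what waitforlisten does; it is not the harness's exact code):

    # Start the NVMe-oF target in the target namespace (flags as traced:
    # shm id 0, all tracepoint groups enabled, 4-core mask).
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Block until the app answers on its UNIX-domain RPC socket.
    until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
          -t 1 -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done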
00:30:58.404 [2024-10-09 11:11:17.649096] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:58.404 [2024-10-09 11:11:17.790508] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:58.404 [2024-10-09 11:11:17.821841] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:58.404 [2024-10-09 11:11:17.840264] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:58.404 [2024-10-09 11:11:17.840295] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:58.404 [2024-10-09 11:11:17.840302] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:58.404 [2024-10-09 11:11:17.840309] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:58.404 [2024-10-09 11:11:17.840314] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:58.404 [2024-10-09 11:11:17.841836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:58.404 [2024-10-09 11:11:17.841949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:58.404 [2024-10-09 11:11:17.842067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:58.404 [2024-10-09 11:11:17.842067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:58.668 11:11:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:58.668 11:11:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:30:58.668 11:11:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:30:58.668 11:11:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:58.668 11:11:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:58.668 11:11:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:58.668 11:11:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:58.668 11:11:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:58.668 11:11:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:58.668 [2024-10-09 11:11:18.511703] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:58.668 11:11:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:58.668 11:11:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:30:58.668 11:11:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:58.668 11:11:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:58.668 Malloc0 00:30:58.668 11:11:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:58.668 11:11:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:30:58.668 11:11:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 
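The rpc_cmd calls traced here, together with the add_ns and add_listener calls that follow, are the entire target configuration for this test; rpc_cmd is the harness wrapper around scripts/rpc.py. Replayed as plain rpc.py invocations, with every flag copied verbatim from the trace (-u 8192 and -o come through from NVMF_TRANSPORT_OPTS), the sequence is roughly:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Create the TCP transport.
    $rpc nvmf_create_transport -t tcp -o -u 8192
    # 64 MiB malloc bdev with 512-byte blocks to back namespace 1.
    $rpc bdev_malloc_create 64 512 --name Malloc0
    # Subsystem: any host allowed (-a), fixed serial (-s), at most 2 namespaces (-m 2).
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The two-namespace ceiling is what the test exercises: adding Malloc1 as nsid 2 later in the trace is what raises the namespace-attribute-changed AER that the aer binary waits for.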
00:30:58.668 11:11:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:58.668 11:11:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:58.668 11:11:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:58.668 11:11:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:58.668 11:11:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:58.668 11:11:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:58.668 11:11:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:58.668 11:11:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:58.668 11:11:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:58.668 [2024-10-09 11:11:18.579679] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:58.668 11:11:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:58.668 11:11:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:30:58.668 11:11:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:58.668 11:11:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:58.668 [ 00:30:58.668 { 00:30:58.668 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:58.668 "subtype": "Discovery", 00:30:58.668 "listen_addresses": [], 00:30:58.668 "allow_any_host": true, 00:30:58.668 "hosts": [] 00:30:58.668 }, 00:30:58.668 { 00:30:58.668 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:58.668 "subtype": "NVMe", 00:30:58.668 "listen_addresses": [ 00:30:58.668 { 00:30:58.668 "trtype": "TCP", 00:30:58.668 "adrfam": "IPv4", 00:30:58.668 "traddr": "10.0.0.2", 00:30:58.668 "trsvcid": "4420" 00:30:58.668 } 00:30:58.668 ], 00:30:58.668 "allow_any_host": true, 00:30:58.668 "hosts": [], 00:30:58.668 "serial_number": "SPDK00000000000001", 00:30:58.668 "model_number": "SPDK bdev Controller", 00:30:58.668 "max_namespaces": 2, 00:30:58.668 "min_cntlid": 1, 00:30:58.668 "max_cntlid": 65519, 00:30:58.668 "namespaces": [ 00:30:58.668 { 00:30:58.668 "nsid": 1, 00:30:58.668 "bdev_name": "Malloc0", 00:30:58.668 "name": "Malloc0", 00:30:58.668 "nguid": "5F39E7C69AE74A068EC54B3C2D8B20BA", 00:30:58.668 "uuid": "5f39e7c6-9ae7-4a06-8ec5-4b3c2d8b20ba" 00:30:58.668 } 00:30:58.668 ] 00:30:58.668 } 00:30:58.668 ] 00:30:58.668 11:11:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:58.668 11:11:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:30:58.668 11:11:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:30:58.668 11:11:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=2010206 00:30:58.668 11:11:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:30:58.668 11:11:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:30:58.668 11:11:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:30:58.668 11:11:18 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:58.668 11:11:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:30:58.668 11:11:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:30:58.668 11:11:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:30:58.927 11:11:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:58.927 11:11:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:30:58.927 11:11:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:30:58.927 11:11:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:30:58.927 11:11:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:58.927 11:11:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 2 -lt 200 ']' 00:30:58.927 11:11:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=3 00:30:58.927 11:11:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:30:58.928 11:11:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:58.928 11:11:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:58.928 11:11:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:30:58.928 11:11:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:30:58.928 11:11:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:58.928 11:11:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:59.188 Malloc1 00:30:59.188 11:11:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.188 11:11:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:30:59.188 11:11:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.188 11:11:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:59.188 11:11:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.188 11:11:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:30:59.188 11:11:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.188 11:11:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:59.188 [ 00:30:59.188 { 00:30:59.188 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:59.188 "subtype": "Discovery", 00:30:59.188 "listen_addresses": [], 00:30:59.188 "allow_any_host": true, 00:30:59.188 "hosts": [] 00:30:59.188 }, 00:30:59.188 { 00:30:59.188 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:59.188 "subtype": "NVMe", 00:30:59.188 "listen_addresses": [ 00:30:59.188 { 00:30:59.188 "trtype": "TCP", 00:30:59.188 "adrfam": "IPv4", 00:30:59.188 "traddr": "10.0.0.2", 00:30:59.188 "trsvcid": "4420" 00:30:59.188 } 00:30:59.188 ], 00:30:59.188 "allow_any_host": true, 00:30:59.188 "hosts": [], 00:30:59.188 "serial_number": "SPDK00000000000001", 00:30:59.188 "model_number": "SPDK bdev Controller", 00:30:59.188 "max_namespaces": 2, 00:30:59.188 "min_cntlid": 1, 
00:30:59.188 "max_cntlid": 65519, 00:30:59.188 "namespaces": [ 00:30:59.188 { 00:30:59.188 "nsid": 1, 00:30:59.188 "bdev_name": "Malloc0", 00:30:59.188 "name": "Malloc0", 00:30:59.188 "nguid": "5F39E7C69AE74A068EC54B3C2D8B20BA", 00:30:59.188 "uuid": "5f39e7c6-9ae7-4a06-8ec5-4b3c2d8b20ba" 00:30:59.188 }, 00:30:59.188 { 00:30:59.188 "nsid": 2, 00:30:59.188 "bdev_name": "Malloc1", 00:30:59.188 "name": "Malloc1", 00:30:59.188 Asynchronous Event Request test 00:30:59.188 Attaching to 10.0.0.2 00:30:59.188 Attached to 10.0.0.2 00:30:59.188 Registering asynchronous event callbacks... 00:30:59.188 Starting namespace attribute notice tests for all controllers... 00:30:59.188 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:30:59.188 aer_cb - Changed Namespace 00:30:59.188 Cleaning up... 00:30:59.188 "nguid": "779DA30CA7984938A7FA3F077C5F58E2", 00:30:59.188 "uuid": "779da30c-a798-4938-a7fa-3f077c5f58e2" 00:30:59.188 } 00:30:59.188 ] 00:30:59.188 } 00:30:59.188 ] 00:30:59.188 11:11:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.188 11:11:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 2010206 00:30:59.188 11:11:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:30:59.188 11:11:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.188 11:11:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:59.188 11:11:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.188 11:11:19 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:30:59.188 11:11:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.188 11:11:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:59.188 11:11:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.188 11:11:19 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:59.188 11:11:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.188 11:11:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:59.188 11:11:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.188 11:11:19 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:30:59.188 11:11:19 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:30:59.188 11:11:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@514 -- # nvmfcleanup 00:30:59.188 11:11:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:30:59.188 11:11:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:59.188 11:11:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:30:59.188 11:11:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:59.188 11:11:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:59.188 rmmod nvme_tcp 00:30:59.188 rmmod nvme_fabrics 00:30:59.188 rmmod nvme_keyring 00:30:59.188 11:11:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:59.188 11:11:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:30:59.188 11:11:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:30:59.188 11:11:19 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@515 -- # '[' -n 2009913 ']' 00:30:59.188 11:11:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # killprocess 2009913 00:30:59.188 11:11:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@950 -- # '[' -z 2009913 ']' 00:30:59.188 11:11:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 2009913 00:30:59.188 11:11:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # uname 00:30:59.188 11:11:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:59.188 11:11:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2009913 00:30:59.188 11:11:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:59.188 11:11:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:59.188 11:11:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2009913' 00:30:59.188 killing process with pid 2009913 00:30:59.188 11:11:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # kill 2009913 00:30:59.188 11:11:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 2009913 00:30:59.448 11:11:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:30:59.448 11:11:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:30:59.448 11:11:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:30:59.448 11:11:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:30:59.448 11:11:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # iptables-save 00:30:59.448 11:11:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:30:59.448 11:11:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # iptables-restore 00:30:59.448 11:11:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:59.448 11:11:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:59.448 11:11:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:59.448 11:11:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:59.448 11:11:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:01.359 11:11:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:01.620 00:31:01.620 real 0m11.646s 00:31:01.620 user 0m8.076s 00:31:01.620 sys 0m6.177s 00:31:01.620 11:11:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:01.620 11:11:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:01.620 ************************************ 00:31:01.620 END TEST nvmf_aer 00:31:01.620 ************************************ 00:31:01.620 11:11:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:31:01.620 11:11:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:01.620 11:11:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:01.620 11:11:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:01.620 
************************************ 00:31:01.620 START TEST nvmf_async_init 00:31:01.620 ************************************ 00:31:01.620 11:11:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:31:01.620 * Looking for test storage... 00:31:01.620 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:01.620 11:11:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:01.620 11:11:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lcov --version 00:31:01.620 11:11:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:01.881 11:11:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:01.881 11:11:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:01.881 11:11:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:01.881 11:11:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:01.881 11:11:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:31:01.881 11:11:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:31:01.881 11:11:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:31:01.881 11:11:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:31:01.881 11:11:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:31:01.881 11:11:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:31:01.881 11:11:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:31:01.881 11:11:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:01.881 11:11:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:31:01.881 11:11:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:31:01.881 11:11:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:01.881 11:11:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:01.881 11:11:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:31:01.881 11:11:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:31:01.881 11:11:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:01.881 11:11:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:31:01.881 11:11:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:31:01.881 11:11:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:31:01.881 11:11:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:31:01.881 11:11:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:01.881 11:11:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:31:01.881 11:11:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:31:01.881 11:11:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:01.881 11:11:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:01.881 11:11:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:31:01.881 11:11:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:01.882 11:11:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:01.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:01.882 --rc genhtml_branch_coverage=1 00:31:01.882 --rc genhtml_function_coverage=1 00:31:01.882 --rc genhtml_legend=1 00:31:01.882 --rc geninfo_all_blocks=1 00:31:01.882 --rc geninfo_unexecuted_blocks=1 00:31:01.882 00:31:01.882 ' 00:31:01.882 11:11:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:01.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:01.882 --rc genhtml_branch_coverage=1 00:31:01.882 --rc genhtml_function_coverage=1 00:31:01.882 --rc genhtml_legend=1 00:31:01.882 --rc geninfo_all_blocks=1 00:31:01.882 --rc geninfo_unexecuted_blocks=1 00:31:01.882 00:31:01.882 ' 00:31:01.882 11:11:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:01.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:01.882 --rc genhtml_branch_coverage=1 00:31:01.882 --rc genhtml_function_coverage=1 00:31:01.882 --rc genhtml_legend=1 00:31:01.882 --rc geninfo_all_blocks=1 00:31:01.882 --rc geninfo_unexecuted_blocks=1 00:31:01.882 00:31:01.882 ' 00:31:01.882 11:11:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:01.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:01.882 --rc genhtml_branch_coverage=1 00:31:01.882 --rc genhtml_function_coverage=1 00:31:01.882 --rc genhtml_legend=1 00:31:01.882 --rc geninfo_all_blocks=1 00:31:01.882 --rc geninfo_unexecuted_blocks=1 00:31:01.882 00:31:01.882 ' 00:31:01.882 11:11:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:01.882 11:11:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:31:01.882 11:11:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:01.882 11:11:21 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:01.882 11:11:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:01.882 11:11:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:01.882 11:11:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:01.882 11:11:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:01.882 11:11:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:01.882 11:11:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:01.882 11:11:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:01.882 11:11:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:01.882 11:11:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:01.882 11:11:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:01.882 11:11:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:01.882 11:11:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:01.882 11:11:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:01.882 11:11:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:01.882 11:11:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:01.882 11:11:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:31:01.882 11:11:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:01.882 11:11:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:01.882 11:11:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:01.882 11:11:21 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:01.882 11:11:21 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:01.882 11:11:21 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:01.882 11:11:21 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:31:01.882 11:11:21 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:01.882 11:11:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:31:01.882 11:11:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:01.882 11:11:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:01.882 11:11:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:01.882 11:11:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:01.882 11:11:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:01.882 11:11:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:01.882 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:01.882 11:11:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:01.882 11:11:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:01.882 11:11:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:01.882 11:11:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:31:01.882 11:11:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:31:01.882 11:11:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:31:01.882 11:11:21 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:31:01.882 11:11:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:31:01.882 11:11:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:31:01.882 11:11:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=8144524d135542969fbee4ae221ae507 00:31:01.882 11:11:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:31:01.882 11:11:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:31:01.882 11:11:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:01.882 11:11:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # prepare_net_devs 00:31:01.882 11:11:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@436 -- # local -g is_hw=no 00:31:01.882 11:11:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # remove_spdk_ns 00:31:01.882 11:11:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:01.882 11:11:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:01.882 11:11:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:01.882 11:11:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:31:01.882 11:11:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:31:01.882 11:11:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:31:01.882 11:11:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:10.036 11:11:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:10.036 11:11:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:31:10.036 11:11:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:10.036 11:11:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:10.036 11:11:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:10.036 11:11:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:10.036 11:11:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:10.036 11:11:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:31:10.036 11:11:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:10.036 11:11:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:31:10.036 11:11:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:31:10.036 11:11:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:31:10.036 11:11:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:31:10.036 11:11:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:31:10.036 11:11:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:31:10.036 11:11:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:10.036 11:11:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:10.036 11:11:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:10.036 11:11:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:10.036 11:11:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:10.036 11:11:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:10.036 11:11:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:10.036 11:11:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:10.036 11:11:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:10.036 11:11:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:10.036 11:11:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:10.036 11:11:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:10.036 11:11:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:10.036 11:11:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:10.036 11:11:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:10.036 11:11:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:10.036 11:11:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:10.036 11:11:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:10.036 11:11:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:10.036 11:11:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:10.036 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:10.036 11:11:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:10.036 11:11:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:10.036 11:11:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:10.036 11:11:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:10.036 11:11:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:10.036 11:11:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:10.036 11:11:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:10.036 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:10.036 11:11:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:10.036 11:11:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:10.036 11:11:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:10.036 11:11:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:10.036 11:11:28 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:10.036 11:11:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:10.036 11:11:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:10.036 11:11:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:10.036 11:11:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:10.036 11:11:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:10.036 11:11:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:10.036 11:11:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:10.036 11:11:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:10.036 11:11:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:10.036 11:11:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:10.036 11:11:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:10.036 Found net devices under 0000:31:00.0: cvl_0_0 00:31:10.036 11:11:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:10.036 11:11:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:10.036 11:11:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:10.036 11:11:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:10.036 11:11:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:10.036 11:11:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:10.036 11:11:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:10.036 11:11:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:10.036 11:11:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:10.036 Found net devices under 0000:31:00.1: cvl_0_1 00:31:10.036 11:11:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:10.036 11:11:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:31:10.036 11:11:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # is_hw=yes 00:31:10.036 11:11:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:31:10.036 11:11:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:31:10.036 11:11:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:31:10.036 11:11:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:10.036 11:11:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:10.036 11:11:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:10.036 11:11:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:10.036 11:11:28 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:10.036 11:11:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:10.036 11:11:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:10.036 11:11:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:10.036 11:11:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:10.036 11:11:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:10.036 11:11:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:10.036 11:11:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:10.036 11:11:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:10.036 11:11:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:10.036 11:11:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:10.036 11:11:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:10.036 11:11:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:10.036 11:11:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:10.036 11:11:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:10.036 11:11:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:10.036 11:11:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:10.037 11:11:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:10.037 11:11:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:10.037 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:10.037 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.542 ms 00:31:10.037 00:31:10.037 --- 10.0.0.2 ping statistics --- 00:31:10.037 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:10.037 rtt min/avg/max/mdev = 0.542/0.542/0.542/0.000 ms 00:31:10.037 11:11:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:10.037 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:10.037 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.268 ms 00:31:10.037 00:31:10.037 --- 10.0.0.1 ping statistics --- 00:31:10.037 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:10.037 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:31:10.037 11:11:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:10.037 11:11:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # return 0 00:31:10.037 11:11:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:31:10.037 11:11:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:10.037 11:11:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:31:10.037 11:11:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:31:10.037 11:11:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:10.037 11:11:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:31:10.037 11:11:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:31:10.037 11:11:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:31:10.037 11:11:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:31:10.037 11:11:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:10.037 11:11:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:10.037 11:11:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # nvmfpid=2014600 00:31:10.037 11:11:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # waitforlisten 2014600 00:31:10.037 11:11:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:31:10.037 11:11:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 2014600 ']' 00:31:10.037 11:11:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:10.037 11:11:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:10.037 11:11:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:10.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:10.037 11:11:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:10.037 11:11:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:10.037 [2024-10-09 11:11:29.363199] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:31:10.037 [2024-10-09 11:11:29.363249] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:10.037 [2024-10-09 11:11:29.499350] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:31:10.037 [2024-10-09 11:11:29.530577] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:10.037 [2024-10-09 11:11:29.547392] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:10.037 [2024-10-09 11:11:29.547422] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:10.037 [2024-10-09 11:11:29.547430] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:10.037 [2024-10-09 11:11:29.547437] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:10.037 [2024-10-09 11:11:29.547443] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:10.037 [2024-10-09 11:11:29.548018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:10.297 11:11:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:10.297 11:11:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:31:10.297 11:11:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:31:10.297 11:11:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:10.297 11:11:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:10.297 11:11:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:10.297 11:11:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:31:10.297 11:11:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.297 11:11:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:10.297 [2024-10-09 11:11:30.207732] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:10.297 11:11:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.297 11:11:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:31:10.297 11:11:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.297 11:11:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:10.297 null0 00:31:10.297 11:11:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.297 11:11:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:31:10.297 11:11:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.297 11:11:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:10.297 11:11:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.297 11:11:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:31:10.297 11:11:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.297 11:11:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:10.297 11:11:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.297 11:11:30 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 8144524d135542969fbee4ae221ae507 00:31:10.297 11:11:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.297 11:11:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:10.297 11:11:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.297 11:11:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:10.297 11:11:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.297 11:11:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:10.297 [2024-10-09 11:11:30.247881] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:10.297 11:11:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.297 11:11:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:31:10.298 11:11:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.298 11:11:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:10.558 nvme0n1 00:31:10.558 11:11:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.558 11:11:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:31:10.558 11:11:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.558 11:11:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:10.558 [ 00:31:10.558 { 00:31:10.558 "name": "nvme0n1", 00:31:10.558 "aliases": [ 00:31:10.558 "8144524d-1355-4296-9fbe-e4ae221ae507" 00:31:10.558 ], 00:31:10.558 "product_name": "NVMe disk", 00:31:10.558 "block_size": 512, 00:31:10.558 "num_blocks": 2097152, 00:31:10.558 "uuid": "8144524d-1355-4296-9fbe-e4ae221ae507", 00:31:10.558 "numa_id": 0, 00:31:10.558 "assigned_rate_limits": { 00:31:10.558 "rw_ios_per_sec": 0, 00:31:10.558 "rw_mbytes_per_sec": 0, 00:31:10.558 "r_mbytes_per_sec": 0, 00:31:10.558 "w_mbytes_per_sec": 0 00:31:10.558 }, 00:31:10.558 "claimed": false, 00:31:10.558 "zoned": false, 00:31:10.558 "supported_io_types": { 00:31:10.558 "read": true, 00:31:10.558 "write": true, 00:31:10.558 "unmap": false, 00:31:10.558 "flush": true, 00:31:10.558 "reset": true, 00:31:10.558 "nvme_admin": true, 00:31:10.558 "nvme_io": true, 00:31:10.558 "nvme_io_md": false, 00:31:10.558 "write_zeroes": true, 00:31:10.558 "zcopy": false, 00:31:10.558 "get_zone_info": false, 00:31:10.558 "zone_management": false, 00:31:10.558 "zone_append": false, 00:31:10.558 "compare": true, 00:31:10.558 "compare_and_write": true, 00:31:10.558 "abort": true, 00:31:10.558 "seek_hole": false, 00:31:10.558 "seek_data": false, 00:31:10.558 "copy": true, 00:31:10.558 "nvme_iov_md": false 00:31:10.558 }, 00:31:10.558 "memory_domains": [ 00:31:10.558 { 00:31:10.558 "dma_device_id": "system", 00:31:10.558 "dma_device_type": 1 00:31:10.558 } 00:31:10.558 ], 00:31:10.558 "driver_specific": { 00:31:10.558 "nvme": [ 00:31:10.558 { 00:31:10.558 "trid": { 00:31:10.558 
"trtype": "TCP", 00:31:10.558 "adrfam": "IPv4", 00:31:10.558 "traddr": "10.0.0.2", 00:31:10.558 "trsvcid": "4420", 00:31:10.558 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:31:10.558 }, 00:31:10.558 "ctrlr_data": { 00:31:10.558 "cntlid": 1, 00:31:10.558 "vendor_id": "0x8086", 00:31:10.558 "model_number": "SPDK bdev Controller", 00:31:10.558 "serial_number": "00000000000000000000", 00:31:10.558 "firmware_revision": "25.01", 00:31:10.558 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:10.558 "oacs": { 00:31:10.558 "security": 0, 00:31:10.558 "format": 0, 00:31:10.558 "firmware": 0, 00:31:10.558 "ns_manage": 0 00:31:10.558 }, 00:31:10.558 "multi_ctrlr": true, 00:31:10.558 "ana_reporting": false 00:31:10.558 }, 00:31:10.558 "vs": { 00:31:10.558 "nvme_version": "1.3" 00:31:10.558 }, 00:31:10.558 "ns_data": { 00:31:10.558 "id": 1, 00:31:10.558 "can_share": true 00:31:10.558 } 00:31:10.558 } 00:31:10.558 ], 00:31:10.558 "mp_policy": "active_passive" 00:31:10.558 } 00:31:10.558 } 00:31:10.558 ] 00:31:10.558 11:11:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.558 11:11:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:31:10.558 11:11:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.558 11:11:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:10.558 [2024-10-09 11:11:30.497447] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:10.558 [2024-10-09 11:11:30.497517] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe03450 (9): Bad file descriptor 00:31:10.819 [2024-10-09 11:11:30.629575] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:31:10.819 11:11:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.819 11:11:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:31:10.819 11:11:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.819 11:11:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:10.819 [ 00:31:10.819 { 00:31:10.819 "name": "nvme0n1", 00:31:10.819 "aliases": [ 00:31:10.819 "8144524d-1355-4296-9fbe-e4ae221ae507" 00:31:10.819 ], 00:31:10.819 "product_name": "NVMe disk", 00:31:10.819 "block_size": 512, 00:31:10.819 "num_blocks": 2097152, 00:31:10.819 "uuid": "8144524d-1355-4296-9fbe-e4ae221ae507", 00:31:10.819 "numa_id": 0, 00:31:10.819 "assigned_rate_limits": { 00:31:10.819 "rw_ios_per_sec": 0, 00:31:10.819 "rw_mbytes_per_sec": 0, 00:31:10.819 "r_mbytes_per_sec": 0, 00:31:10.819 "w_mbytes_per_sec": 0 00:31:10.819 }, 00:31:10.819 "claimed": false, 00:31:10.819 "zoned": false, 00:31:10.819 "supported_io_types": { 00:31:10.819 "read": true, 00:31:10.819 "write": true, 00:31:10.819 "unmap": false, 00:31:10.819 "flush": true, 00:31:10.819 "reset": true, 00:31:10.819 "nvme_admin": true, 00:31:10.819 "nvme_io": true, 00:31:10.819 "nvme_io_md": false, 00:31:10.819 "write_zeroes": true, 00:31:10.819 "zcopy": false, 00:31:10.819 "get_zone_info": false, 00:31:10.819 "zone_management": false, 00:31:10.819 "zone_append": false, 00:31:10.819 "compare": true, 00:31:10.819 "compare_and_write": true, 00:31:10.819 "abort": true, 00:31:10.819 "seek_hole": false, 00:31:10.819 "seek_data": false, 00:31:10.819 "copy": true, 00:31:10.819 "nvme_iov_md": false 00:31:10.819 }, 00:31:10.819 "memory_domains": [ 00:31:10.819 { 00:31:10.819 "dma_device_id": "system", 00:31:10.819 "dma_device_type": 1 00:31:10.819 } 00:31:10.819 ], 00:31:10.819 "driver_specific": { 00:31:10.819 "nvme": [ 00:31:10.819 { 00:31:10.819 "trid": { 00:31:10.819 "trtype": "TCP", 00:31:10.819 "adrfam": "IPv4", 00:31:10.819 "traddr": "10.0.0.2", 00:31:10.819 "trsvcid": "4420", 00:31:10.819 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:31:10.819 }, 00:31:10.819 "ctrlr_data": { 00:31:10.819 "cntlid": 2, 00:31:10.819 "vendor_id": "0x8086", 00:31:10.819 "model_number": "SPDK bdev Controller", 00:31:10.819 "serial_number": "00000000000000000000", 00:31:10.819 "firmware_revision": "25.01", 00:31:10.819 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:10.819 "oacs": { 00:31:10.819 "security": 0, 00:31:10.819 "format": 0, 00:31:10.819 "firmware": 0, 00:31:10.819 "ns_manage": 0 00:31:10.819 }, 00:31:10.819 "multi_ctrlr": true, 00:31:10.819 "ana_reporting": false 00:31:10.819 }, 00:31:10.819 "vs": { 00:31:10.819 "nvme_version": "1.3" 00:31:10.819 }, 00:31:10.819 "ns_data": { 00:31:10.819 "id": 1, 00:31:10.819 "can_share": true 00:31:10.819 } 00:31:10.819 } 00:31:10.819 ], 00:31:10.819 "mp_policy": "active_passive" 00:31:10.819 } 00:31:10.819 } 00:31:10.819 ] 00:31:10.819 11:11:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.819 11:11:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:10.819 11:11:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.819 11:11:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:10.819 11:11:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
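The JSON above is how the test proves the reset actually tore the association down and rebuilt it: the namespace, NGUID, and block geometry are unchanged, but cntlid advanced from 1 to 2 because the reconnect created a new controller. A one-liner to pull that field out, assuming rpc.py and jq are available:

    # Controller ID for the namespace behind nvme0n1 (1 pre-reset, 2 post-reset in this run).
    rpc.py bdev_get_bdevs -b nvme0n1 | jq '.[0].driver_specific.nvme[0].ctrlr_data.cntlid'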
00:31:10.819 11:11:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:31:10.819 11:11:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.Ar5P9WPZDD 00:31:10.819 11:11:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:31:10.819 11:11:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.Ar5P9WPZDD 00:31:10.819 11:11:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.Ar5P9WPZDD 00:31:10.819 11:11:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.819 11:11:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:10.819 11:11:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.819 11:11:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:31:10.819 11:11:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.819 11:11:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:10.819 11:11:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.819 11:11:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:31:10.819 11:11:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.819 11:11:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:10.819 [2024-10-09 11:11:30.693629] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:31:10.819 [2024-10-09 11:11:30.693753] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:10.819 11:11:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.819 11:11:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:31:10.819 11:11:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.819 11:11:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:10.819 11:11:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.819 11:11:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:31:10.819 11:11:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.819 11:11:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:10.819 [2024-10-09 11:11:30.713653] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:10.819 nvme0n1 00:31:10.819 11:11:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.819 11:11:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:31:10.819 11:11:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.819 11:11:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:10.819 [ 00:31:10.819 { 00:31:10.819 "name": "nvme0n1", 00:31:10.819 "aliases": [ 00:31:10.819 "8144524d-1355-4296-9fbe-e4ae221ae507" 00:31:10.819 ], 00:31:10.819 "product_name": "NVMe disk", 00:31:10.819 "block_size": 512, 00:31:10.819 "num_blocks": 2097152, 00:31:10.819 "uuid": "8144524d-1355-4296-9fbe-e4ae221ae507", 00:31:10.819 "numa_id": 0, 00:31:10.819 "assigned_rate_limits": { 00:31:10.819 "rw_ios_per_sec": 0, 00:31:10.819 "rw_mbytes_per_sec": 0, 00:31:10.819 "r_mbytes_per_sec": 0, 00:31:10.819 "w_mbytes_per_sec": 0 00:31:10.819 }, 00:31:10.819 "claimed": false, 00:31:10.819 "zoned": false, 00:31:10.819 "supported_io_types": { 00:31:10.819 "read": true, 00:31:10.819 "write": true, 00:31:10.819 "unmap": false, 00:31:10.819 "flush": true, 00:31:10.819 "reset": true, 00:31:10.819 "nvme_admin": true, 00:31:10.819 "nvme_io": true, 00:31:10.819 "nvme_io_md": false, 00:31:10.819 "write_zeroes": true, 00:31:10.819 "zcopy": false, 00:31:10.819 "get_zone_info": false, 00:31:10.819 "zone_management": false, 00:31:10.819 "zone_append": false, 00:31:10.819 "compare": true, 00:31:10.819 "compare_and_write": true, 00:31:10.819 "abort": true, 00:31:10.819 "seek_hole": false, 00:31:10.819 "seek_data": false, 00:31:10.819 "copy": true, 00:31:10.819 "nvme_iov_md": false 00:31:10.819 }, 00:31:10.819 "memory_domains": [ 00:31:10.819 { 00:31:10.819 "dma_device_id": "system", 00:31:10.819 "dma_device_type": 1 00:31:10.819 } 00:31:10.819 ], 00:31:10.819 "driver_specific": { 00:31:10.819 "nvme": [ 00:31:10.819 { 00:31:10.819 "trid": { 00:31:10.819 "trtype": "TCP", 00:31:10.819 "adrfam": "IPv4", 00:31:10.819 "traddr": "10.0.0.2", 00:31:10.819 "trsvcid": "4421", 00:31:10.819 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:31:10.819 }, 00:31:10.819 "ctrlr_data": { 00:31:10.819 "cntlid": 3, 00:31:10.819 "vendor_id": "0x8086", 00:31:10.819 "model_number": "SPDK bdev Controller", 00:31:10.819 "serial_number": "00000000000000000000", 00:31:10.819 "firmware_revision": "25.01", 00:31:10.819 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:10.819 "oacs": { 00:31:10.819 "security": 0, 00:31:10.819 "format": 0, 00:31:10.819 "firmware": 0, 00:31:10.819 "ns_manage": 0 00:31:10.819 }, 00:31:10.819 "multi_ctrlr": true, 00:31:10.819 "ana_reporting": false 00:31:10.819 }, 00:31:10.819 "vs": { 00:31:10.819 "nvme_version": "1.3" 00:31:10.819 }, 00:31:10.820 "ns_data": { 00:31:10.820 "id": 1, 00:31:10.820 "can_share": true 00:31:10.820 } 00:31:10.820 } 00:31:10.820 ], 00:31:10.820 "mp_policy": "active_passive" 00:31:10.820 } 00:31:10.820 } 00:31:10.820 ] 00:31:10.820 11:11:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.820 11:11:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:10.820 11:11:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.820 11:11:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:10.820 11:11:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.820 11:11:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.Ar5P9WPZDD 00:31:10.820 11:11:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
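The secure-channel leg that just finished is the same attach flow with a TLS pre-shared key layered on top; both the listener and the attach path still print "TLS support is considered experimental". Condensed, under the same rpc.py assumption, using the interchange-format PSK the test echoed into its mktemp file:

    # Stage the PSK with key-file permissions and register it in the keyring.
    key_path=$(mktemp)
    echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$key_path"
    chmod 0600 "$key_path"
    rpc.py keyring_file_add_key key0 "$key_path"

    # Target: close the subsystem to unknown hosts, open a TLS listener on 4421,
    # and bind host1 to the key.
    rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0

    # Host: attach with the matching hostnqn and PSK (cntlid becomes 3 above).
    rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0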
00:31:10.820 11:11:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:31:10.820 11:11:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@514 -- # nvmfcleanup 00:31:10.820 11:11:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:31:10.820 11:11:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:10.820 11:11:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:31:10.820 11:11:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:10.820 11:11:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:11.080 rmmod nvme_tcp 00:31:11.080 rmmod nvme_fabrics 00:31:11.080 rmmod nvme_keyring 00:31:11.080 11:11:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:11.080 11:11:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:31:11.080 11:11:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:31:11.080 11:11:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@515 -- # '[' -n 2014600 ']' 00:31:11.080 11:11:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # killprocess 2014600 00:31:11.080 11:11:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 2014600 ']' 00:31:11.080 11:11:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 2014600 00:31:11.080 11:11:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:31:11.080 11:11:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:11.080 11:11:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2014600 00:31:11.080 11:11:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:11.080 11:11:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:11.080 11:11:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2014600' 00:31:11.080 killing process with pid 2014600 00:31:11.080 11:11:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 2014600 00:31:11.080 11:11:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 2014600 00:31:11.080 11:11:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:31:11.080 11:11:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:31:11.080 11:11:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:31:11.080 11:11:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:31:11.080 11:11:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # iptables-save 00:31:11.080 11:11:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:31:11.080 11:11:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # iptables-restore 00:31:11.080 11:11:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:11.080 11:11:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:11.080 11:11:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:31:11.080 11:11:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:11.080 11:11:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:13.623 11:11:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:13.623 00:31:13.623 real 0m11.682s 00:31:13.623 user 0m4.062s 00:31:13.623 sys 0m5.989s 00:31:13.623 11:11:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:13.623 11:11:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:13.623 ************************************ 00:31:13.623 END TEST nvmf_async_init 00:31:13.623 ************************************ 00:31:13.623 11:11:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:31:13.623 11:11:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:13.623 11:11:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:13.623 11:11:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:13.623 ************************************ 00:31:13.623 START TEST dma 00:31:13.623 ************************************ 00:31:13.623 11:11:33 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:31:13.623 * Looking for test storage... 00:31:13.623 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:13.623 11:11:33 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:13.623 11:11:33 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:13.623 11:11:33 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lcov --version 00:31:13.623 11:11:33 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:13.623 11:11:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:13.623 11:11:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:13.623 11:11:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:13.623 11:11:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:31:13.623 11:11:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:31:13.623 11:11:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:31:13.623 11:11:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:31:13.623 11:11:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:31:13.623 11:11:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:31:13.623 11:11:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:31:13.623 11:11:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:13.623 11:11:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:31:13.623 11:11:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:31:13.623 11:11:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:13.623 11:11:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:13.623 11:11:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:31:13.623 11:11:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:31:13.623 11:11:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:13.623 11:11:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:31:13.623 11:11:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:31:13.623 11:11:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:31:13.623 11:11:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:31:13.623 11:11:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:13.623 11:11:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:31:13.623 11:11:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:31:13.623 11:11:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:13.623 11:11:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:13.623 11:11:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:31:13.623 11:11:33 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:13.623 11:11:33 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:13.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:13.623 --rc genhtml_branch_coverage=1 00:31:13.623 --rc genhtml_function_coverage=1 00:31:13.623 --rc genhtml_legend=1 00:31:13.623 --rc geninfo_all_blocks=1 00:31:13.623 --rc geninfo_unexecuted_blocks=1 00:31:13.623 00:31:13.623 ' 00:31:13.623 11:11:33 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:13.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:13.623 --rc genhtml_branch_coverage=1 00:31:13.623 --rc genhtml_function_coverage=1 00:31:13.623 --rc genhtml_legend=1 00:31:13.623 --rc geninfo_all_blocks=1 00:31:13.623 --rc geninfo_unexecuted_blocks=1 00:31:13.623 00:31:13.623 ' 00:31:13.623 11:11:33 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:13.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:13.623 --rc genhtml_branch_coverage=1 00:31:13.623 --rc genhtml_function_coverage=1 00:31:13.623 --rc genhtml_legend=1 00:31:13.623 --rc geninfo_all_blocks=1 00:31:13.623 --rc geninfo_unexecuted_blocks=1 00:31:13.623 00:31:13.623 ' 00:31:13.623 11:11:33 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:13.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:13.623 --rc genhtml_branch_coverage=1 00:31:13.623 --rc genhtml_function_coverage=1 00:31:13.623 --rc genhtml_legend=1 00:31:13.623 --rc geninfo_all_blocks=1 00:31:13.623 --rc geninfo_unexecuted_blocks=1 00:31:13.623 00:31:13.623 ' 00:31:13.623 11:11:33 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:13.623 11:11:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:31:13.623 11:11:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:13.623 11:11:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:13.623 11:11:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:13.623 11:11:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:13.623 
11:11:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:13.623 11:11:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:13.623 11:11:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:13.623 11:11:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:13.623 11:11:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:13.623 11:11:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:13.623 11:11:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:13.623 11:11:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:13.623 11:11:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:13.623 11:11:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:13.623 11:11:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:13.623 11:11:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:13.623 11:11:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:13.623 11:11:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:31:13.623 11:11:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:13.623 11:11:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:13.623 11:11:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:13.623 11:11:33 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:13.623 11:11:33 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:13.623 11:11:33 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:13.624 11:11:33 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:31:13.624 11:11:33 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:13.624 11:11:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:31:13.624 11:11:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:13.624 11:11:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:13.624 11:11:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:13.624 11:11:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:13.624 11:11:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:13.624 11:11:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:13.624 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:13.624 11:11:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:13.624 11:11:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:13.624 11:11:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:13.624 11:11:33 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:31:13.624 11:11:33 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:31:13.624 00:31:13.624 real 0m0.226s 00:31:13.624 user 0m0.129s 00:31:13.624 sys 0m0.107s 00:31:13.624 11:11:33 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:13.624 11:11:33 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:31:13.624 ************************************ 00:31:13.624 END TEST dma 00:31:13.624 ************************************ 00:31:13.624 11:11:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:31:13.624 11:11:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:13.624 11:11:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:13.624 11:11:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:13.624 ************************************ 00:31:13.624 START TEST nvmf_identify 00:31:13.624 
************************************ 00:31:13.624 11:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:31:13.624 * Looking for test storage... 00:31:13.624 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:13.624 11:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:13.624 11:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lcov --version 00:31:13.624 11:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:13.886 11:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:13.886 11:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:13.886 11:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:13.886 11:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:13.886 11:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:31:13.886 11:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:31:13.886 11:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:31:13.886 11:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:31:13.886 11:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:31:13.886 11:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:31:13.886 11:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:31:13.886 11:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:13.886 11:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:31:13.886 11:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:31:13.886 11:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:13.886 11:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:13.886 11:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:31:13.886 11:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:31:13.886 11:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:13.886 11:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:31:13.886 11:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:31:13.886 11:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:31:13.886 11:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:31:13.886 11:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:13.886 11:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:31:13.886 11:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:31:13.886 11:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:13.886 11:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:13.886 11:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:31:13.886 11:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:13.886 11:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:13.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:13.886 --rc genhtml_branch_coverage=1 00:31:13.886 --rc genhtml_function_coverage=1 00:31:13.886 --rc genhtml_legend=1 00:31:13.886 --rc geninfo_all_blocks=1 00:31:13.886 --rc geninfo_unexecuted_blocks=1 00:31:13.886 00:31:13.886 ' 00:31:13.886 11:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:13.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:13.886 --rc genhtml_branch_coverage=1 00:31:13.886 --rc genhtml_function_coverage=1 00:31:13.886 --rc genhtml_legend=1 00:31:13.886 --rc geninfo_all_blocks=1 00:31:13.886 --rc geninfo_unexecuted_blocks=1 00:31:13.886 00:31:13.886 ' 00:31:13.886 11:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:13.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:13.886 --rc genhtml_branch_coverage=1 00:31:13.886 --rc genhtml_function_coverage=1 00:31:13.886 --rc genhtml_legend=1 00:31:13.886 --rc geninfo_all_blocks=1 00:31:13.886 --rc geninfo_unexecuted_blocks=1 00:31:13.886 00:31:13.886 ' 00:31:13.886 11:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:13.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:13.886 --rc genhtml_branch_coverage=1 00:31:13.886 --rc genhtml_function_coverage=1 00:31:13.886 --rc genhtml_legend=1 00:31:13.886 --rc geninfo_all_blocks=1 00:31:13.886 --rc geninfo_unexecuted_blocks=1 00:31:13.886 00:31:13.886 ' 00:31:13.886 11:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:13.886 11:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:31:13.886 11:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:13.886 11:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:13.886 11:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:13.886 11:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:13.886 11:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:13.886 11:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:13.886 11:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:13.886 11:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:13.886 11:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:13.886 11:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:13.886 11:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:13.886 11:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:13.886 11:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:13.886 11:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:13.886 11:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:13.886 11:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:13.886 11:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:13.886 11:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:31:13.886 11:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:13.886 11:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:13.886 11:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:13.886 11:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:13.886 11:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:13.886 11:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:13.886 11:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:31:13.886 11:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:13.886 11:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:31:13.886 11:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:13.886 11:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:13.886 11:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:13.886 11:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:13.886 11:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:13.886 11:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:13.886 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:13.886 11:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:13.886 11:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:13.886 11:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:13.886 11:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:13.886 11:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:13.886 11:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:31:13.886 11:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@467 -- # '[' -z tcp ']' 00:31:13.886 11:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:13.886 11:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # prepare_net_devs 00:31:13.886 11:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@436 -- # local -g is_hw=no 00:31:13.886 11:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # remove_spdk_ns 00:31:13.886 11:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:13.887 11:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:13.887 11:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:13.887 11:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:31:13.887 11:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:31:13.887 11:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:31:13.887 11:11:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:22.025 11:11:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:22.025 11:11:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:31:22.025 11:11:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:22.025 11:11:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:22.025 11:11:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:22.025 11:11:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:22.025 11:11:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:22.025 11:11:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:31:22.025 11:11:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:22.025 11:11:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:31:22.025 11:11:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:31:22.025 11:11:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:31:22.025 11:11:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:31:22.025 11:11:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:31:22.025 11:11:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:31:22.025 11:11:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:22.025 11:11:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:22.025 11:11:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:22.025 11:11:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:22.025 11:11:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:22.025 11:11:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:22.025 11:11:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:22.025 11:11:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:22.026 11:11:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:22.026 11:11:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:22.026 11:11:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:22.026 11:11:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:22.026 11:11:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:22.026 11:11:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:22.026 11:11:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:22.026 11:11:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:22.026 11:11:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:22.026 11:11:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:22.026 11:11:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:22.026 11:11:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:22.026 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:22.026 11:11:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:22.026 11:11:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:22.026 11:11:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:22.026 11:11:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:22.026 11:11:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:22.026 11:11:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:22.026 11:11:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:22.026 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:22.026 11:11:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:22.026 11:11:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:22.026 11:11:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:22.026 11:11:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:22.026 11:11:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:22.026 11:11:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:22.026 11:11:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:22.026 11:11:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:22.026 11:11:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:22.026 11:11:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:22.026 11:11:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 
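The two "Found 0000:31:00.x (0x8086 - 0x159b)" lines above are the script's PCI scan matching both ports of an Intel E810-family NIC (vendor 0x8086, device 0x159b, bound to the ice driver). A rough stand-alone equivalent of that classification step, assuming lspci is available (the test script itself reads a prebuilt pci_bus_cache rather than shelling out to lspci, so this is illustrative only):

# List E810 (8086:159b) ports and the kernel driver bound to each.
for bdf in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
    drv=$(basename "$(readlink -f "/sys/bus/pci/devices/$bdf/driver")")
    echo "Found $bdf (0x8086 - 0x159b), driver: $drv"
done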
00:31:22.026 11:11:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:22.026 11:11:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:22.026 11:11:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:22.026 11:11:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:22.026 11:11:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:22.026 Found net devices under 0000:31:00.0: cvl_0_0 00:31:22.026 11:11:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:22.026 11:11:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:22.026 11:11:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:22.026 11:11:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:22.026 11:11:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:22.026 11:11:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:22.026 11:11:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:22.026 11:11:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:22.026 11:11:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:22.026 Found net devices under 0000:31:00.1: cvl_0_1 00:31:22.026 11:11:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:22.026 11:11:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:31:22.026 11:11:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # is_hw=yes 00:31:22.026 11:11:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:31:22.026 11:11:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:31:22.026 11:11:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:31:22.026 11:11:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:22.026 11:11:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:22.026 11:11:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:22.026 11:11:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:22.026 11:11:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:22.026 11:11:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:22.026 11:11:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:22.026 11:11:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:22.026 11:11:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:22.026 11:11:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:22.026 11:11:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:31:22.026 11:11:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:22.026 11:11:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:22.026 11:11:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:22.026 11:11:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:22.026 11:11:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:22.026 11:11:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:22.026 11:11:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:22.026 11:11:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:22.026 11:11:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:22.026 11:11:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:22.026 11:11:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:22.026 11:11:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:22.026 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:22.026 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.670 ms 00:31:22.026 00:31:22.026 --- 10.0.0.2 ping statistics --- 00:31:22.026 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:22.026 rtt min/avg/max/mdev = 0.670/0.670/0.670/0.000 ms 00:31:22.026 11:11:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:22.026 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:22.026 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:31:22.026 00:31:22.026 --- 10.0.0.1 ping statistics --- 00:31:22.026 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:22.026 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:31:22.026 11:11:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:22.026 11:11:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # return 0 00:31:22.026 11:11:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:31:22.026 11:11:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:22.026 11:11:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:31:22.026 11:11:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:31:22.026 11:11:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:22.026 11:11:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:31:22.026 11:11:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:31:22.026 11:11:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:31:22.026 11:11:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:22.026 11:11:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:22.026 11:11:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2019159 00:31:22.026 11:11:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:22.026 11:11:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:22.026 11:11:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2019159 00:31:22.026 11:11:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 2019159 ']' 00:31:22.026 11:11:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:22.026 11:11:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:22.026 11:11:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:22.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:22.026 11:11:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:22.026 11:11:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:22.026 [2024-10-09 11:11:41.446257] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:31:22.026 [2024-10-09 11:11:41.446322] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:22.026 [2024-10-09 11:11:41.588871] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
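Both pings succeeding confirms the topology that nvmf_tcp_init built above: one E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk network namespace as the target at 10.0.0.2, while its sibling port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, with an iptables rule admitting NVMe/TCP traffic on port 4420. Condensed from the trace (root required; the cvl_0_* interface names are simply what the ice driver assigned in this run):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # open the NVMe/TCP port
ping -c 1 10.0.0.2                                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator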
00:31:22.026 [2024-10-09 11:11:41.622696] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:22.026 [2024-10-09 11:11:41.646905] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:22.026 [2024-10-09 11:11:41.646949] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:22.026 [2024-10-09 11:11:41.646957] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:22.026 [2024-10-09 11:11:41.646965] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:22.026 [2024-10-09 11:11:41.646971] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:22.026 [2024-10-09 11:11:41.649026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:22.027 [2024-10-09 11:11:41.649148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:22.027 [2024-10-09 11:11:41.649306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:22.027 [2024-10-09 11:11:41.649307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:22.288 11:11:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:22.288 11:11:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:31:22.288 11:11:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:22.288 11:11:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.288 11:11:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:22.288 [2024-10-09 11:11:42.270163] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:22.288 11:11:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.288 11:11:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:31:22.288 11:11:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:22.288 11:11:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:22.549 11:11:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:22.549 11:11:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.549 11:11:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:22.549 Malloc0 00:31:22.549 11:11:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.549 11:11:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:22.549 11:11:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.549 11:11:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:22.549 11:11:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.549 11:11:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:31:22.549 11:11:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.549 
11:11:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:22.549 11:11:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.549 11:11:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:22.549 11:11:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.549 11:11:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:22.549 [2024-10-09 11:11:42.384724] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:22.549 11:11:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.549 11:11:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:22.549 11:11:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.549 11:11:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:22.549 11:11:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.549 11:11:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:31:22.549 11:11:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.549 11:11:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:22.549 [ 00:31:22.549 { 00:31:22.549 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:31:22.549 "subtype": "Discovery", 00:31:22.549 "listen_addresses": [ 00:31:22.549 { 00:31:22.549 "trtype": "TCP", 00:31:22.549 "adrfam": "IPv4", 00:31:22.549 "traddr": "10.0.0.2", 00:31:22.549 "trsvcid": "4420" 00:31:22.549 } 00:31:22.549 ], 00:31:22.549 "allow_any_host": true, 00:31:22.549 "hosts": [] 00:31:22.549 }, 00:31:22.549 { 00:31:22.549 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:22.549 "subtype": "NVMe", 00:31:22.549 "listen_addresses": [ 00:31:22.549 { 00:31:22.549 "trtype": "TCP", 00:31:22.549 "adrfam": "IPv4", 00:31:22.549 "traddr": "10.0.0.2", 00:31:22.549 "trsvcid": "4420" 00:31:22.549 } 00:31:22.549 ], 00:31:22.549 "allow_any_host": true, 00:31:22.549 "hosts": [], 00:31:22.549 "serial_number": "SPDK00000000000001", 00:31:22.549 "model_number": "SPDK bdev Controller", 00:31:22.549 "max_namespaces": 32, 00:31:22.549 "min_cntlid": 1, 00:31:22.549 "max_cntlid": 65519, 00:31:22.549 "namespaces": [ 00:31:22.549 { 00:31:22.549 "nsid": 1, 00:31:22.549 "bdev_name": "Malloc0", 00:31:22.549 "name": "Malloc0", 00:31:22.549 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:31:22.549 "eui64": "ABCDEF0123456789", 00:31:22.549 "uuid": "0e836a8d-f251-49c2-b927-bfce9436f94d" 00:31:22.549 } 00:31:22.549 ] 00:31:22.549 } 00:31:22.549 ] 00:31:22.549 11:11:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.549 11:11:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:31:22.549 [2024-10-09 11:11:42.447213] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 
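At this point nvmf_tgt is running inside the namespace and the rpc_cmd calls above have built the identify fixture: a TCP transport, a 64 MB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 carrying that bdev as namespace 1 (with the fixed NGUID/EUI64 echoed back in the nvmf_get_subsystems JSON), and listeners for both the subsystem and the discovery service on 10.0.0.2:4420. The same sequence driven by hand through the stock rpc.py wrapper, ending with the identify invocation launched at host/identify.sh@39 above (paths are relative to the SPDK tree here, where the log uses absolute Jenkins workspace paths):

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
# (the test waits for the target to listen on /var/tmp/spdk.sock before issuing RPCs)
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_get_subsystems      # dump the resulting config as JSON
./build/bin/spdk_nvme_identify \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
    -L all                                # -L all interleaves DEBUG traces with the report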
00:31:22.549 [2024-10-09 11:11:42.447269] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2019425 ] 00:31:22.816 [2024-10-09 11:11:42.562879] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:31:22.816 [2024-10-09 11:11:42.582140] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:31:22.816 [2024-10-09 11:11:42.582188] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:31:22.816 [2024-10-09 11:11:42.582193] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:31:22.816 [2024-10-09 11:11:42.582207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:31:22.816 [2024-10-09 11:11:42.582217] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:31:22.816 [2024-10-09 11:11:42.585755] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:31:22.816 [2024-10-09 11:11:42.585803] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1f0b060 0 00:31:22.816 [2024-10-09 11:11:42.586035] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:31:22.816 [2024-10-09 11:11:42.586043] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:31:22.816 [2024-10-09 11:11:42.586049] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:31:22.816 [2024-10-09 11:11:42.586052] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:31:22.816 [2024-10-09 11:11:42.586078] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:22.816 [2024-10-09 11:11:42.586084] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.816 [2024-10-09 11:11:42.586088] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f0b060) 00:31:22.816 [2024-10-09 11:11:42.586102] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:31:22.816 [2024-10-09 11:11:42.586116] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f77a80, cid 0, qid 0 00:31:22.816 [2024-10-09 11:11:42.592476] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:22.816 [2024-10-09 11:11:42.592486] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:22.816 [2024-10-09 11:11:42.592490] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:22.816 [2024-10-09 11:11:42.592495] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f77a80) on tqpair=0x1f0b060 00:31:22.816 [2024-10-09 11:11:42.592508] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:31:22.816 [2024-10-09 11:11:42.592515] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:31:22.816 [2024-10-09 11:11:42.592521] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:31:22.816 [2024-10-09 11:11:42.592535] nvme_tcp.c: 
800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:22.816 [2024-10-09 11:11:42.592539] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.816 [2024-10-09 11:11:42.592543] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f0b060) 00:31:22.816 [2024-10-09 11:11:42.592551] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.816 [2024-10-09 11:11:42.592564] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f77a80, cid 0, qid 0 00:31:22.816 [2024-10-09 11:11:42.592638] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:22.816 [2024-10-09 11:11:42.592645] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:22.816 [2024-10-09 11:11:42.592648] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:22.816 [2024-10-09 11:11:42.592652] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f77a80) on tqpair=0x1f0b060 00:31:22.816 [2024-10-09 11:11:42.592657] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:31:22.816 [2024-10-09 11:11:42.592665] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:31:22.816 [2024-10-09 11:11:42.592672] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:22.816 [2024-10-09 11:11:42.592675] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.816 [2024-10-09 11:11:42.592679] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f0b060) 00:31:22.816 [2024-10-09 11:11:42.592686] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.816 [2024-10-09 11:11:42.592697] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f77a80, cid 0, qid 0 00:31:22.816 [2024-10-09 11:11:42.592762] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:22.816 [2024-10-09 11:11:42.592769] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:22.816 [2024-10-09 11:11:42.592775] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:22.816 [2024-10-09 11:11:42.592779] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f77a80) on tqpair=0x1f0b060 00:31:22.816 [2024-10-09 11:11:42.592785] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:31:22.816 [2024-10-09 11:11:42.592793] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:31:22.816 [2024-10-09 11:11:42.592800] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:22.816 [2024-10-09 11:11:42.592803] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.816 [2024-10-09 11:11:42.592807] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f0b060) 00:31:22.816 [2024-10-09 11:11:42.592814] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.816 [2024-10-09 11:11:42.592824] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f77a80, cid 0, qid 0 00:31:22.816 
[2024-10-09 11:11:42.592887] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:22.816 [2024-10-09 11:11:42.592894] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:22.816 [2024-10-09 11:11:42.592897] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:22.816 [2024-10-09 11:11:42.592901] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f77a80) on tqpair=0x1f0b060 00:31:22.816 [2024-10-09 11:11:42.592907] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:31:22.816 [2024-10-09 11:11:42.592916] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:22.816 [2024-10-09 11:11:42.592920] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.816 [2024-10-09 11:11:42.592923] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f0b060) 00:31:22.816 [2024-10-09 11:11:42.592930] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.816 [2024-10-09 11:11:42.592940] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f77a80, cid 0, qid 0 00:31:22.816 [2024-10-09 11:11:42.593009] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:22.816 [2024-10-09 11:11:42.593015] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:22.816 [2024-10-09 11:11:42.593019] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:22.816 [2024-10-09 11:11:42.593023] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f77a80) on tqpair=0x1f0b060 00:31:22.816 [2024-10-09 11:11:42.593028] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:31:22.816 [2024-10-09 11:11:42.593033] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:31:22.816 [2024-10-09 11:11:42.593040] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:31:22.816 [2024-10-09 11:11:42.593145] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:31:22.816 [2024-10-09 11:11:42.593150] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:31:22.816 [2024-10-09 11:11:42.593159] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:22.816 [2024-10-09 11:11:42.593163] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.816 [2024-10-09 11:11:42.593166] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f0b060) 00:31:22.816 [2024-10-09 11:11:42.593173] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.816 [2024-10-09 11:11:42.593183] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f77a80, cid 0, qid 0 00:31:22.816 [2024-10-09 11:11:42.593260] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:22.816 [2024-10-09 11:11:42.593266] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: 
pdu type =5 00:31:22.816 [2024-10-09 11:11:42.593270] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:22.816 [2024-10-09 11:11:42.593274] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f77a80) on tqpair=0x1f0b060 00:31:22.816 [2024-10-09 11:11:42.593279] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:31:22.816 [2024-10-09 11:11:42.593288] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:22.816 [2024-10-09 11:11:42.593291] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.816 [2024-10-09 11:11:42.593295] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f0b060) 00:31:22.816 [2024-10-09 11:11:42.593302] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.816 [2024-10-09 11:11:42.593312] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f77a80, cid 0, qid 0 00:31:22.816 [2024-10-09 11:11:42.593380] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:22.816 [2024-10-09 11:11:42.593387] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:22.816 [2024-10-09 11:11:42.593390] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:22.816 [2024-10-09 11:11:42.593394] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f77a80) on tqpair=0x1f0b060 00:31:22.816 [2024-10-09 11:11:42.593399] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:31:22.816 [2024-10-09 11:11:42.593404] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:31:22.817 [2024-10-09 11:11:42.593411] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:31:22.817 [2024-10-09 11:11:42.593423] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:31:22.817 [2024-10-09 11:11:42.593432] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.817 [2024-10-09 11:11:42.593435] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f0b060) 00:31:22.817 [2024-10-09 11:11:42.593442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.817 [2024-10-09 11:11:42.593453] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f77a80, cid 0, qid 0 00:31:22.817 [2024-10-09 11:11:42.593550] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:22.817 [2024-10-09 11:11:42.593557] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:22.817 [2024-10-09 11:11:42.593561] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:22.817 [2024-10-09 11:11:42.593565] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f0b060): datao=0, datal=4096, cccid=0 00:31:22.817 [2024-10-09 11:11:42.593570] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f77a80) on tqpair(0x1f0b060): expected_datao=0, 
payload_size=4096 00:31:22.817 [2024-10-09 11:11:42.593575] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:22.817 [2024-10-09 11:11:42.593583] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:22.817 [2024-10-09 11:11:42.593587] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:22.817 [2024-10-09 11:11:42.638472] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:22.817 [2024-10-09 11:11:42.638482] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:22.817 [2024-10-09 11:11:42.638486] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:22.817 [2024-10-09 11:11:42.638490] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f77a80) on tqpair=0x1f0b060 00:31:22.817 [2024-10-09 11:11:42.638501] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:31:22.817 [2024-10-09 11:11:42.638507] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:31:22.817 [2024-10-09 11:11:42.638511] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:31:22.817 [2024-10-09 11:11:42.638517] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:31:22.817 [2024-10-09 11:11:42.638522] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:31:22.817 [2024-10-09 11:11:42.638527] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:31:22.817 [2024-10-09 11:11:42.638535] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:31:22.817 [2024-10-09 11:11:42.638543] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:22.817 [2024-10-09 11:11:42.638547] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.817 [2024-10-09 11:11:42.638550] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f0b060) 00:31:22.817 [2024-10-09 11:11:42.638559] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:22.817 [2024-10-09 11:11:42.638571] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f77a80, cid 0, qid 0 00:31:22.817 [2024-10-09 11:11:42.638746] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:22.817 [2024-10-09 11:11:42.638752] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:22.817 [2024-10-09 11:11:42.638756] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:22.817 [2024-10-09 11:11:42.638760] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f77a80) on tqpair=0x1f0b060 00:31:22.817 [2024-10-09 11:11:42.638767] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:22.817 [2024-10-09 11:11:42.638771] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.817 [2024-10-09 11:11:42.638775] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f0b060) 00:31:22.817 [2024-10-09 11:11:42.638781] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC 
EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:22.817 [2024-10-09 11:11:42.638788] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:22.817 [2024-10-09 11:11:42.638792] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.817 [2024-10-09 11:11:42.638795] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1f0b060) 00:31:22.817 [2024-10-09 11:11:42.638801] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:22.817 [2024-10-09 11:11:42.638807] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:22.817 [2024-10-09 11:11:42.638811] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.817 [2024-10-09 11:11:42.638815] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1f0b060) 00:31:22.817 [2024-10-09 11:11:42.638821] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:22.817 [2024-10-09 11:11:42.638827] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:22.817 [2024-10-09 11:11:42.638830] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.817 [2024-10-09 11:11:42.638834] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f0b060) 00:31:22.817 [2024-10-09 11:11:42.638840] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:22.817 [2024-10-09 11:11:42.638847] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:31:22.817 [2024-10-09 11:11:42.638858] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:31:22.817 [2024-10-09 11:11:42.638865] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.817 [2024-10-09 11:11:42.638868] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f0b060) 00:31:22.817 [2024-10-09 11:11:42.638875] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.817 [2024-10-09 11:11:42.638887] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f77a80, cid 0, qid 0 00:31:22.817 [2024-10-09 11:11:42.638892] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f77c00, cid 1, qid 0 00:31:22.817 [2024-10-09 11:11:42.638897] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f77d80, cid 2, qid 0 00:31:22.817 [2024-10-09 11:11:42.638902] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f77f00, cid 3, qid 0 00:31:22.817 [2024-10-09 11:11:42.638907] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f78080, cid 4, qid 0 00:31:22.817 [2024-10-09 11:11:42.639034] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:22.817 [2024-10-09 11:11:42.639040] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:22.817 [2024-10-09 11:11:42.639044] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:22.817 [2024-10-09 11:11:42.639048] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x1f78080) on tqpair=0x1f0b060 00:31:22.817 [2024-10-09 11:11:42.639053] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:31:22.817 [2024-10-09 11:11:42.639058] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:31:22.817 [2024-10-09 11:11:42.639068] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.817 [2024-10-09 11:11:42.639072] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f0b060) 00:31:22.817 [2024-10-09 11:11:42.639078] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.817 [2024-10-09 11:11:42.639088] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f78080, cid 4, qid 0 00:31:22.817 [2024-10-09 11:11:42.639178] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:22.817 [2024-10-09 11:11:42.639185] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:22.817 [2024-10-09 11:11:42.639188] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:22.817 [2024-10-09 11:11:42.639192] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f0b060): datao=0, datal=4096, cccid=4 00:31:22.817 [2024-10-09 11:11:42.639197] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f78080) on tqpair(0x1f0b060): expected_datao=0, payload_size=4096 00:31:22.817 [2024-10-09 11:11:42.639201] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:22.817 [2024-10-09 11:11:42.639208] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:22.817 [2024-10-09 11:11:42.639212] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:22.817 [2024-10-09 11:11:42.639222] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:22.817 [2024-10-09 11:11:42.639228] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:22.817 [2024-10-09 11:11:42.639232] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:22.817 [2024-10-09 11:11:42.639236] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f78080) on tqpair=0x1f0b060 00:31:22.817 [2024-10-09 11:11:42.639248] nvme_ctrlr.c:4189:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:31:22.817 [2024-10-09 11:11:42.639273] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.817 [2024-10-09 11:11:42.639280] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f0b060) 00:31:22.817 [2024-10-09 11:11:42.639287] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.817 [2024-10-09 11:11:42.639294] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:22.817 [2024-10-09 11:11:42.639298] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.817 [2024-10-09 11:11:42.639301] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1f0b060) 00:31:22.817 [2024-10-09 11:11:42.639307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:31:22.817 [2024-10-09 
11:11:42.639319] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f78080, cid 4, qid 0 00:31:22.817 [2024-10-09 11:11:42.639324] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f78200, cid 5, qid 0 00:31:22.817 [2024-10-09 11:11:42.639440] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:22.817 [2024-10-09 11:11:42.639446] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:22.817 [2024-10-09 11:11:42.639450] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:22.817 [2024-10-09 11:11:42.639453] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f0b060): datao=0, datal=1024, cccid=4 00:31:22.817 [2024-10-09 11:11:42.639458] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f78080) on tqpair(0x1f0b060): expected_datao=0, payload_size=1024 00:31:22.817 [2024-10-09 11:11:42.639462] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:22.817 [2024-10-09 11:11:42.639473] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:22.817 [2024-10-09 11:11:42.639477] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:22.817 [2024-10-09 11:11:42.639483] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:22.817 [2024-10-09 11:11:42.639489] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:22.817 [2024-10-09 11:11:42.639492] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:22.817 [2024-10-09 11:11:42.639496] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f78200) on tqpair=0x1f0b060 00:31:22.818 [2024-10-09 11:11:42.679548] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:22.818 [2024-10-09 11:11:42.679558] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:22.818 [2024-10-09 11:11:42.679562] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:22.818 [2024-10-09 11:11:42.679566] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f78080) on tqpair=0x1f0b060 00:31:22.818 [2024-10-09 11:11:42.679577] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.818 [2024-10-09 11:11:42.679581] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f0b060) 00:31:22.818 [2024-10-09 11:11:42.679588] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.818 [2024-10-09 11:11:42.679603] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f78080, cid 4, qid 0 00:31:22.818 [2024-10-09 11:11:42.679680] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:22.818 [2024-10-09 11:11:42.679686] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:22.818 [2024-10-09 11:11:42.679690] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:22.818 [2024-10-09 11:11:42.679694] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f0b060): datao=0, datal=3072, cccid=4 00:31:22.818 [2024-10-09 11:11:42.679698] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f78080) on tqpair(0x1f0b060): expected_datao=0, payload_size=3072 00:31:22.818 [2024-10-09 11:11:42.679703] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:22.818 [2024-10-09 11:11:42.679717] 
nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:22.818 [2024-10-09 11:11:42.679721] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:22.818 [2024-10-09 11:11:42.720560] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:22.818 [2024-10-09 11:11:42.720569] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:22.818 [2024-10-09 11:11:42.720572] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:22.818 [2024-10-09 11:11:42.720576] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f78080) on tqpair=0x1f0b060 00:31:22.818 [2024-10-09 11:11:42.720585] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.818 [2024-10-09 11:11:42.720589] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f0b060) 00:31:22.818 [2024-10-09 11:11:42.720595] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.818 [2024-10-09 11:11:42.720609] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f78080, cid 4, qid 0 00:31:22.818 [2024-10-09 11:11:42.720709] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:22.818 [2024-10-09 11:11:42.720715] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:22.818 [2024-10-09 11:11:42.720719] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:22.818 [2024-10-09 11:11:42.720723] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f0b060): datao=0, datal=8, cccid=4 00:31:22.818 [2024-10-09 11:11:42.720727] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f78080) on tqpair(0x1f0b060): expected_datao=0, payload_size=8 00:31:22.818 [2024-10-09 11:11:42.720732] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:22.818 [2024-10-09 11:11:42.720738] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:22.818 [2024-10-09 11:11:42.720742] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:22.818 [2024-10-09 11:11:42.765473] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:22.818 [2024-10-09 11:11:42.765482] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:22.818 [2024-10-09 11:11:42.765485] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:22.818 [2024-10-09 11:11:42.765489] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f78080) on tqpair=0x1f0b060 00:31:22.818 ===================================================== 00:31:22.818 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:31:22.818 ===================================================== 00:31:22.818 Controller Capabilities/Features 00:31:22.818 ================================ 00:31:22.818 Vendor ID: 0000 00:31:22.818 Subsystem Vendor ID: 0000 00:31:22.818 Serial Number: .................... 00:31:22.818 Model Number: ........................................ 
00:31:22.818 Firmware Version: 25.01 00:31:22.818 Recommended Arb Burst: 0 00:31:22.818 IEEE OUI Identifier: 00 00 00 00:31:22.818 Multi-path I/O 00:31:22.818 May have multiple subsystem ports: No 00:31:22.818 May have multiple controllers: No 00:31:22.818 Associated with SR-IOV VF: No 00:31:22.818 Max Data Transfer Size: 131072 00:31:22.818 Max Number of Namespaces: 0 00:31:22.818 Max Number of I/O Queues: 1024 00:31:22.818 NVMe Specification Version (VS): 1.3 00:31:22.818 NVMe Specification Version (Identify): 1.3 00:31:22.818 Maximum Queue Entries: 128 00:31:22.818 Contiguous Queues Required: Yes 00:31:22.818 Arbitration Mechanisms Supported 00:31:22.818 Weighted Round Robin: Not Supported 00:31:22.818 Vendor Specific: Not Supported 00:31:22.818 Reset Timeout: 15000 ms 00:31:22.818 Doorbell Stride: 4 bytes 00:31:22.818 NVM Subsystem Reset: Not Supported 00:31:22.818 Command Sets Supported 00:31:22.818 NVM Command Set: Supported 00:31:22.818 Boot Partition: Not Supported 00:31:22.818 Memory Page Size Minimum: 4096 bytes 00:31:22.818 Memory Page Size Maximum: 4096 bytes 00:31:22.818 Persistent Memory Region: Not Supported 00:31:22.818 Optional Asynchronous Events Supported 00:31:22.818 Namespace Attribute Notices: Not Supported 00:31:22.818 Firmware Activation Notices: Not Supported 00:31:22.818 ANA Change Notices: Not Supported 00:31:22.818 PLE Aggregate Log Change Notices: Not Supported 00:31:22.818 LBA Status Info Alert Notices: Not Supported 00:31:22.818 EGE Aggregate Log Change Notices: Not Supported 00:31:22.818 Normal NVM Subsystem Shutdown event: Not Supported 00:31:22.818 Zone Descriptor Change Notices: Not Supported 00:31:22.818 Discovery Log Change Notices: Supported 00:31:22.818 Controller Attributes 00:31:22.818 128-bit Host Identifier: Not Supported 00:31:22.818 Non-Operational Permissive Mode: Not Supported 00:31:22.818 NVM Sets: Not Supported 00:31:22.818 Read Recovery Levels: Not Supported 00:31:22.818 Endurance Groups: Not Supported 00:31:22.818 Predictable Latency Mode: Not Supported 00:31:22.818 Traffic Based Keep ALive: Not Supported 00:31:22.818 Namespace Granularity: Not Supported 00:31:22.818 SQ Associations: Not Supported 00:31:22.818 UUID List: Not Supported 00:31:22.818 Multi-Domain Subsystem: Not Supported 00:31:22.818 Fixed Capacity Management: Not Supported 00:31:22.818 Variable Capacity Management: Not Supported 00:31:22.818 Delete Endurance Group: Not Supported 00:31:22.818 Delete NVM Set: Not Supported 00:31:22.818 Extended LBA Formats Supported: Not Supported 00:31:22.818 Flexible Data Placement Supported: Not Supported 00:31:22.818 00:31:22.818 Controller Memory Buffer Support 00:31:22.818 ================================ 00:31:22.818 Supported: No 00:31:22.818 00:31:22.818 Persistent Memory Region Support 00:31:22.818 ================================ 00:31:22.818 Supported: No 00:31:22.818 00:31:22.818 Admin Command Set Attributes 00:31:22.818 ============================ 00:31:22.818 Security Send/Receive: Not Supported 00:31:22.818 Format NVM: Not Supported 00:31:22.818 Firmware Activate/Download: Not Supported 00:31:22.818 Namespace Management: Not Supported 00:31:22.818 Device Self-Test: Not Supported 00:31:22.818 Directives: Not Supported 00:31:22.818 NVMe-MI: Not Supported 00:31:22.818 Virtualization Management: Not Supported 00:31:22.818 Doorbell Buffer Config: Not Supported 00:31:22.818 Get LBA Status Capability: Not Supported 00:31:22.818 Command & Feature Lockdown Capability: Not Supported 00:31:22.818 Abort Command Limit: 1 00:31:22.818 Async 
00:31:22.818 Async Event Request Limit: 4
00:31:22.818 Number of Firmware Slots: N/A
00:31:22.818 Firmware Slot 1 Read-Only: N/A
00:31:22.818 Firmware Activation Without Reset: N/A
00:31:22.818 Multiple Update Detection Support: N/A
00:31:22.818 Firmware Update Granularity: No Information Provided
00:31:22.818 Per-Namespace SMART Log: No
00:31:22.818 Asymmetric Namespace Access Log Page: Not Supported
00:31:22.818 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:31:22.818 Command Effects Log Page: Not Supported
00:31:22.818 Get Log Page Extended Data: Supported
00:31:22.818 Telemetry Log Pages: Not Supported
00:31:22.818 Persistent Event Log Pages: Not Supported
00:31:22.818 Supported Log Pages Log Page: May Support
00:31:22.818 Commands Supported & Effects Log Page: Not Supported
00:31:22.818 Feature Identifiers & Effects Log Page: May Support
00:31:22.818 NVMe-MI Commands & Effects Log Page: May Support
00:31:22.818 Data Area 4 for Telemetry Log: Not Supported
00:31:22.818 Error Log Page Entries Supported: 128
00:31:22.818 Keep Alive: Not Supported
00:31:22.818
00:31:22.818 NVM Command Set Attributes
00:31:22.818 ==========================
00:31:22.818 Submission Queue Entry Size
00:31:22.818 Max: 1
00:31:22.818 Min: 1
00:31:22.818 Completion Queue Entry Size
00:31:22.818 Max: 1
00:31:22.818 Min: 1
00:31:22.818 Number of Namespaces: 0
00:31:22.818 Compare Command: Not Supported
00:31:22.818 Write Uncorrectable Command: Not Supported
00:31:22.818 Dataset Management Command: Not Supported
00:31:22.818 Write Zeroes Command: Not Supported
00:31:22.818 Set Features Save Field: Not Supported
00:31:22.818 Reservations: Not Supported
00:31:22.818 Timestamp: Not Supported
00:31:22.818 Copy: Not Supported
00:31:22.818 Volatile Write Cache: Not Present
00:31:22.818 Atomic Write Unit (Normal): 1
00:31:22.818 Atomic Write Unit (PFail): 1
00:31:22.818 Atomic Compare & Write Unit: 1
00:31:22.818 Fused Compare & Write: Supported
00:31:22.818 Scatter-Gather List
00:31:22.818 SGL Command Set: Supported
00:31:22.818 SGL Keyed: Supported
00:31:22.818 SGL Bit Bucket Descriptor: Not Supported
00:31:22.818 SGL Metadata Pointer: Not Supported
00:31:22.818 Oversized SGL: Not Supported
00:31:22.818 SGL Metadata Address: Not Supported
00:31:22.818 SGL Offset: Supported
00:31:22.818 Transport SGL Data Block: Not Supported
00:31:22.818 Replay Protected Memory Block: Not Supported
00:31:22.818
00:31:22.818 Firmware Slot Information
00:31:22.818 =========================
00:31:22.819 Active slot: 0
00:31:22.819
00:31:22.819
00:31:22.819 Error Log
00:31:22.819 =========
00:31:22.819
00:31:22.819 Active Namespaces
00:31:22.819 =================
00:31:22.819 Discovery Log Page
00:31:22.819 ==================
00:31:22.819 Generation Counter: 2
00:31:22.819 Number of Records: 2
00:31:22.819 Record Format: 0
00:31:22.819
00:31:22.819 Discovery Log Entry 0
00:31:22.819 ----------------------
00:31:22.819 Transport Type: 3 (TCP)
00:31:22.819 Address Family: 1 (IPv4)
00:31:22.819 Subsystem Type: 3 (Current Discovery Subsystem)
00:31:22.819 Entry Flags:
00:31:22.819 Duplicate Returned Information: 1
00:31:22.819 Explicit Persistent Connection Support for Discovery: 1
00:31:22.819 Transport Requirements:
00:31:22.819 Secure Channel: Not Required
00:31:22.819 Port ID: 0 (0x0000)
00:31:22.819 Controller ID: 65535 (0xffff)
00:31:22.819 Admin Max SQ Size: 128
00:31:22.819 Transport Service Identifier: 4420
00:31:22.819 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:31:22.819 Transport Address: 10.0.0.2
00:31:22.819 Discovery Log Entry 1
00:31:22.819 ----------------------
00:31:22.819 Transport Type: 3 (TCP)
00:31:22.819 Address Family: 1 (IPv4)
00:31:22.819 Subsystem Type: 2 (NVM Subsystem)
00:31:22.819 Entry Flags:
00:31:22.819 Duplicate Returned Information: 0
00:31:22.819 Explicit Persistent Connection Support for Discovery: 0
00:31:22.819 Transport Requirements:
00:31:22.819 Secure Channel: Not Required
00:31:22.819 Port ID: 0 (0x0000)
00:31:22.819 Controller ID: 65535 (0xffff)
00:31:22.819 Admin Max SQ Size: 128
00:31:22.819 Transport Service Identifier: 4420
00:31:22.819 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:31:22.819 Transport Address: 10.0.0.2 [2024-10-09 11:11:42.765572] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD
00:31:22.819 [2024-10-09 11:11:42.765584] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f77a80) on tqpair=0x1f0b060
00:31:22.819 [2024-10-09 11:11:42.765590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:22.819 [2024-10-09 11:11:42.765596] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f77c00) on tqpair=0x1f0b060
00:31:22.819 [2024-10-09 11:11:42.765601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:22.819 [2024-10-09 11:11:42.765606] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f77d80) on tqpair=0x1f0b060
00:31:22.819 [2024-10-09 11:11:42.765610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:22.819 [2024-10-09 11:11:42.765615] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f77f00) on tqpair=0x1f0b060
00:31:22.819 [2024-10-09 11:11:42.765620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:22.819 [2024-10-09 11:11:42.765629] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:31:22.819 [2024-10-09 11:11:42.765633] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:31:22.819 [2024-10-09 11:11:42.765636] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f0b060)
00:31:22.819 [2024-10-09 11:11:42.765643] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:22.819 [2024-10-09 11:11:42.765657] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f77f00, cid 3, qid 0
00:31:22.819 [2024-10-09 11:11:42.765929] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:31:22.819 [2024-10-09 11:11:42.765936] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:31:22.819 [2024-10-09 11:11:42.765939] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:31:22.819 [2024-10-09 11:11:42.765943] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f77f00) on tqpair=0x1f0b060
00:31:22.819 [2024-10-09 11:11:42.765950] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:31:22.819 [2024-10-09 11:11:42.765954] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:31:22.819 [2024-10-09 11:11:42.765957] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f0b060)
00:31:22.819 [2024-10-09
11:11:42.765964] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.819 [2024-10-09 11:11:42.765977] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f77f00, cid 3, qid 0 00:31:22.819 [2024-10-09 11:11:42.766077] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:22.819 [2024-10-09 11:11:42.766084] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:22.819 [2024-10-09 11:11:42.766087] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:22.819 [2024-10-09 11:11:42.766091] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f77f00) on tqpair=0x1f0b060 00:31:22.819 [2024-10-09 11:11:42.766096] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:31:22.819 [2024-10-09 11:11:42.766104] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:31:22.819 [2024-10-09 11:11:42.766113] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:22.819 [2024-10-09 11:11:42.766117] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.819 [2024-10-09 11:11:42.766121] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f0b060) 00:31:22.819 [2024-10-09 11:11:42.766127] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.819 [2024-10-09 11:11:42.766137] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f77f00, cid 3, qid 0 00:31:22.819 [2024-10-09 11:11:42.766228] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:22.819 [2024-10-09 11:11:42.766234] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:22.819 [2024-10-09 11:11:42.766238] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:22.819 [2024-10-09 11:11:42.766241] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f77f00) on tqpair=0x1f0b060 00:31:22.819 [2024-10-09 11:11:42.766251] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:22.819 [2024-10-09 11:11:42.766255] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.819 [2024-10-09 11:11:42.766259] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f0b060) 00:31:22.819 [2024-10-09 11:11:42.766266] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.819 [2024-10-09 11:11:42.766275] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f77f00, cid 3, qid 0 00:31:22.819 [2024-10-09 11:11:42.766338] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:22.819 [2024-10-09 11:11:42.766344] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:22.819 [2024-10-09 11:11:42.766348] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:22.819 [2024-10-09 11:11:42.766351] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f77f00) on tqpair=0x1f0b060 00:31:22.819 [2024-10-09 11:11:42.766361] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:22.819 [2024-10-09 11:11:42.766365] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.819 [2024-10-09 11:11:42.766369] 
nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f0b060) 00:31:22.819 [2024-10-09 11:11:42.766375] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.819 [2024-10-09 11:11:42.766387] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f77f00, cid 3, qid 0 00:31:22.819 [2024-10-09 11:11:42.766515] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:22.819 [2024-10-09 11:11:42.766521] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:22.819 [2024-10-09 11:11:42.766525] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:22.819 [2024-10-09 11:11:42.766529] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f77f00) on tqpair=0x1f0b060 00:31:22.819 [2024-10-09 11:11:42.766539] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:22.819 [2024-10-09 11:11:42.766543] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.819 [2024-10-09 11:11:42.766547] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f0b060) 00:31:22.819 [2024-10-09 11:11:42.766553] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.819 [2024-10-09 11:11:42.766564] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f77f00, cid 3, qid 0 00:31:22.819 [2024-10-09 11:11:42.766645] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:22.819 [2024-10-09 11:11:42.766652] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:22.819 [2024-10-09 11:11:42.766655] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:22.819 [2024-10-09 11:11:42.766659] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f77f00) on tqpair=0x1f0b060 00:31:22.819 [2024-10-09 11:11:42.766669] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:22.819 [2024-10-09 11:11:42.766672] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.819 [2024-10-09 11:11:42.766676] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f0b060) 00:31:22.819 [2024-10-09 11:11:42.766683] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.819 [2024-10-09 11:11:42.766693] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f77f00, cid 3, qid 0 00:31:22.819 [2024-10-09 11:11:42.766796] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:22.819 [2024-10-09 11:11:42.766802] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:22.819 [2024-10-09 11:11:42.766805] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:22.819 [2024-10-09 11:11:42.766809] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f77f00) on tqpair=0x1f0b060 00:31:22.819 [2024-10-09 11:11:42.766819] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:22.819 [2024-10-09 11:11:42.766823] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.819 [2024-10-09 11:11:42.766826] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f0b060) 00:31:22.819 [2024-10-09 11:11:42.766833] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.819 [2024-10-09 11:11:42.766843] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f77f00, cid 3, qid 0 00:31:22.819 [2024-10-09 11:11:42.766905] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:22.819 [2024-10-09 11:11:42.766911] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:22.819 [2024-10-09 11:11:42.766915] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:22.819 [2024-10-09 11:11:42.766919] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f77f00) on tqpair=0x1f0b060 00:31:22.819 [2024-10-09 11:11:42.766928] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:22.819 [2024-10-09 11:11:42.766932] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.819 [2024-10-09 11:11:42.766936] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f0b060) 00:31:22.820 [2024-10-09 11:11:42.766942] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.820 [2024-10-09 11:11:42.766956] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f77f00, cid 3, qid 0 00:31:22.820 [2024-10-09 11:11:42.767047] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:22.820 [2024-10-09 11:11:42.767054] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:22.820 [2024-10-09 11:11:42.767057] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:22.820 [2024-10-09 11:11:42.767061] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f77f00) on tqpair=0x1f0b060 00:31:22.820 [2024-10-09 11:11:42.767071] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:22.820 [2024-10-09 11:11:42.767074] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.820 [2024-10-09 11:11:42.767078] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f0b060) 00:31:22.820 [2024-10-09 11:11:42.767085] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.820 [2024-10-09 11:11:42.767095] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f77f00, cid 3, qid 0 00:31:22.820 [2024-10-09 11:11:42.767198] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:22.820 [2024-10-09 11:11:42.767204] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:22.820 [2024-10-09 11:11:42.767208] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:22.820 [2024-10-09 11:11:42.767212] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f77f00) on tqpair=0x1f0b060 00:31:22.820 [2024-10-09 11:11:42.767221] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:22.820 [2024-10-09 11:11:42.767225] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.820 [2024-10-09 11:11:42.767229] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f0b060) 00:31:22.820 [2024-10-09 11:11:42.767236] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.820 [2024-10-09 11:11:42.767245] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f77f00, cid 3, qid 0 00:31:22.820 
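The Identify Controller data above reports Max Data Transfer Size: 131072 together with a 4096-byte minimum memory page size. That value is derived rather than stored directly: the controller advertises an MDTS exponent and the host computes min_page_size << MDTS. A minimal sketch of the arithmetic in C, assuming the standard NVMe encoding; the MDTS value of 5 is inferred from the two numbers in the log, since the raw field is not printed here:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t min_page_size = 4096; /* "Memory Page Size Minimum" above */
    uint8_t  mdts = 5;             /* assumed: 4096 << 5 == 131072 */

    printf("Max Data Transfer Size: %u\n", min_page_size << mdts);
    return 0;
}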
[2024-10-09 11:11:42.767351] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:22.820 [2024-10-09 11:11:42.767358] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:22.820 [2024-10-09 11:11:42.767361] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:22.820 [2024-10-09 11:11:42.767365] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f77f00) on tqpair=0x1f0b060 00:31:22.820 [2024-10-09 11:11:42.767374] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:22.820 [2024-10-09 11:11:42.767378] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.820 [2024-10-09 11:11:42.767382] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f0b060) 00:31:22.820 [2024-10-09 11:11:42.767388] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.820 [2024-10-09 11:11:42.767398] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f77f00, cid 3, qid 0 00:31:22.820 [2024-10-09 11:11:42.767470] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:22.820 [2024-10-09 11:11:42.767477] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:22.820 [2024-10-09 11:11:42.767481] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:22.820 [2024-10-09 11:11:42.767485] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f77f00) on tqpair=0x1f0b060 00:31:22.820 [2024-10-09 11:11:42.767494] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:22.820 [2024-10-09 11:11:42.767498] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.820 [2024-10-09 11:11:42.767502] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f0b060) 00:31:22.820 [2024-10-09 11:11:42.767508] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.820 [2024-10-09 11:11:42.767519] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f77f00, cid 3, qid 0 00:31:22.820 [2024-10-09 11:11:42.767602] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:22.820 [2024-10-09 11:11:42.767608] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:22.820 [2024-10-09 11:11:42.767612] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:22.820 [2024-10-09 11:11:42.767616] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f77f00) on tqpair=0x1f0b060 00:31:22.820 [2024-10-09 11:11:42.767625] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:22.820 [2024-10-09 11:11:42.767629] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.820 [2024-10-09 11:11:42.767633] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f0b060) 00:31:22.820 [2024-10-09 11:11:42.767639] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.820 [2024-10-09 11:11:42.767649] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f77f00, cid 3, qid 0 00:31:22.820 [2024-10-09 11:11:42.767753] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:22.820 [2024-10-09 11:11:42.767759] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
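The long run of pdu type = 5 / FABRIC PROPERTY GET records through here is the host repeatedly reading the controller's status property over the TCP transport while the discovery controller shuts down (the trace above set a 10000 ms shutdown timeout, and the completion is reported further down as "shutdown complete in 7 milliseconds"). A minimal sketch of the call that triggers this sequence, assuming an already-connected ctrlr; in the SPDK host driver, spdk_nvme_detach() performs the shutdown and frees the controller:

#include <stdio.h>

#include "spdk/nvme.h"

/* Detach a controller; over NVMe/TCP the status polling during shutdown
 * shows up as the FABRIC PROPERTY GET capsules traced in this log. */
static void shutdown_ctrlr(struct spdk_nvme_ctrlr *ctrlr)
{
    if (spdk_nvme_detach(ctrlr) != 0) {
        fprintf(stderr, "detach failed\n");
    }
}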
00:31:22.820 [2024-10-09 11:11:42.767763] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:22.820 [2024-10-09 11:11:42.767766] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f77f00) on tqpair=0x1f0b060 00:31:22.820 [2024-10-09 11:11:42.767776] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:22.820 [2024-10-09 11:11:42.767780] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.820 [2024-10-09 11:11:42.767783] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f0b060) 00:31:22.820 [2024-10-09 11:11:42.767790] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.820 [2024-10-09 11:11:42.767800] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f77f00, cid 3, qid 0 00:31:22.820 [2024-10-09 11:11:42.767904] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:22.820 [2024-10-09 11:11:42.767910] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:22.820 [2024-10-09 11:11:42.767913] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:22.820 [2024-10-09 11:11:42.767917] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f77f00) on tqpair=0x1f0b060 00:31:22.820 [2024-10-09 11:11:42.767927] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:22.820 [2024-10-09 11:11:42.767931] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.820 [2024-10-09 11:11:42.767935] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f0b060) 00:31:22.820 [2024-10-09 11:11:42.767941] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.820 [2024-10-09 11:11:42.767951] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f77f00, cid 3, qid 0 00:31:22.820 [2024-10-09 11:11:42.768019] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:22.820 [2024-10-09 11:11:42.768026] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:22.820 [2024-10-09 11:11:42.768029] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:22.820 [2024-10-09 11:11:42.768033] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f77f00) on tqpair=0x1f0b060 00:31:22.820 [2024-10-09 11:11:42.768043] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:22.820 [2024-10-09 11:11:42.768046] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.820 [2024-10-09 11:11:42.768050] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f0b060) 00:31:22.820 [2024-10-09 11:11:42.768057] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.820 [2024-10-09 11:11:42.768067] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f77f00, cid 3, qid 0 00:31:22.820 [2024-10-09 11:11:42.768157] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:22.820 [2024-10-09 11:11:42.768166] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:22.820 [2024-10-09 11:11:42.768169] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:22.820 [2024-10-09 11:11:42.768173] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x1f77f00) on tqpair=0x1f0b060 00:31:22.820 [2024-10-09 11:11:42.768183] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:22.820 [2024-10-09 11:11:42.768187] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.820 [2024-10-09 11:11:42.768190] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f0b060) 00:31:22.820 [2024-10-09 11:11:42.768197] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.820 [2024-10-09 11:11:42.768207] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f77f00, cid 3, qid 0 00:31:22.820 [2024-10-09 11:11:42.768307] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:22.820 [2024-10-09 11:11:42.768313] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:22.820 [2024-10-09 11:11:42.768317] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:22.820 [2024-10-09 11:11:42.768321] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f77f00) on tqpair=0x1f0b060 00:31:22.820 [2024-10-09 11:11:42.768330] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:22.820 [2024-10-09 11:11:42.768334] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.820 [2024-10-09 11:11:42.768338] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f0b060) 00:31:22.820 [2024-10-09 11:11:42.768344] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.820 [2024-10-09 11:11:42.768354] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f77f00, cid 3, qid 0 00:31:22.821 [2024-10-09 11:11:42.768458] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:22.821 [2024-10-09 11:11:42.768467] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:22.821 [2024-10-09 11:11:42.768471] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:22.821 [2024-10-09 11:11:42.768475] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f77f00) on tqpair=0x1f0b060 00:31:22.821 [2024-10-09 11:11:42.768485] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:22.821 [2024-10-09 11:11:42.768488] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.821 [2024-10-09 11:11:42.768492] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f0b060) 00:31:22.821 [2024-10-09 11:11:42.768499] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.821 [2024-10-09 11:11:42.768509] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f77f00, cid 3, qid 0 00:31:22.821 [2024-10-09 11:11:42.768577] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:22.821 [2024-10-09 11:11:42.768584] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:22.821 [2024-10-09 11:11:42.768587] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:22.821 [2024-10-09 11:11:42.768591] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f77f00) on tqpair=0x1f0b060 00:31:22.821 [2024-10-09 11:11:42.768601] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:22.821 [2024-10-09 11:11:42.768604] nvme_tcp.c: 
977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.821 [2024-10-09 11:11:42.768608] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f0b060) 00:31:22.821 [2024-10-09 11:11:42.768615] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.821 [2024-10-09 11:11:42.768624] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f77f00, cid 3, qid 0 00:31:22.821 [2024-10-09 11:11:42.768712] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:22.821 [2024-10-09 11:11:42.768718] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:22.821 [2024-10-09 11:11:42.768722] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:22.821 [2024-10-09 11:11:42.768727] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f77f00) on tqpair=0x1f0b060 00:31:22.821 [2024-10-09 11:11:42.768737] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:22.821 [2024-10-09 11:11:42.768741] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.821 [2024-10-09 11:11:42.768744] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f0b060) 00:31:22.821 [2024-10-09 11:11:42.768751] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.821 [2024-10-09 11:11:42.768761] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f77f00, cid 3, qid 0 00:31:22.821 [2024-10-09 11:11:42.768861] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:22.821 [2024-10-09 11:11:42.768868] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:22.821 [2024-10-09 11:11:42.768871] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:22.821 [2024-10-09 11:11:42.768875] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f77f00) on tqpair=0x1f0b060 00:31:22.821 [2024-10-09 11:11:42.768885] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:22.821 [2024-10-09 11:11:42.768888] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.821 [2024-10-09 11:11:42.768892] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f0b060) 00:31:22.821 [2024-10-09 11:11:42.768899] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.821 [2024-10-09 11:11:42.768908] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f77f00, cid 3, qid 0 00:31:22.821 [2024-10-09 11:11:42.769014] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:22.821 [2024-10-09 11:11:42.769021] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:22.821 [2024-10-09 11:11:42.769024] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:22.821 [2024-10-09 11:11:42.769028] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f77f00) on tqpair=0x1f0b060 00:31:22.821 [2024-10-09 11:11:42.769038] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:22.821 [2024-10-09 11:11:42.769041] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.821 [2024-10-09 11:11:42.769045] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f0b060) 00:31:22.821 
[2024-10-09 11:11:42.769052] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.821 [2024-10-09 11:11:42.769062] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f77f00, cid 3, qid 0 00:31:22.821 [2024-10-09 11:11:42.769123] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:22.821 [2024-10-09 11:11:42.769129] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:22.821 [2024-10-09 11:11:42.769133] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:22.821 [2024-10-09 11:11:42.769137] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f77f00) on tqpair=0x1f0b060 00:31:22.821 [2024-10-09 11:11:42.769146] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:22.821 [2024-10-09 11:11:42.769150] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.821 [2024-10-09 11:11:42.769154] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f0b060) 00:31:22.821 [2024-10-09 11:11:42.769160] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.821 [2024-10-09 11:11:42.769170] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f77f00, cid 3, qid 0 00:31:22.821 [2024-10-09 11:11:42.769315] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:22.821 [2024-10-09 11:11:42.769321] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:22.821 [2024-10-09 11:11:42.769325] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:22.821 [2024-10-09 11:11:42.769329] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f77f00) on tqpair=0x1f0b060 00:31:22.821 [2024-10-09 11:11:42.769340] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:22.821 [2024-10-09 11:11:42.769344] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.821 [2024-10-09 11:11:42.769348] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f0b060) 00:31:22.821 [2024-10-09 11:11:42.769354] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.821 [2024-10-09 11:11:42.769364] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f77f00, cid 3, qid 0 00:31:22.821 [2024-10-09 11:11:42.773472] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:22.821 [2024-10-09 11:11:42.773481] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:22.821 [2024-10-09 11:11:42.773485] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:22.821 [2024-10-09 11:11:42.773489] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f77f00) on tqpair=0x1f0b060 00:31:22.821 [2024-10-09 11:11:42.773499] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:22.821 [2024-10-09 11:11:42.773503] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.821 [2024-10-09 11:11:42.773506] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f0b060) 00:31:22.821 [2024-10-09 11:11:42.773513] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.821 [2024-10-09 11:11:42.773524] 
nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f77f00, cid 3, qid 0 00:31:22.821 [2024-10-09 11:11:42.773706] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:22.821 [2024-10-09 11:11:42.773713] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:22.821 [2024-10-09 11:11:42.773716] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:22.821 [2024-10-09 11:11:42.773720] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f77f00) on tqpair=0x1f0b060 00:31:22.821 [2024-10-09 11:11:42.773727] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:31:22.821 00:31:22.821 11:11:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:31:22.821 [2024-10-09 11:11:42.813223] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:31:22.821 [2024-10-09 11:11:42.813279] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2019429 ] 00:31:23.083 [2024-10-09 11:11:42.928805] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:31:23.083 [2024-10-09 11:11:42.948027] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:31:23.083 [2024-10-09 11:11:42.948076] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:31:23.083 [2024-10-09 11:11:42.948081] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:31:23.083 [2024-10-09 11:11:42.948094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:31:23.083 [2024-10-09 11:11:42.948104] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:31:23.083 [2024-10-09 11:11:42.951668] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:31:23.083 [2024-10-09 11:11:42.951699] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x11cc060 0 00:31:23.083 [2024-10-09 11:11:42.959477] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:31:23.083 [2024-10-09 11:11:42.959489] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:31:23.083 [2024-10-09 11:11:42.959493] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:31:23.083 [2024-10-09 11:11:42.959497] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:31:23.083 [2024-10-09 11:11:42.959520] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:23.083 [2024-10-09 11:11:42.959526] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:23.083 [2024-10-09 11:11:42.959530] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11cc060) 00:31:23.083 [2024-10-09 11:11:42.959542] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 
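The identify step above hands spdk_nvme_identify a transport ID string via -r ('trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'). A minimal sketch of doing the equivalent with the public SPDK API, assuming spdk_env_init() has already run (environment setup is omitted for brevity):

#include <stdio.h>

#include "spdk/nvme.h"

int main(void)
{
    struct spdk_nvme_transport_id trid = {0};
    struct spdk_nvme_ctrlr *ctrlr;

    /* The same transport ID string the test passes via -r. */
    if (spdk_nvme_transport_id_parse(&trid,
            "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 "
            "trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
        return 1;
    }

    /* NULL opts: accept the driver's default controller options. */
    ctrlr = spdk_nvme_connect(&trid, NULL, 0);
    if (ctrlr == NULL) {
        fprintf(stderr, "failed to connect to %s\n", trid.traddr);
        return 1;
    }

    spdk_nvme_detach(ctrlr);
    return 0;
}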
00:31:23.083 [2024-10-09 11:11:42.959559] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1238a80, cid 0, qid 0 00:31:23.083 [2024-10-09 11:11:42.967474] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:23.083 [2024-10-09 11:11:42.967483] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:23.083 [2024-10-09 11:11:42.967487] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:23.083 [2024-10-09 11:11:42.967492] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1238a80) on tqpair=0x11cc060 00:31:23.083 [2024-10-09 11:11:42.967501] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:31:23.083 [2024-10-09 11:11:42.967507] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:31:23.083 [2024-10-09 11:11:42.967512] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:31:23.083 [2024-10-09 11:11:42.967524] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:23.083 [2024-10-09 11:11:42.967528] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:23.083 [2024-10-09 11:11:42.967532] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11cc060) 00:31:23.083 [2024-10-09 11:11:42.967540] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.083 [2024-10-09 11:11:42.967553] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1238a80, cid 0, qid 0 00:31:23.083 [2024-10-09 11:11:42.967737] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:23.083 [2024-10-09 11:11:42.967744] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:23.083 [2024-10-09 11:11:42.967748] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:23.083 [2024-10-09 11:11:42.967752] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1238a80) on tqpair=0x11cc060 00:31:23.083 [2024-10-09 11:11:42.967756] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:31:23.083 [2024-10-09 11:11:42.967764] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:31:23.083 [2024-10-09 11:11:42.967771] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:23.083 [2024-10-09 11:11:42.967775] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:23.083 [2024-10-09 11:11:42.967778] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11cc060) 00:31:23.083 [2024-10-09 11:11:42.967785] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.083 [2024-10-09 11:11:42.967795] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1238a80, cid 0, qid 0 00:31:23.083 [2024-10-09 11:11:42.967995] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:23.083 [2024-10-09 11:11:42.968001] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:23.083 [2024-10-09 11:11:42.968005] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:23.083 [2024-10-09 11:11:42.968011] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x1238a80) on tqpair=0x11cc060 00:31:23.083 [2024-10-09 11:11:42.968016] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:31:23.083 [2024-10-09 11:11:42.968025] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:31:23.083 [2024-10-09 11:11:42.968031] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:23.083 [2024-10-09 11:11:42.968035] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:23.083 [2024-10-09 11:11:42.968039] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11cc060) 00:31:23.083 [2024-10-09 11:11:42.968045] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.083 [2024-10-09 11:11:42.968056] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1238a80, cid 0, qid 0 00:31:23.083 [2024-10-09 11:11:42.968256] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:23.083 [2024-10-09 11:11:42.968262] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:23.083 [2024-10-09 11:11:42.968266] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:23.083 [2024-10-09 11:11:42.968270] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1238a80) on tqpair=0x11cc060 00:31:23.083 [2024-10-09 11:11:42.968274] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:31:23.083 [2024-10-09 11:11:42.968284] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:23.083 [2024-10-09 11:11:42.968288] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:23.083 [2024-10-09 11:11:42.968291] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11cc060) 00:31:23.083 [2024-10-09 11:11:42.968298] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.083 [2024-10-09 11:11:42.968308] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1238a80, cid 0, qid 0 00:31:23.083 [2024-10-09 11:11:42.968520] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:23.083 [2024-10-09 11:11:42.968527] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:23.083 [2024-10-09 11:11:42.968531] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:23.084 [2024-10-09 11:11:42.968535] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1238a80) on tqpair=0x11cc060 00:31:23.084 [2024-10-09 11:11:42.968539] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:31:23.084 [2024-10-09 11:11:42.968544] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:31:23.084 [2024-10-09 11:11:42.968552] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:31:23.084 [2024-10-09 11:11:42.968657] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:31:23.084 [2024-10-09 11:11:42.968661] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:31:23.084 [2024-10-09 11:11:42.968668] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:23.084 [2024-10-09 11:11:42.968672] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:23.084 [2024-10-09 11:11:42.968676] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11cc060) 00:31:23.084 [2024-10-09 11:11:42.968682] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.084 [2024-10-09 11:11:42.968693] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1238a80, cid 0, qid 0 00:31:23.084 [2024-10-09 11:11:42.968855] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:23.084 [2024-10-09 11:11:42.968863] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:23.084 [2024-10-09 11:11:42.968867] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:23.084 [2024-10-09 11:11:42.968871] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1238a80) on tqpair=0x11cc060 00:31:23.084 [2024-10-09 11:11:42.968875] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:31:23.084 [2024-10-09 11:11:42.968884] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:23.084 [2024-10-09 11:11:42.968888] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:23.084 [2024-10-09 11:11:42.968892] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11cc060) 00:31:23.084 [2024-10-09 11:11:42.968899] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.084 [2024-10-09 11:11:42.968909] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1238a80, cid 0, qid 0 00:31:23.084 [2024-10-09 11:11:42.969089] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:23.084 [2024-10-09 11:11:42.969096] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:23.084 [2024-10-09 11:11:42.969099] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:23.084 [2024-10-09 11:11:42.969103] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1238a80) on tqpair=0x11cc060 00:31:23.084 [2024-10-09 11:11:42.969108] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:31:23.084 [2024-10-09 11:11:42.969112] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:31:23.084 [2024-10-09 11:11:42.969120] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:31:23.084 [2024-10-09 11:11:42.969131] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:31:23.084 [2024-10-09 11:11:42.969139] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:23.084 [2024-10-09 11:11:42.969143] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11cc060) 00:31:23.084 
[2024-10-09 11:11:42.969150] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.084 [2024-10-09 11:11:42.969160] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1238a80, cid 0, qid 0 00:31:23.084 [2024-10-09 11:11:42.969343] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:23.084 [2024-10-09 11:11:42.969350] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:23.084 [2024-10-09 11:11:42.969353] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:23.084 [2024-10-09 11:11:42.969357] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11cc060): datao=0, datal=4096, cccid=0 00:31:23.084 [2024-10-09 11:11:42.969362] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1238a80) on tqpair(0x11cc060): expected_datao=0, payload_size=4096 00:31:23.084 [2024-10-09 11:11:42.969367] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:23.084 [2024-10-09 11:11:42.969378] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:23.084 [2024-10-09 11:11:42.969382] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:23.084 [2024-10-09 11:11:43.009641] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:23.084 [2024-10-09 11:11:43.009652] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:23.084 [2024-10-09 11:11:43.009655] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:23.084 [2024-10-09 11:11:43.009660] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1238a80) on tqpair=0x11cc060 00:31:23.084 [2024-10-09 11:11:43.009667] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:31:23.084 [2024-10-09 11:11:43.009675] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:31:23.084 [2024-10-09 11:11:43.009680] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:31:23.084 [2024-10-09 11:11:43.009684] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:31:23.084 [2024-10-09 11:11:43.009688] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:31:23.084 [2024-10-09 11:11:43.009693] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:31:23.084 [2024-10-09 11:11:43.009702] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:31:23.084 [2024-10-09 11:11:43.009708] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:23.084 [2024-10-09 11:11:43.009712] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:23.084 [2024-10-09 11:11:43.009716] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11cc060) 00:31:23.084 [2024-10-09 11:11:43.009723] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:23.084 [2024-10-09 11:11:43.009735] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1238a80, cid 0, qid 0 00:31:23.084 [2024-10-09 
11:11:43.009944] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:23.084 [2024-10-09 11:11:43.009951] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:23.084 [2024-10-09 11:11:43.009956] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:23.084 [2024-10-09 11:11:43.009961] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1238a80) on tqpair=0x11cc060 00:31:23.084 [2024-10-09 11:11:43.009967] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:23.084 [2024-10-09 11:11:43.009971] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:23.084 [2024-10-09 11:11:43.009975] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11cc060) 00:31:23.084 [2024-10-09 11:11:43.009981] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:23.084 [2024-10-09 11:11:43.009988] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:23.084 [2024-10-09 11:11:43.009991] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:23.084 [2024-10-09 11:11:43.009995] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x11cc060) 00:31:23.084 [2024-10-09 11:11:43.010001] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:23.084 [2024-10-09 11:11:43.010007] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:23.084 [2024-10-09 11:11:43.010011] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:23.084 [2024-10-09 11:11:43.010014] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x11cc060) 00:31:23.084 [2024-10-09 11:11:43.010020] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:23.084 [2024-10-09 11:11:43.010026] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:23.084 [2024-10-09 11:11:43.010030] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:23.084 [2024-10-09 11:11:43.010033] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11cc060) 00:31:23.084 [2024-10-09 11:11:43.010039] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:23.084 [2024-10-09 11:11:43.010044] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:31:23.084 [2024-10-09 11:11:43.010055] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:31:23.084 [2024-10-09 11:11:43.010063] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:23.084 [2024-10-09 11:11:43.010067] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11cc060) 00:31:23.084 [2024-10-09 11:11:43.010074] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.084 [2024-10-09 11:11:43.010086] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1238a80, cid 0, qid 0 00:31:23.084 [2024-10-09 11:11:43.010091] nvme_tcp.c: 
951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1238c00, cid 1, qid 0 00:31:23.084 [2024-10-09 11:11:43.010096] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1238d80, cid 2, qid 0 00:31:23.084 [2024-10-09 11:11:43.010101] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1238f00, cid 3, qid 0 00:31:23.084 [2024-10-09 11:11:43.010106] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1239080, cid 4, qid 0 00:31:23.084 [2024-10-09 11:11:43.010318] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:23.084 [2024-10-09 11:11:43.010324] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:23.084 [2024-10-09 11:11:43.010328] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:23.084 [2024-10-09 11:11:43.010332] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1239080) on tqpair=0x11cc060 00:31:23.084 [2024-10-09 11:11:43.010337] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:31:23.084 [2024-10-09 11:11:43.010342] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:31:23.084 [2024-10-09 11:11:43.010350] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:31:23.084 [2024-10-09 11:11:43.010361] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:31:23.084 [2024-10-09 11:11:43.010367] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:23.084 [2024-10-09 11:11:43.010371] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:23.084 [2024-10-09 11:11:43.010375] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11cc060) 00:31:23.084 [2024-10-09 11:11:43.010381] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:23.084 [2024-10-09 11:11:43.010392] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1239080, cid 4, qid 0 00:31:23.084 [2024-10-09 11:11:43.010565] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:23.084 [2024-10-09 11:11:43.010572] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:23.084 [2024-10-09 11:11:43.010575] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:23.085 [2024-10-09 11:11:43.010579] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1239080) on tqpair=0x11cc060 00:31:23.085 [2024-10-09 11:11:43.010645] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:31:23.085 [2024-10-09 11:11:43.010654] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:31:23.085 [2024-10-09 11:11:43.010662] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:23.085 [2024-10-09 11:11:43.010666] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11cc060) 00:31:23.085 [2024-10-09 11:11:43.010672] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 
cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.085 [2024-10-09 11:11:43.010683] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1239080, cid 4, qid 0 00:31:23.085 [2024-10-09 11:11:43.010880] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:23.085 [2024-10-09 11:11:43.010889] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:23.085 [2024-10-09 11:11:43.010893] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:23.085 [2024-10-09 11:11:43.010897] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11cc060): datao=0, datal=4096, cccid=4 00:31:23.085 [2024-10-09 11:11:43.010901] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1239080) on tqpair(0x11cc060): expected_datao=0, payload_size=4096 00:31:23.085 [2024-10-09 11:11:43.010906] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:23.085 [2024-10-09 11:11:43.010917] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:23.085 [2024-10-09 11:11:43.010921] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:23.085 [2024-10-09 11:11:43.055475] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:23.085 [2024-10-09 11:11:43.055484] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:23.085 [2024-10-09 11:11:43.055488] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:23.085 [2024-10-09 11:11:43.055492] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1239080) on tqpair=0x11cc060 00:31:23.085 [2024-10-09 11:11:43.055502] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:31:23.085 [2024-10-09 11:11:43.055512] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:31:23.085 [2024-10-09 11:11:43.055521] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:31:23.085 [2024-10-09 11:11:43.055528] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:23.085 [2024-10-09 11:11:43.055532] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11cc060) 00:31:23.085 [2024-10-09 11:11:43.055538] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.085 [2024-10-09 11:11:43.055550] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1239080, cid 4, qid 0 00:31:23.085 [2024-10-09 11:11:43.055724] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:23.085 [2024-10-09 11:11:43.055730] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:23.085 [2024-10-09 11:11:43.055734] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:23.085 [2024-10-09 11:11:43.055737] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11cc060): datao=0, datal=4096, cccid=4 00:31:23.085 [2024-10-09 11:11:43.055742] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1239080) on tqpair(0x11cc060): expected_datao=0, payload_size=4096 00:31:23.085 [2024-10-09 11:11:43.055746] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:23.085 [2024-10-09 11:11:43.055760] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
00:31:23.085 [2024-10-09 11:11:43.055764] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:23.348 [2024-10-09 11:11:43.096660] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:23.348 [2024-10-09 11:11:43.096669] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:23.348 [2024-10-09 11:11:43.096673] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:23.348 [2024-10-09 11:11:43.096677] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1239080) on tqpair=0x11cc060 00:31:23.348 [2024-10-09 11:11:43.096690] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:31:23.348 [2024-10-09 11:11:43.096700] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:31:23.348 [2024-10-09 11:11:43.096707] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:23.348 [2024-10-09 11:11:43.096711] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11cc060) 00:31:23.348 [2024-10-09 11:11:43.096718] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.348 [2024-10-09 11:11:43.096731] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1239080, cid 4, qid 0 00:31:23.348 [2024-10-09 11:11:43.096922] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:23.348 [2024-10-09 11:11:43.096928] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:23.348 [2024-10-09 11:11:43.096932] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:23.348 [2024-10-09 11:11:43.096935] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11cc060): datao=0, datal=4096, cccid=4 00:31:23.348 [2024-10-09 11:11:43.096940] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1239080) on tqpair(0x11cc060): expected_datao=0, payload_size=4096 00:31:23.348 [2024-10-09 11:11:43.096944] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:23.348 [2024-10-09 11:11:43.096957] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:23.348 [2024-10-09 11:11:43.096961] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:23.348 [2024-10-09 11:11:43.137650] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:23.348 [2024-10-09 11:11:43.137659] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:23.348 [2024-10-09 11:11:43.137662] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:23.348 [2024-10-09 11:11:43.137666] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1239080) on tqpair=0x11cc060 00:31:23.348 [2024-10-09 11:11:43.137674] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:31:23.348 [2024-10-09 11:11:43.137682] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:31:23.348 [2024-10-09 11:11:43.137691] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:31:23.348 [2024-10-09 11:11:43.137697] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:31:23.348 [2024-10-09 11:11:43.137702] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:31:23.348 [2024-10-09 11:11:43.137708] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:31:23.348 [2024-10-09 11:11:43.137713] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:31:23.348 [2024-10-09 11:11:43.137718] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:31:23.348 [2024-10-09 11:11:43.137723] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:31:23.348 [2024-10-09 11:11:43.137736] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:23.348 [2024-10-09 11:11:43.137740] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11cc060) 00:31:23.348 [2024-10-09 11:11:43.137747] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.348 [2024-10-09 11:11:43.137754] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:23.348 [2024-10-09 11:11:43.137758] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:23.348 [2024-10-09 11:11:43.137761] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x11cc060) 00:31:23.348 [2024-10-09 11:11:43.137768] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:31:23.348 [2024-10-09 11:11:43.137780] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1239080, cid 4, qid 0 00:31:23.348 [2024-10-09 11:11:43.137785] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1239200, cid 5, qid 0 00:31:23.348 [2024-10-09 11:11:43.137853] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:23.348 [2024-10-09 11:11:43.137861] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:23.348 [2024-10-09 11:11:43.137865] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:23.348 [2024-10-09 11:11:43.137869] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1239080) on tqpair=0x11cc060 00:31:23.348 [2024-10-09 11:11:43.137875] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:23.348 [2024-10-09 11:11:43.137881] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:23.348 [2024-10-09 11:11:43.137885] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:23.348 [2024-10-09 11:11:43.137888] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1239200) on tqpair=0x11cc060 00:31:23.348 [2024-10-09 11:11:43.137898] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:23.348 [2024-10-09 11:11:43.137901] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x11cc060) 00:31:23.348 [2024-10-09 11:11:43.137908] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:31:23.348 [2024-10-09 11:11:43.137918] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1239200, cid 5, qid 0 00:31:23.348 [2024-10-09 11:11:43.138095] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:23.348 [2024-10-09 11:11:43.138101] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:23.348 [2024-10-09 11:11:43.138105] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:23.348 [2024-10-09 11:11:43.138109] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1239200) on tqpair=0x11cc060 00:31:23.349 [2024-10-09 11:11:43.138118] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:23.349 [2024-10-09 11:11:43.138121] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x11cc060) 00:31:23.349 [2024-10-09 11:11:43.138128] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.349 [2024-10-09 11:11:43.138137] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1239200, cid 5, qid 0 00:31:23.349 [2024-10-09 11:11:43.138355] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:23.349 [2024-10-09 11:11:43.138361] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:23.349 [2024-10-09 11:11:43.138364] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:23.349 [2024-10-09 11:11:43.138368] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1239200) on tqpair=0x11cc060 00:31:23.349 [2024-10-09 11:11:43.138377] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:23.349 [2024-10-09 11:11:43.138381] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x11cc060) 00:31:23.349 [2024-10-09 11:11:43.138387] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.349 [2024-10-09 11:11:43.138397] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1239200, cid 5, qid 0 00:31:23.349 [2024-10-09 11:11:43.138655] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:23.349 [2024-10-09 11:11:43.138662] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:23.349 [2024-10-09 11:11:43.138666] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:23.349 [2024-10-09 11:11:43.138669] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1239200) on tqpair=0x11cc060 00:31:23.349 [2024-10-09 11:11:43.138683] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:23.349 [2024-10-09 11:11:43.138687] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x11cc060) 00:31:23.349 [2024-10-09 11:11:43.138694] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.349 [2024-10-09 11:11:43.138701] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:23.349 [2024-10-09 11:11:43.138706] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11cc060) 00:31:23.349 [2024-10-09 11:11:43.138713] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 
cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.349 [2024-10-09 11:11:43.138720] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:23.349 [2024-10-09 11:11:43.138723] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x11cc060) 00:31:23.349 [2024-10-09 11:11:43.138730] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.349 [2024-10-09 11:11:43.138739] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:23.349 [2024-10-09 11:11:43.138742] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x11cc060) 00:31:23.349 [2024-10-09 11:11:43.138749] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.349 [2024-10-09 11:11:43.138760] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1239200, cid 5, qid 0 00:31:23.349 [2024-10-09 11:11:43.138765] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1239080, cid 4, qid 0 00:31:23.349 [2024-10-09 11:11:43.138770] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1239380, cid 6, qid 0 00:31:23.349 [2024-10-09 11:11:43.138775] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1239500, cid 7, qid 0 00:31:23.349 [2024-10-09 11:11:43.138995] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:23.349 [2024-10-09 11:11:43.139001] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:23.349 [2024-10-09 11:11:43.139005] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:23.349 [2024-10-09 11:11:43.139009] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11cc060): datao=0, datal=8192, cccid=5 00:31:23.349 [2024-10-09 11:11:43.139013] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1239200) on tqpair(0x11cc060): expected_datao=0, payload_size=8192 00:31:23.349 [2024-10-09 11:11:43.139018] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:23.349 [2024-10-09 11:11:43.139085] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:23.349 [2024-10-09 11:11:43.139089] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:23.349 [2024-10-09 11:11:43.139095] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:23.349 [2024-10-09 11:11:43.139101] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:23.349 [2024-10-09 11:11:43.139104] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:23.349 [2024-10-09 11:11:43.139108] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11cc060): datao=0, datal=512, cccid=4 00:31:23.349 [2024-10-09 11:11:43.139113] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1239080) on tqpair(0x11cc060): expected_datao=0, payload_size=512 00:31:23.349 [2024-10-09 11:11:43.139117] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:23.349 [2024-10-09 11:11:43.139131] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:23.349 [2024-10-09 11:11:43.139135] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:23.349 [2024-10-09 11:11:43.139141] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:23.349 
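The trace above shows the host driver fanning the optional GET FEATURES / GET LOG PAGE queries out across cids 4-7 on the admin qpair while the controller initialization state machine advances. To follow that state machine in a capture like this one, the "setting state to ..." lines emitted by nvme_ctrlr.c can be pulled out in order; a minimal sketch, assuming this console output was saved to a file (the name build.log is a placeholder, not part of the test):

    # List the nvme_ctrlr initialization states in the order they were
    # entered, with a count per state; each match comes from an
    # _nvme_ctrlr_set_state *DEBUG* line like the ones above.
    grep -o 'setting state to [^(]*' build.log | uniq -c

Each output line is one state (identify, set keep alive timeout, set number of queues, ...) prefixed by how many consecutive log records entered it.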
[2024-10-09 11:11:43.139147] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:23.349 [2024-10-09 11:11:43.139150] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:23.349 [2024-10-09 11:11:43.139154] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11cc060): datao=0, datal=512, cccid=6 00:31:23.349 [2024-10-09 11:11:43.139158] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1239380) on tqpair(0x11cc060): expected_datao=0, payload_size=512 00:31:23.349 [2024-10-09 11:11:43.139163] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:23.349 [2024-10-09 11:11:43.139169] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:23.349 [2024-10-09 11:11:43.139174] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:23.349 [2024-10-09 11:11:43.139180] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:23.349 [2024-10-09 11:11:43.139186] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:23.349 [2024-10-09 11:11:43.139189] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:23.349 [2024-10-09 11:11:43.139193] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11cc060): datao=0, datal=4096, cccid=7 00:31:23.349 [2024-10-09 11:11:43.139197] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1239500) on tqpair(0x11cc060): expected_datao=0, payload_size=4096 00:31:23.349 [2024-10-09 11:11:43.139202] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:23.349 [2024-10-09 11:11:43.139208] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:23.349 [2024-10-09 11:11:43.139212] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:23.349 [2024-10-09 11:11:43.139222] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:23.349 [2024-10-09 11:11:43.139228] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:23.349 [2024-10-09 11:11:43.139231] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:23.349 [2024-10-09 11:11:43.139235] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1239200) on tqpair=0x11cc060 00:31:23.349 [2024-10-09 11:11:43.139247] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:23.349 [2024-10-09 11:11:43.139253] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:23.349 [2024-10-09 11:11:43.139256] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:23.349 [2024-10-09 11:11:43.139260] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1239080) on tqpair=0x11cc060 00:31:23.349 [2024-10-09 11:11:43.139272] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:23.349 [2024-10-09 11:11:43.139278] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:23.349 [2024-10-09 11:11:43.139281] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:23.349 [2024-10-09 11:11:43.139285] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1239380) on tqpair=0x11cc060 00:31:23.349 [2024-10-09 11:11:43.139292] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:23.349 [2024-10-09 11:11:43.139298] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:23.349 [2024-10-09 11:11:43.139301] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:23.349 [2024-10-09 
11:11:43.139305] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1239500) on tqpair=0x11cc060 00:31:23.349 ===================================================== 00:31:23.349 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:23.349 ===================================================== 00:31:23.349 Controller Capabilities/Features 00:31:23.349 ================================ 00:31:23.349 Vendor ID: 8086 00:31:23.349 Subsystem Vendor ID: 8086 00:31:23.349 Serial Number: SPDK00000000000001 00:31:23.349 Model Number: SPDK bdev Controller 00:31:23.349 Firmware Version: 25.01 00:31:23.349 Recommended Arb Burst: 6 00:31:23.349 IEEE OUI Identifier: e4 d2 5c 00:31:23.349 Multi-path I/O 00:31:23.349 May have multiple subsystem ports: Yes 00:31:23.349 May have multiple controllers: Yes 00:31:23.349 Associated with SR-IOV VF: No 00:31:23.349 Max Data Transfer Size: 131072 00:31:23.349 Max Number of Namespaces: 32 00:31:23.349 Max Number of I/O Queues: 127 00:31:23.349 NVMe Specification Version (VS): 1.3 00:31:23.349 NVMe Specification Version (Identify): 1.3 00:31:23.349 Maximum Queue Entries: 128 00:31:23.349 Contiguous Queues Required: Yes 00:31:23.349 Arbitration Mechanisms Supported 00:31:23.349 Weighted Round Robin: Not Supported 00:31:23.349 Vendor Specific: Not Supported 00:31:23.349 Reset Timeout: 15000 ms 00:31:23.349 Doorbell Stride: 4 bytes 00:31:23.349 NVM Subsystem Reset: Not Supported 00:31:23.349 Command Sets Supported 00:31:23.349 NVM Command Set: Supported 00:31:23.349 Boot Partition: Not Supported 00:31:23.349 Memory Page Size Minimum: 4096 bytes 00:31:23.349 Memory Page Size Maximum: 4096 bytes 00:31:23.349 Persistent Memory Region: Not Supported 00:31:23.349 Optional Asynchronous Events Supported 00:31:23.349 Namespace Attribute Notices: Supported 00:31:23.349 Firmware Activation Notices: Not Supported 00:31:23.349 ANA Change Notices: Not Supported 00:31:23.349 PLE Aggregate Log Change Notices: Not Supported 00:31:23.349 LBA Status Info Alert Notices: Not Supported 00:31:23.349 EGE Aggregate Log Change Notices: Not Supported 00:31:23.349 Normal NVM Subsystem Shutdown event: Not Supported 00:31:23.349 Zone Descriptor Change Notices: Not Supported 00:31:23.349 Discovery Log Change Notices: Not Supported 00:31:23.349 Controller Attributes 00:31:23.349 128-bit Host Identifier: Supported 00:31:23.349 Non-Operational Permissive Mode: Not Supported 00:31:23.349 NVM Sets: Not Supported 00:31:23.349 Read Recovery Levels: Not Supported 00:31:23.349 Endurance Groups: Not Supported 00:31:23.349 Predictable Latency Mode: Not Supported 00:31:23.350 Traffic Based Keep ALive: Not Supported 00:31:23.350 Namespace Granularity: Not Supported 00:31:23.350 SQ Associations: Not Supported 00:31:23.350 UUID List: Not Supported 00:31:23.350 Multi-Domain Subsystem: Not Supported 00:31:23.350 Fixed Capacity Management: Not Supported 00:31:23.350 Variable Capacity Management: Not Supported 00:31:23.350 Delete Endurance Group: Not Supported 00:31:23.350 Delete NVM Set: Not Supported 00:31:23.350 Extended LBA Formats Supported: Not Supported 00:31:23.350 Flexible Data Placement Supported: Not Supported 00:31:23.350 00:31:23.350 Controller Memory Buffer Support 00:31:23.350 ================================ 00:31:23.350 Supported: No 00:31:23.350 00:31:23.350 Persistent Memory Region Support 00:31:23.350 ================================ 00:31:23.350 Supported: No 00:31:23.350 00:31:23.350 Admin Command Set Attributes 00:31:23.350 ============================ 
00:31:23.350 Security Send/Receive: Not Supported 00:31:23.350 Format NVM: Not Supported 00:31:23.350 Firmware Activate/Download: Not Supported 00:31:23.350 Namespace Management: Not Supported 00:31:23.350 Device Self-Test: Not Supported 00:31:23.350 Directives: Not Supported 00:31:23.350 NVMe-MI: Not Supported 00:31:23.350 Virtualization Management: Not Supported 00:31:23.350 Doorbell Buffer Config: Not Supported 00:31:23.350 Get LBA Status Capability: Not Supported 00:31:23.350 Command & Feature Lockdown Capability: Not Supported 00:31:23.350 Abort Command Limit: 4 00:31:23.350 Async Event Request Limit: 4 00:31:23.350 Number of Firmware Slots: N/A 00:31:23.350 Firmware Slot 1 Read-Only: N/A 00:31:23.350 Firmware Activation Without Reset: N/A 00:31:23.350 Multiple Update Detection Support: N/A 00:31:23.350 Firmware Update Granularity: No Information Provided 00:31:23.350 Per-Namespace SMART Log: No 00:31:23.350 Asymmetric Namespace Access Log Page: Not Supported 00:31:23.350 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:31:23.350 Command Effects Log Page: Supported 00:31:23.350 Get Log Page Extended Data: Supported 00:31:23.350 Telemetry Log Pages: Not Supported 00:31:23.350 Persistent Event Log Pages: Not Supported 00:31:23.350 Supported Log Pages Log Page: May Support 00:31:23.350 Commands Supported & Effects Log Page: Not Supported 00:31:23.350 Feature Identifiers & Effects Log Page:May Support 00:31:23.350 NVMe-MI Commands & Effects Log Page: May Support 00:31:23.350 Data Area 4 for Telemetry Log: Not Supported 00:31:23.350 Error Log Page Entries Supported: 128 00:31:23.350 Keep Alive: Supported 00:31:23.350 Keep Alive Granularity: 10000 ms 00:31:23.350 00:31:23.350 NVM Command Set Attributes 00:31:23.350 ========================== 00:31:23.350 Submission Queue Entry Size 00:31:23.350 Max: 64 00:31:23.350 Min: 64 00:31:23.350 Completion Queue Entry Size 00:31:23.350 Max: 16 00:31:23.350 Min: 16 00:31:23.350 Number of Namespaces: 32 00:31:23.350 Compare Command: Supported 00:31:23.350 Write Uncorrectable Command: Not Supported 00:31:23.350 Dataset Management Command: Supported 00:31:23.350 Write Zeroes Command: Supported 00:31:23.350 Set Features Save Field: Not Supported 00:31:23.350 Reservations: Supported 00:31:23.350 Timestamp: Not Supported 00:31:23.350 Copy: Supported 00:31:23.350 Volatile Write Cache: Present 00:31:23.350 Atomic Write Unit (Normal): 1 00:31:23.350 Atomic Write Unit (PFail): 1 00:31:23.350 Atomic Compare & Write Unit: 1 00:31:23.350 Fused Compare & Write: Supported 00:31:23.350 Scatter-Gather List 00:31:23.350 SGL Command Set: Supported 00:31:23.350 SGL Keyed: Supported 00:31:23.350 SGL Bit Bucket Descriptor: Not Supported 00:31:23.350 SGL Metadata Pointer: Not Supported 00:31:23.350 Oversized SGL: Not Supported 00:31:23.350 SGL Metadata Address: Not Supported 00:31:23.350 SGL Offset: Supported 00:31:23.350 Transport SGL Data Block: Not Supported 00:31:23.350 Replay Protected Memory Block: Not Supported 00:31:23.350 00:31:23.350 Firmware Slot Information 00:31:23.350 ========================= 00:31:23.350 Active slot: 1 00:31:23.350 Slot 1 Firmware Revision: 25.01 00:31:23.350 00:31:23.350 00:31:23.350 Commands Supported and Effects 00:31:23.350 ============================== 00:31:23.350 Admin Commands 00:31:23.350 -------------- 00:31:23.350 Get Log Page (02h): Supported 00:31:23.350 Identify (06h): Supported 00:31:23.350 Abort (08h): Supported 00:31:23.350 Set Features (09h): Supported 00:31:23.350 Get Features (0Ah): Supported 00:31:23.350 Asynchronous Event 
Request (0Ch): Supported 00:31:23.350 Keep Alive (18h): Supported 00:31:23.350 I/O Commands 00:31:23.350 ------------ 00:31:23.350 Flush (00h): Supported LBA-Change 00:31:23.350 Write (01h): Supported LBA-Change 00:31:23.350 Read (02h): Supported 00:31:23.350 Compare (05h): Supported 00:31:23.350 Write Zeroes (08h): Supported LBA-Change 00:31:23.350 Dataset Management (09h): Supported LBA-Change 00:31:23.350 Copy (19h): Supported LBA-Change 00:31:23.350 00:31:23.350 Error Log 00:31:23.350 ========= 00:31:23.350 00:31:23.350 Arbitration 00:31:23.350 =========== 00:31:23.350 Arbitration Burst: 1 00:31:23.350 00:31:23.350 Power Management 00:31:23.350 ================ 00:31:23.350 Number of Power States: 1 00:31:23.350 Current Power State: Power State #0 00:31:23.350 Power State #0: 00:31:23.350 Max Power: 0.00 W 00:31:23.350 Non-Operational State: Operational 00:31:23.350 Entry Latency: Not Reported 00:31:23.350 Exit Latency: Not Reported 00:31:23.350 Relative Read Throughput: 0 00:31:23.350 Relative Read Latency: 0 00:31:23.350 Relative Write Throughput: 0 00:31:23.350 Relative Write Latency: 0 00:31:23.350 Idle Power: Not Reported 00:31:23.350 Active Power: Not Reported 00:31:23.350 Non-Operational Permissive Mode: Not Supported 00:31:23.350 00:31:23.350 Health Information 00:31:23.350 ================== 00:31:23.350 Critical Warnings: 00:31:23.350 Available Spare Space: OK 00:31:23.350 Temperature: OK 00:31:23.350 Device Reliability: OK 00:31:23.350 Read Only: No 00:31:23.350 Volatile Memory Backup: OK 00:31:23.350 Current Temperature: 0 Kelvin (-273 Celsius) 00:31:23.350 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:31:23.350 Available Spare: 0% 00:31:23.350 Available Spare Threshold: 0% 00:31:23.350 Life Percentage Used:[2024-10-09 11:11:43.139398] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:23.350 [2024-10-09 11:11:43.139404] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x11cc060) 00:31:23.350 [2024-10-09 11:11:43.139410] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.350 [2024-10-09 11:11:43.139422] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1239500, cid 7, qid 0 00:31:23.350 [2024-10-09 11:11:43.143472] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:23.350 [2024-10-09 11:11:43.143480] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:23.350 [2024-10-09 11:11:43.143483] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:23.350 [2024-10-09 11:11:43.143488] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1239500) on tqpair=0x11cc060 00:31:23.350 [2024-10-09 11:11:43.143519] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:31:23.350 [2024-10-09 11:11:43.143528] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1238a80) on tqpair=0x11cc060 00:31:23.350 [2024-10-09 11:11:43.143535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:23.350 [2024-10-09 11:11:43.143540] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1238c00) on tqpair=0x11cc060 00:31:23.350 [2024-10-09 11:11:43.143546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:23.350 
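The identify report interleaved with the shutdown trace here is produced by SPDK's identify example application run against the target named in the report header (10.0.0.2:4420, nqn.2016-06.io.spdk:cnode1). A sketch of reproducing it by hand, assuming a default SPDK build tree (the binary path varies by SPDK version and is an assumption; the transport string values are copied from the report header):

    # Query the same NVMe-oF TCP controller directly; this should print a
    # capability/namespace report like the one captured in this log.
    ./build/examples/identify \
        -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The "Prepare to destruct SSD" and ABORTED - SQ DELETION records interleaved below are the tail of the same run: the example app detaches from the controller once the report is printed, which aborts the outstanding async event requests on qid 0.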
[2024-10-09 11:11:43.143552] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1238d80) on tqpair=0x11cc060 00:31:23.350 [2024-10-09 11:11:43.143556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:23.350 [2024-10-09 11:11:43.143562] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1238f00) on tqpair=0x11cc060 00:31:23.350 [2024-10-09 11:11:43.143566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:23.350 [2024-10-09 11:11:43.143574] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:23.350 [2024-10-09 11:11:43.143578] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:23.350 [2024-10-09 11:11:43.143582] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11cc060) 00:31:23.350 [2024-10-09 11:11:43.143589] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.350 [2024-10-09 11:11:43.143601] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1238f00, cid 3, qid 0 00:31:23.350 [2024-10-09 11:11:43.143819] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:23.350 [2024-10-09 11:11:43.143825] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:23.350 [2024-10-09 11:11:43.143829] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:23.350 [2024-10-09 11:11:43.143833] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1238f00) on tqpair=0x11cc060 00:31:23.350 [2024-10-09 11:11:43.143839] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:23.350 [2024-10-09 11:11:43.143843] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:23.350 [2024-10-09 11:11:43.143847] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11cc060) 00:31:23.350 [2024-10-09 11:11:43.143853] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.350 [2024-10-09 11:11:43.143866] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1238f00, cid 3, qid 0 00:31:23.350 [2024-10-09 11:11:43.144047] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:23.351 [2024-10-09 11:11:43.144053] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:23.351 [2024-10-09 11:11:43.144056] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:23.351 [2024-10-09 11:11:43.144060] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1238f00) on tqpair=0x11cc060 00:31:23.351 [2024-10-09 11:11:43.144065] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:31:23.351 [2024-10-09 11:11:43.144070] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:31:23.351 [2024-10-09 11:11:43.144079] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:23.351 [2024-10-09 11:11:43.144083] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:23.351 [2024-10-09 11:11:43.144086] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11cc060) 00:31:23.351 [2024-10-09 11:11:43.144093] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.351 [2024-10-09 11:11:43.144103] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1238f00, cid 3, qid 0 00:31:23.351 [2024-10-09 11:11:43.144321] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:23.351 [2024-10-09 11:11:43.144328] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:23.351 [2024-10-09 11:11:43.144331] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:23.351 [2024-10-09 11:11:43.144335] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1238f00) on tqpair=0x11cc060 00:31:23.351 [2024-10-09 11:11:43.144345] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:23.351 [2024-10-09 11:11:43.144351] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:23.351 [2024-10-09 11:11:43.144355] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11cc060) 00:31:23.351 [2024-10-09 11:11:43.144362] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.351 [2024-10-09 11:11:43.144372] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1238f00, cid 3, qid 0 00:31:23.351 [2024-10-09 11:11:43.144572] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:23.351 [2024-10-09 11:11:43.144579] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:23.351 [2024-10-09 11:11:43.144583] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:23.351 [2024-10-09 11:11:43.144587] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1238f00) on tqpair=0x11cc060 00:31:23.351 [2024-10-09 11:11:43.144596] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:23.351 [2024-10-09 11:11:43.144600] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:23.351 [2024-10-09 11:11:43.144604] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11cc060) 00:31:23.351 [2024-10-09 11:11:43.144611] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.351 [2024-10-09 11:11:43.144621] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1238f00, cid 3, qid 0 00:31:23.351 [2024-10-09 11:11:43.144874] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:23.351 [2024-10-09 11:11:43.144880] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:23.351 [2024-10-09 11:11:43.144883] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:23.351 [2024-10-09 11:11:43.144887] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1238f00) on tqpair=0x11cc060 00:31:23.351 [2024-10-09 11:11:43.144897] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:23.351 [2024-10-09 11:11:43.144900] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:23.351 [2024-10-09 11:11:43.144904] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11cc060) 00:31:23.351 [2024-10-09 11:11:43.144911] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.351 [2024-10-09 11:11:43.144921] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0x1238f00, cid 3, qid 0 00:31:23.351 [2024-10-09 11:11:43.145100] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:23.351 [2024-10-09 11:11:43.145107] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:23.351 [2024-10-09 11:11:43.145110] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:23.351 [2024-10-09 11:11:43.145114] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1238f00) on tqpair=0x11cc060 00:31:23.351 [2024-10-09 11:11:43.145123] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:23.351 [2024-10-09 11:11:43.145127] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:23.351 [2024-10-09 11:11:43.145131] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11cc060) 00:31:23.351 [2024-10-09 11:11:43.145138] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.351 [2024-10-09 11:11:43.145148] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1238f00, cid 3, qid 0 00:31:23.351 [2024-10-09 11:11:43.145379] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:23.351 [2024-10-09 11:11:43.145386] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:23.351 [2024-10-09 11:11:43.145389] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:23.351 [2024-10-09 11:11:43.145393] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1238f00) on tqpair=0x11cc060 00:31:23.351 [2024-10-09 11:11:43.145403] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:23.351 [2024-10-09 11:11:43.145407] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:23.351 [2024-10-09 11:11:43.145410] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11cc060) 00:31:23.351 [2024-10-09 11:11:43.145419] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.351 [2024-10-09 11:11:43.145429] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1238f00, cid 3, qid 0 00:31:23.351 [2024-10-09 11:11:43.145632] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:23.351 [2024-10-09 11:11:43.145639] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:23.351 [2024-10-09 11:11:43.145642] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:23.351 [2024-10-09 11:11:43.145646] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1238f00) on tqpair=0x11cc060 00:31:23.351 [2024-10-09 11:11:43.145656] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:23.351 [2024-10-09 11:11:43.145660] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:23.351 [2024-10-09 11:11:43.145663] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11cc060) 00:31:23.351 [2024-10-09 11:11:43.145670] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.351 [2024-10-09 11:11:43.145680] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1238f00, cid 3, qid 0 00:31:23.351 [2024-10-09 11:11:43.145882] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:23.351 [2024-10-09 11:11:43.145888] 
nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:23.351 [2024-10-09 11:11:43.145892] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:23.351 [2024-10-09 11:11:43.145896] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1238f00) on tqpair=0x11cc060 00:31:23.351 [2024-10-09 11:11:43.145905] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:23.351 [2024-10-09 11:11:43.145909] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:23.351 [2024-10-09 11:11:43.145913] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11cc060) 00:31:23.351 [2024-10-09 11:11:43.145919] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.351 [2024-10-09 11:11:43.145929] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1238f00, cid 3, qid 0 00:31:23.351 [2024-10-09 11:11:43.146140] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:23.351 [2024-10-09 11:11:43.146147] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:23.351 [2024-10-09 11:11:43.146150] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:23.351 [2024-10-09 11:11:43.146154] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1238f00) on tqpair=0x11cc060 00:31:23.351 [2024-10-09 11:11:43.146163] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:23.351 [2024-10-09 11:11:43.146167] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:23.351 [2024-10-09 11:11:43.146171] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11cc060) 00:31:23.351 [2024-10-09 11:11:43.146178] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.351 [2024-10-09 11:11:43.146187] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1238f00, cid 3, qid 0 00:31:23.351 [2024-10-09 11:11:43.146386] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:23.351 [2024-10-09 11:11:43.146393] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:23.351 [2024-10-09 11:11:43.146396] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:23.351 [2024-10-09 11:11:43.146400] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1238f00) on tqpair=0x11cc060 00:31:23.351 [2024-10-09 11:11:43.146409] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:23.351 [2024-10-09 11:11:43.146413] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:23.351 [2024-10-09 11:11:43.146417] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11cc060) 00:31:23.351 [2024-10-09 11:11:43.146425] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.351 [2024-10-09 11:11:43.146436] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1238f00, cid 3, qid 0 00:31:23.351 [2024-10-09 11:11:43.146690] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:23.351 [2024-10-09 11:11:43.146696] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:23.351 [2024-10-09 11:11:43.146700] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:23.351 [2024-10-09 
11:11:43.146704] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1238f00) on tqpair=0x11cc060 00:31:23.351 [2024-10-09 11:11:43.146713] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:23.351 [2024-10-09 11:11:43.146717] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:23.351 [2024-10-09 11:11:43.146721] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11cc060) 00:31:23.351 [2024-10-09 11:11:43.146727] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.351 [2024-10-09 11:11:43.146738] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1238f00, cid 3, qid 0 00:31:23.351 [2024-10-09 11:11:43.146941] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:23.351 [2024-10-09 11:11:43.146947] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:23.351 [2024-10-09 11:11:43.146950] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:23.351 [2024-10-09 11:11:43.146954] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1238f00) on tqpair=0x11cc060 00:31:23.351 [2024-10-09 11:11:43.146964] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:23.351 [2024-10-09 11:11:43.146968] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:23.351 [2024-10-09 11:11:43.146971] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11cc060) 00:31:23.351 [2024-10-09 11:11:43.146978] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.351 [2024-10-09 11:11:43.146988] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1238f00, cid 3, qid 0 00:31:23.351 [2024-10-09 11:11:43.147146] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:23.351 [2024-10-09 11:11:43.147153] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:23.352 [2024-10-09 11:11:43.147156] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:23.352 [2024-10-09 11:11:43.147160] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1238f00) on tqpair=0x11cc060 00:31:23.352 [2024-10-09 11:11:43.147170] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:23.352 [2024-10-09 11:11:43.147174] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:23.352 [2024-10-09 11:11:43.147177] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11cc060) 00:31:23.352 [2024-10-09 11:11:43.147184] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.352 [2024-10-09 11:11:43.147194] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1238f00, cid 3, qid 0 00:31:23.352 [2024-10-09 11:11:43.147446] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:23.352 [2024-10-09 11:11:43.147453] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:23.352 [2024-10-09 11:11:43.147456] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:23.352 [2024-10-09 11:11:43.147460] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1238f00) on tqpair=0x11cc060 00:31:23.352 [2024-10-09 11:11:43.151475] nvme_tcp.c: 800:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:31:23.352 [2024-10-09 11:11:43.151482] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:23.352 [2024-10-09 11:11:43.151485] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11cc060) 00:31:23.352 [2024-10-09 11:11:43.151492] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:23.352 [2024-10-09 11:11:43.151507] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1238f00, cid 3, qid 0 00:31:23.352 [2024-10-09 11:11:43.151684] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:23.352 [2024-10-09 11:11:43.151690] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:23.352 [2024-10-09 11:11:43.151694] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:23.352 [2024-10-09 11:11:43.151697] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1238f00) on tqpair=0x11cc060 00:31:23.352 [2024-10-09 11:11:43.151705] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:31:23.352 0% 00:31:23.352 Data Units Read: 0 00:31:23.352 Data Units Written: 0 00:31:23.352 Host Read Commands: 0 00:31:23.352 Host Write Commands: 0 00:31:23.352 Controller Busy Time: 0 minutes 00:31:23.352 Power Cycles: 0 00:31:23.352 Power On Hours: 0 hours 00:31:23.352 Unsafe Shutdowns: 0 00:31:23.352 Unrecoverable Media Errors: 0 00:31:23.352 Lifetime Error Log Entries: 0 00:31:23.352 Warning Temperature Time: 0 minutes 00:31:23.352 Critical Temperature Time: 0 minutes 00:31:23.352 00:31:23.352 Number of Queues 00:31:23.352 ================ 00:31:23.352 Number of I/O Submission Queues: 127 00:31:23.352 Number of I/O Completion Queues: 127 00:31:23.352 00:31:23.352 Active Namespaces 00:31:23.352 ================= 00:31:23.352 Namespace ID:1 00:31:23.352 Error Recovery Timeout: Unlimited 00:31:23.352 Command Set Identifier: NVM (00h) 00:31:23.352 Deallocate: Supported 00:31:23.352 Deallocated/Unwritten Error: Not Supported 00:31:23.352 Deallocated Read Value: Unknown 00:31:23.352 Deallocate in Write Zeroes: Not Supported 00:31:23.352 Deallocated Guard Field: 0xFFFF 00:31:23.352 Flush: Supported 00:31:23.352 Reservation: Supported 00:31:23.352 Namespace Sharing Capabilities: Multiple Controllers 00:31:23.352 Size (in LBAs): 131072 (0GiB) 00:31:23.352 Capacity (in LBAs): 131072 (0GiB) 00:31:23.352 Utilization (in LBAs): 131072 (0GiB) 00:31:23.352 NGUID: ABCDEF0123456789ABCDEF0123456789 00:31:23.352 EUI64: ABCDEF0123456789 00:31:23.352 UUID: 0e836a8d-f251-49c2-b927-bfce9436f94d 00:31:23.352 Thin Provisioning: Not Supported 00:31:23.352 Per-NS Atomic Units: Yes 00:31:23.352 Atomic Boundary Size (Normal): 0 00:31:23.352 Atomic Boundary Size (PFail): 0 00:31:23.352 Atomic Boundary Offset: 0 00:31:23.352 Maximum Single Source Range Length: 65535 00:31:23.352 Maximum Copy Length: 65535 00:31:23.352 Maximum Source Range Count: 1 00:31:23.352 NGUID/EUI64 Never Reused: No 00:31:23.352 Namespace Write Protected: No 00:31:23.352 Number of LBA Formats: 1 00:31:23.352 Current LBA Format: LBA Format #00 00:31:23.352 LBA Format #00: Data Size: 512 Metadata Size: 0 00:31:23.352 00:31:23.352 11:11:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:31:23.352 11:11:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:23.352 11:11:43 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:23.352 11:11:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:23.352 11:11:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:23.352 11:11:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:31:23.352 11:11:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:31:23.352 11:11:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@514 -- # nvmfcleanup 00:31:23.352 11:11:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:31:23.352 11:11:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:23.352 11:11:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:31:23.352 11:11:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:23.352 11:11:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:23.352 rmmod nvme_tcp 00:31:23.352 rmmod nvme_fabrics 00:31:23.352 rmmod nvme_keyring 00:31:23.352 11:11:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:23.352 11:11:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:31:23.352 11:11:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:31:23.352 11:11:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@515 -- # '[' -n 2019159 ']' 00:31:23.352 11:11:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # killprocess 2019159 00:31:23.352 11:11:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 2019159 ']' 00:31:23.352 11:11:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 2019159 00:31:23.352 11:11:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:31:23.352 11:11:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:23.352 11:11:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2019159 00:31:23.352 11:11:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:23.352 11:11:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:23.352 11:11:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2019159' 00:31:23.352 killing process with pid 2019159 00:31:23.352 11:11:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 2019159 00:31:23.352 11:11:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 2019159 00:31:23.613 11:11:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:31:23.613 11:11:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:31:23.613 11:11:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:31:23.613 11:11:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:31:23.613 11:11:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # iptables-save 00:31:23.613 11:11:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:31:23.613 11:11:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # iptables-restore 00:31:23.613 11:11:43 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:23.613 11:11:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:23.613 11:11:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:23.613 11:11:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:23.613 11:11:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:25.524 11:11:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:25.524 00:31:25.524 real 0m12.013s 00:31:25.524 user 0m9.484s 00:31:25.524 sys 0m6.152s 00:31:25.524 11:11:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:25.524 11:11:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:25.524 ************************************ 00:31:25.524 END TEST nvmf_identify 00:31:25.524 ************************************ 00:31:25.784 11:11:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:31:25.784 11:11:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:25.784 11:11:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:25.784 11:11:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:25.784 ************************************ 00:31:25.784 START TEST nvmf_perf 00:31:25.784 ************************************ 00:31:25.784 11:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:31:25.784 * Looking for test storage... 
00:31:25.784 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:25.784 11:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:25.784 11:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lcov --version 00:31:25.784 11:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:26.048 11:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:26.048 11:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:26.048 11:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:26.048 11:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:26.048 11:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:31:26.048 11:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:31:26.048 11:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:31:26.048 11:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:31:26.048 11:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:31:26.048 11:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:31:26.048 11:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:31:26.048 11:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:26.048 11:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:31:26.048 11:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:31:26.048 11:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:26.048 11:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:26.048 11:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:31:26.048 11:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:31:26.048 11:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:26.048 11:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:31:26.048 11:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:31:26.048 11:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:31:26.048 11:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:31:26.048 11:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:26.048 11:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:31:26.048 11:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:31:26.048 11:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:26.048 11:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:26.048 11:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:31:26.048 11:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:26.048 11:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:26.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:26.048 --rc genhtml_branch_coverage=1 00:31:26.048 --rc genhtml_function_coverage=1 00:31:26.048 --rc genhtml_legend=1 00:31:26.048 --rc geninfo_all_blocks=1 00:31:26.048 --rc geninfo_unexecuted_blocks=1 00:31:26.048 00:31:26.049 ' 00:31:26.049 11:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:26.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:26.049 --rc genhtml_branch_coverage=1 00:31:26.049 --rc genhtml_function_coverage=1 00:31:26.049 --rc genhtml_legend=1 00:31:26.049 --rc geninfo_all_blocks=1 00:31:26.049 --rc geninfo_unexecuted_blocks=1 00:31:26.049 00:31:26.049 ' 00:31:26.049 11:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:26.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:26.049 --rc genhtml_branch_coverage=1 00:31:26.049 --rc genhtml_function_coverage=1 00:31:26.049 --rc genhtml_legend=1 00:31:26.049 --rc geninfo_all_blocks=1 00:31:26.049 --rc geninfo_unexecuted_blocks=1 00:31:26.049 00:31:26.049 ' 00:31:26.049 11:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:26.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:26.049 --rc genhtml_branch_coverage=1 00:31:26.049 --rc genhtml_function_coverage=1 00:31:26.049 --rc genhtml_legend=1 00:31:26.049 --rc geninfo_all_blocks=1 00:31:26.049 --rc geninfo_unexecuted_blocks=1 00:31:26.049 00:31:26.049 ' 00:31:26.049 11:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:26.049 11:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:31:26.049 11:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:26.049 11:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:26.049 11:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:26.049 11:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:26.049 11:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:26.049 11:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:26.049 11:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:26.049 11:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:26.049 11:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:26.049 11:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:26.049 11:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:26.049 11:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:26.049 11:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:26.049 11:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:26.049 11:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:26.049 11:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:26.049 11:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:26.049 11:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:31:26.049 11:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:26.049 11:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:26.049 11:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:26.049 11:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:26.049 11:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:26.049 11:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:26.049 11:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:31:26.049 11:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:26.049 11:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:31:26.049 11:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:26.049 11:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:26.049 11:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:26.049 11:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:26.049 11:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:26.049 11:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:26.049 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:26.049 11:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:26.049 11:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:26.049 11:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:26.049 11:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:31:26.049 11:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:31:26.049 11:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:26.049 11:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:31:26.049 11:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:31:26.049 11:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:26.049 11:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # prepare_net_devs 00:31:26.049 11:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:31:26.049 11:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:31:26.049 11:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:26.049 11:11:45 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:26.049 11:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:26.049 11:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:31:26.049 11:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:31:26.049 11:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:31:26.049 11:11:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:32.635 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:32.635 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:31:32.635 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:32.635 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:32.635 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:32.635 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:32.635 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:32.635 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:31:32.635 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:32.635 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:31:32.635 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:31:32.635 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:31:32.635 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:31:32.635 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:31:32.635 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:31:32.635 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:32.635 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:32.635 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:32.635 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:32.635 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:32.635 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:32.635 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:32.635 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:32.635 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:32.635 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:32.635 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:32.635 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:32.635 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:31:32.635 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:32.635 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:32.635 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:32.635 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:32.635 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:32.635 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:32.635 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:32.635 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:32.635 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:32.635 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:32.635 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:32.635 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:32.635 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:32.635 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:32.635 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:32.635 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:32.635 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:32.635 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:32.635 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:32.635 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:32.635 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:32.635 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:32.635 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:32.635 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:32.635 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:32.635 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:32.635 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:32.636 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:32.636 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:32.636 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:32.636 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:32.636 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:32.636 Found net devices under 0000:31:00.0: cvl_0_0 00:31:32.636 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:32.636 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:32.636 11:11:52 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:32.636 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:32.636 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:32.636 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:32.636 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:32.636 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:32.636 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:32.636 Found net devices under 0000:31:00.1: cvl_0_1 00:31:32.636 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:32.636 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:31:32.636 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # is_hw=yes 00:31:32.636 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:31:32.636 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:31:32.636 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:31:32.636 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:32.636 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:32.636 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:32.636 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:32.636 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:32.636 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:32.636 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:32.636 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:32.636 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:32.636 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:32.636 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:32.636 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:32.636 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:32.636 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:32.636 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:32.896 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:32.896 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:32.896 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:32.896 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:32.896 11:11:52 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:32.896 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:32.896 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:32.896 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:32.896 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:32.896 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.574 ms 00:31:32.896 00:31:32.896 --- 10.0.0.2 ping statistics --- 00:31:32.896 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:32.896 rtt min/avg/max/mdev = 0.574/0.574/0.574/0.000 ms 00:31:32.896 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:32.896 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:32.896 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.271 ms 00:31:32.896 00:31:32.897 --- 10.0.0.1 ping statistics --- 00:31:32.897 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:32.897 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:31:32.897 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:32.897 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # return 0 00:31:32.897 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:31:32.897 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:32.897 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:31:32.897 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:31:32.897 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:32.897 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:31:32.897 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:31:32.897 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:31:32.897 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:31:32.897 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:32.897 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:32.897 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # nvmfpid=2023811 00:31:32.897 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:32.897 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # waitforlisten 2023811 00:31:32.897 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 2023811 ']' 00:31:32.897 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:32.897 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:32.897 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:31:32.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:32.897 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:32.897 11:11:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:33.158 [2024-10-09 11:11:52.911847] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:31:33.158 [2024-10-09 11:11:52.911897] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:33.158 [2024-10-09 11:11:53.050690] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:31:33.158 [2024-10-09 11:11:53.081994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:33.158 [2024-10-09 11:11:53.099831] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:33.158 [2024-10-09 11:11:53.099862] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:33.158 [2024-10-09 11:11:53.099870] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:33.158 [2024-10-09 11:11:53.099880] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:33.158 [2024-10-09 11:11:53.099886] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:33.158 [2024-10-09 11:11:53.101400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:33.158 [2024-10-09 11:11:53.101531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:33.158 [2024-10-09 11:11:53.101587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:33.158 [2024-10-09 11:11:53.101588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:34.099 11:11:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:34.099 11:11:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:31:34.099 11:11:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:31:34.099 11:11:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:34.099 11:11:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:34.099 11:11:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:34.099 11:11:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:34.099 11:11:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:31:34.360 11:11:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:31:34.360 11:11:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:31:34.621 11:11:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:31:34.621 11:11:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:34.882 11:11:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:31:34.882 11:11:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:31:34.882 11:11:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:31:34.882 11:11:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:31:34.882 11:11:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:31:34.882 [2024-10-09 11:11:54.856032] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:35.143 11:11:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:35.143 11:11:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:31:35.143 11:11:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:35.403 11:11:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:31:35.403 11:11:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:31:35.664 11:11:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:35.664 [2024-10-09 11:11:55.589046] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:35.664 11:11:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:35.924 11:11:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:31:35.925 11:11:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:31:35.925 11:11:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:31:35.925 11:11:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:31:37.308 Initializing NVMe Controllers 00:31:37.308 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:31:37.308 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:31:37.308 Initialization complete. Launching workers. 
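The sequence above is the whole target bring-up that perf.sh performs before the first measurement, driven through rpc.py against the nvmf_tgt launched earlier inside the cvl_0_0_ns_spdk network namespace (nvmf/common.sh@506). A minimal sketch of the same calls, with $RPC standing in for the workspace rpc.py path and the comments added here as glosses on the trace rather than taken from the script:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o                  # transport options exactly as recorded above
    $RPC bdev_malloc_create 64 512                        # 64 MiB RAM bdev with 512 B blocks -> Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a allows any host NQN
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # exported as NSID 1
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1    # the local drive at 0000:65:00.0, NSID 2
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The q=32 / 4 KiB baseline immediately above talks to the drive directly over PCIe (trtype:PCIe traddr:0000:65:00.0); every run after it goes through this subsystem over TCP, which is what makes the two sets of latency tables comparable.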
00:31:37.308 ======================================================== 00:31:37.308 Latency(us) 00:31:37.308 Device Information : IOPS MiB/s Average min max 00:31:37.308 PCIE (0000:65:00.0) NSID 1 from core 0: 79091.91 308.95 403.97 13.27 4992.48 00:31:37.308 ======================================================== 00:31:37.308 Total : 79091.91 308.95 403.97 13.27 4992.48 00:31:37.308 00:31:37.308 11:11:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:38.691 Initializing NVMe Controllers 00:31:38.691 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:38.691 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:38.691 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:38.691 Initialization complete. Launching workers. 00:31:38.691 ======================================================== 00:31:38.691 Latency(us) 00:31:38.691 Device Information : IOPS MiB/s Average min max 00:31:38.691 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 78.00 0.30 13114.81 145.74 46040.35 00:31:38.691 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 51.00 0.20 19744.09 7982.27 48630.49 00:31:38.691 ======================================================== 00:31:38.691 Total : 129.00 0.50 15735.69 145.74 48630.49 00:31:38.691 00:31:38.691 11:11:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:40.074 Initializing NVMe Controllers 00:31:40.074 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:40.074 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:40.074 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:40.074 Initialization complete. Launching workers. 00:31:40.074 ======================================================== 00:31:40.074 Latency(us) 00:31:40.074 Device Information : IOPS MiB/s Average min max 00:31:40.074 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10354.60 40.45 3090.84 527.98 6556.55 00:31:40.074 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3672.80 14.35 8742.04 5156.11 16174.04 00:31:40.074 ======================================================== 00:31:40.074 Total : 14027.40 54.79 4570.50 527.98 16174.04 00:31:40.074 00:31:40.074 11:11:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:31:40.074 11:11:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:31:40.074 11:11:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:42.616 Initializing NVMe Controllers 00:31:42.616 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:42.616 Controller IO queue size 128, less than required. 00:31:42.616 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:31:42.616 Controller IO queue size 128, less than required. 00:31:42.616 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:42.616 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:42.616 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:42.616 Initialization complete. Launching workers. 00:31:42.616 ======================================================== 00:31:42.616 Latency(us) 00:31:42.616 Device Information : IOPS MiB/s Average min max 00:31:42.616 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1637.56 409.39 79412.33 51643.10 151358.57 00:31:42.616 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 611.78 152.94 223213.78 46803.25 360700.67 00:31:42.616 ======================================================== 00:31:42.616 Total : 2249.34 562.33 118523.78 46803.25 360700.67 00:31:42.616 00:31:42.616 11:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:31:43.188 No valid NVMe controllers or AIO or URING devices found 00:31:43.188 Initializing NVMe Controllers 00:31:43.188 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:43.188 Controller IO queue size 128, less than required. 00:31:43.188 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:43.188 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:31:43.188 Controller IO queue size 128, less than required. 00:31:43.188 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:43.188 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:31:43.188 WARNING: Some requested NVMe devices were skipped 00:31:43.188 11:12:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:31:46.485 Initializing NVMe Controllers 00:31:46.485 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:46.485 Controller IO queue size 128, less than required. 00:31:46.485 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:46.485 Controller IO queue size 128, less than required. 00:31:46.485 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:46.485 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:46.485 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:46.485 Initialization complete. Launching workers. 
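One run in this block produced no data: "No valid NVMe controllers or AIO or URING devices found" for the -q 128 -o 36964 -O 4096 invocation. The warnings above it state the reason: the IO size handed to spdk_nvme_perf must be a whole number of sectors, and 36964 bytes is not a multiple of the 512-byte sector size of either namespace, so both were removed from the test. The check is one line of shell (the 36864 case is only an illustrative counter-example, not from this run):

    echo $(( 36964 % 512 ))    # 100 -> not sector-aligned; both namespaces skipped
    echo $(( 36864 % 512 ))    # 0   -> 72 whole sectors; a size like this would pass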
00:31:46.485 00:31:46.485 ==================== 00:31:46.485 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:31:46.485 TCP transport: 00:31:46.485 polls: 19705 00:31:46.485 idle_polls: 10558 00:31:46.485 sock_completions: 9147 00:31:46.485 nvme_completions: 6445 00:31:46.485 submitted_requests: 9650 00:31:46.485 queued_requests: 1 00:31:46.485 00:31:46.485 ==================== 00:31:46.485 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:31:46.485 TCP transport: 00:31:46.485 polls: 19436 00:31:46.485 idle_polls: 10727 00:31:46.485 sock_completions: 8709 00:31:46.485 nvme_completions: 6843 00:31:46.485 submitted_requests: 10372 00:31:46.485 queued_requests: 1 00:31:46.485 ======================================================== 00:31:46.485 Latency(us) 00:31:46.485 Device Information : IOPS MiB/s Average min max 00:31:46.485 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1610.10 402.53 81117.75 41698.93 149792.57 00:31:46.485 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1709.55 427.39 76108.29 36749.48 142397.07 00:31:46.485 ======================================================== 00:31:46.485 Total : 3319.65 829.91 78537.99 36749.48 149792.57 00:31:46.485 00:31:46.485 11:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:31:46.485 11:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:46.485 11:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:31:46.485 11:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:65:00.0 ']' 00:31:46.485 11:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:31:47.056 11:12:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # ls_guid=c74a577d-b57e-4cd5-9d61-21b81cab64e2 00:31:47.056 11:12:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb c74a577d-b57e-4cd5-9d61-21b81cab64e2 00:31:47.056 11:12:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=c74a577d-b57e-4cd5-9d61-21b81cab64e2 00:31:47.056 11:12:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:31:47.056 11:12:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:31:47.056 11:12:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:31:47.056 11:12:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:47.317 11:12:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:31:47.317 { 00:31:47.317 "uuid": "c74a577d-b57e-4cd5-9d61-21b81cab64e2", 00:31:47.317 "name": "lvs_0", 00:31:47.317 "base_bdev": "Nvme0n1", 00:31:47.317 "total_data_clusters": 457407, 00:31:47.317 "free_clusters": 457407, 00:31:47.317 "block_size": 512, 00:31:47.317 "cluster_size": 4194304 00:31:47.317 } 00:31:47.317 ]' 00:31:47.317 11:12:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="c74a577d-b57e-4cd5-9d61-21b81cab64e2") .free_clusters' 00:31:47.317 11:12:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=457407 00:31:47.317 11:12:07 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="c74a577d-b57e-4cd5-9d61-21b81cab64e2") .cluster_size' 00:31:47.577 11:12:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:31:47.577 11:12:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=1829628 00:31:47.577 11:12:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 1829628 00:31:47.577 1829628 00:31:47.577 11:12:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 1829628 -gt 20480 ']' 00:31:47.577 11:12:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:31:47.577 11:12:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c74a577d-b57e-4cd5-9d61-21b81cab64e2 lbd_0 20480 00:31:47.577 11:12:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=862c5c00-2d9f-4ecb-9353-eb8d0e3b06d3 00:31:47.577 11:12:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 862c5c00-2d9f-4ecb-9353-eb8d0e3b06d3 lvs_n_0 00:31:49.487 11:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=04179847-80a9-4dca-841a-f204d029839b 00:31:49.487 11:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 04179847-80a9-4dca-841a-f204d029839b 00:31:49.487 11:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=04179847-80a9-4dca-841a-f204d029839b 00:31:49.487 11:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:31:49.487 11:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:31:49.487 11:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:31:49.487 11:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:49.487 11:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:31:49.487 { 00:31:49.487 "uuid": "c74a577d-b57e-4cd5-9d61-21b81cab64e2", 00:31:49.487 "name": "lvs_0", 00:31:49.487 "base_bdev": "Nvme0n1", 00:31:49.487 "total_data_clusters": 457407, 00:31:49.487 "free_clusters": 452287, 00:31:49.487 "block_size": 512, 00:31:49.487 "cluster_size": 4194304 00:31:49.487 }, 00:31:49.487 { 00:31:49.487 "uuid": "04179847-80a9-4dca-841a-f204d029839b", 00:31:49.487 "name": "lvs_n_0", 00:31:49.487 "base_bdev": "862c5c00-2d9f-4ecb-9353-eb8d0e3b06d3", 00:31:49.487 "total_data_clusters": 5114, 00:31:49.487 "free_clusters": 5114, 00:31:49.487 "block_size": 512, 00:31:49.487 "cluster_size": 4194304 00:31:49.487 } 00:31:49.487 ]' 00:31:49.487 11:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="04179847-80a9-4dca-841a-f204d029839b") .free_clusters' 00:31:49.487 11:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=5114 00:31:49.487 11:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="04179847-80a9-4dca-841a-f204d029839b") .cluster_size' 00:31:49.487 11:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:31:49.487 11:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=20456 00:31:49.487 11:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@1374 -- # echo 20456 00:31:49.487 20456 00:31:49.487 11:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:31:49.487 11:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 04179847-80a9-4dca-841a-f204d029839b lbd_nest_0 20456 00:31:49.747 11:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=250176f8-b7a1-41c9-85a4-3afd1a014254 00:31:49.747 11:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:50.007 11:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:31:50.007 11:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 250176f8-b7a1-41c9-85a4-3afd1a014254 00:31:50.007 11:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:50.267 11:12:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:31:50.267 11:12:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:31:50.267 11:12:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:31:50.267 11:12:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:50.267 11:12:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:02.497 Initializing NVMe Controllers 00:32:02.497 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:02.497 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:02.497 Initialization complete. Launching workers. 00:32:02.497 ======================================================== 00:32:02.498 Latency(us) 00:32:02.498 Device Information : IOPS MiB/s Average min max 00:32:02.498 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 46.20 0.02 21729.62 124.78 46336.78 00:32:02.498 ======================================================== 00:32:02.498 Total : 46.20 0.02 21729.62 124.78 46336.78 00:32:02.498 00:32:02.498 11:12:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:02.498 11:12:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:12.608 Initializing NVMe Controllers 00:32:12.608 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:12.608 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:12.608 Initialization complete. Launching workers. 
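The free_mb values computed by get_lvs_free_mb in this block are nothing more than free_clusters times cluster_size from bdev_lvol_get_lvstores, expressed in MiB. Redoing the arithmetic for both stores as a sketch:

    cs=4194304                            # cluster_size: 4 MiB in bytes
    echo $(( 457407 * cs / 1048576 ))     # lvs_0: 1829628 MB free; perf.sh caps lbd_0 at 20480 MB
    echo $(( 5114 * cs / 1048576 ))       # lvs_n_0: 20456 MB free; under the cap, so lbd_nest_0 takes all of it

The second bdev_lvol_get_lvstores output is consistent with this: carving out the 20480 MB lbd_0 consumed 20480 / 4 = 5120 clusters, and 457407 - 5120 = 452287, exactly the free_clusters reported for lvs_0 afterwards.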
00:32:12.608 ======================================================== 00:32:12.608 Latency(us) 00:32:12.608 Device Information : IOPS MiB/s Average min max 00:32:12.608 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 60.60 7.58 16514.36 5063.19 51998.83 00:32:12.608 ======================================================== 00:32:12.608 Total : 60.60 7.58 16514.36 5063.19 51998.83 00:32:12.608 00:32:12.608 11:12:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:32:12.608 11:12:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:12.608 11:12:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:22.606 Initializing NVMe Controllers 00:32:22.606 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:22.606 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:22.606 Initialization complete. Launching workers. 00:32:22.606 ======================================================== 00:32:22.606 Latency(us) 00:32:22.606 Device Information : IOPS MiB/s Average min max 00:32:22.606 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8682.84 4.24 3685.83 279.81 7116.84 00:32:22.606 ======================================================== 00:32:22.606 Total : 8682.84 4.24 3685.83 279.81 7116.84 00:32:22.606 00:32:22.606 11:12:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:22.606 11:12:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:32.608 Initializing NVMe Controllers 00:32:32.608 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:32.608 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:32.608 Initialization complete. Launching workers. 00:32:32.608 ======================================================== 00:32:32.608 Latency(us) 00:32:32.608 Device Information : IOPS MiB/s Average min max 00:32:32.608 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3959.14 494.89 8083.76 621.58 21400.78 00:32:32.608 ======================================================== 00:32:32.608 Total : 3959.14 494.89 8083.76 621.58 21400.78 00:32:32.608 00:32:32.608 11:12:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:32:32.608 11:12:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:32.608 11:12:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:42.605 Initializing NVMe Controllers 00:32:42.605 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:42.605 Controller IO queue size 128, less than required. 00:32:42.605 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
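The measurements in this stretch are not ad hoc: perf.sh@95-96 above declared qd_depth=("1" "32" "128") and io_size=("512" "131072"), and the harness walks the full 3x2 matrix, so six latency tables appear in order (the q=128 pair continues below). A sketch of the loop, with $PERF standing in for the spdk_nvme_perf binary invoked throughout:

    for qd in 1 32 128; do
      for o in 512 131072; do
        $PERF -q "$qd" -o "$o" -w randrw -M 50 -t 10 \
              -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
      done
    done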
00:32:42.605 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:42.605 Initialization complete. Launching workers. 00:32:42.605 ======================================================== 00:32:42.605 Latency(us) 00:32:42.605 Device Information : IOPS MiB/s Average min max 00:32:42.605 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15825.22 7.73 8088.30 2023.00 19333.85 00:32:42.605 ======================================================== 00:32:42.605 Total : 15825.22 7.73 8088.30 2023.00 19333.85 00:32:42.605 00:32:42.605 11:13:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:42.605 11:13:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:54.834 Initializing NVMe Controllers 00:32:54.834 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:54.834 Controller IO queue size 128, less than required. 00:32:54.835 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:54.835 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:54.835 Initialization complete. Launching workers. 00:32:54.835 ======================================================== 00:32:54.835 Latency(us) 00:32:54.835 Device Information : IOPS MiB/s Average min max 00:32:54.835 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1165.98 145.75 110180.05 15310.78 257298.33 00:32:54.835 ======================================================== 00:32:54.835 Total : 1165.98 145.75 110180.05 15310.78 257298.33 00:32:54.835 00:32:54.835 11:13:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:54.835 11:13:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 250176f8-b7a1-41c9-85a4-3afd1a014254 00:32:54.835 11:13:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:32:54.835 11:13:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 862c5c00-2d9f-4ecb-9353-eb8d0e3b06d3 00:32:55.095 11:13:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:32:55.356 11:13:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:32:55.356 11:13:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:32:55.356 11:13:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@514 -- # nvmfcleanup 00:32:55.356 11:13:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:32:55.356 11:13:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:55.356 11:13:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:32:55.356 11:13:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:55.356 11:13:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:55.356 rmmod nvme_tcp 
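Teardown (perf.sh@104-108 above) unwinds the stack in reverse dependency order: stop exporting first, then delete the nested lvol and its lvstore, then the base lvol and its lvstore. Spelled out with $RPC as before:

    $RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    $RPC bdev_lvol_delete 250176f8-b7a1-41c9-85a4-3afd1a014254    # lbd_nest_0
    $RPC bdev_lvol_delete_lvstore -l lvs_n_0                      # lvstore that lived on lbd_0
    $RPC bdev_lvol_delete 862c5c00-2d9f-4ecb-9353-eb8d0e3b06d3    # lbd_0
    $RPC bdev_lvol_delete_lvstore -l lvs_0                        # lvstore on Nvme0n1

The ordering mirrors the dependency chain built during setup: lvs_n_0 sits on lbd_0, which sits on lvs_0 on Nvme0n1.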
00:32:55.356 rmmod nvme_fabrics 00:32:55.356 rmmod nvme_keyring 00:32:55.356 11:13:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:55.356 11:13:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:32:55.356 11:13:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:32:55.356 11:13:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@515 -- # '[' -n 2023811 ']' 00:32:55.356 11:13:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # killprocess 2023811 00:32:55.356 11:13:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 2023811 ']' 00:32:55.356 11:13:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 2023811 00:32:55.356 11:13:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:32:55.356 11:13:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:55.356 11:13:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2023811 00:32:55.356 11:13:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:55.356 11:13:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:55.356 11:13:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2023811' 00:32:55.356 killing process with pid 2023811 00:32:55.356 11:13:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # kill 2023811 00:32:55.356 11:13:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 2023811 00:32:57.269 11:13:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:32:57.269 11:13:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:32:57.269 11:13:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:32:57.269 11:13:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:32:57.269 11:13:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # iptables-restore 00:32:57.269 11:13:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # iptables-save 00:32:57.269 11:13:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:32:57.269 11:13:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:57.269 11:13:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:57.269 11:13:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:57.269 11:13:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:57.269 11:13:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:59.813 11:13:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:59.813 00:32:59.813 real 1m33.694s 00:32:59.813 user 5m32.323s 00:32:59.813 sys 0m15.453s 00:32:59.813 11:13:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:59.813 11:13:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:32:59.813 ************************************ 00:32:59.813 END TEST nvmf_perf 00:32:59.813 ************************************ 00:32:59.813 11:13:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:32:59.813 11:13:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:32:59.813 11:13:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:59.813 11:13:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.813 ************************************ 00:32:59.813 START TEST nvmf_fio_host 00:32:59.813 ************************************ 00:32:59.813 11:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:32:59.813 * Looking for test storage... 00:32:59.813 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:59.813 11:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:59.813 11:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lcov --version 00:32:59.813 11:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:59.813 11:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:59.813 11:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:59.813 11:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:59.813 11:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:59.813 11:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:32:59.813 11:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:32:59.813 11:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:32:59.813 11:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:32:59.813 11:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:32:59.813 11:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:32:59.813 11:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:32:59.813 11:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:59.813 11:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:32:59.813 11:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:32:59.813 11:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:59.813 11:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:59.813 11:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:32:59.813 11:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:32:59.813 11:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:59.813 11:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:32:59.813 11:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:32:59.813 11:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:32:59.813 11:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:32:59.813 11:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:59.813 11:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:32:59.813 11:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:32:59.813 11:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:59.813 11:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:59.813 11:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:32:59.813 11:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:59.813 11:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:59.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:59.813 --rc genhtml_branch_coverage=1 00:32:59.813 --rc genhtml_function_coverage=1 00:32:59.813 --rc genhtml_legend=1 00:32:59.813 --rc geninfo_all_blocks=1 00:32:59.813 --rc geninfo_unexecuted_blocks=1 00:32:59.813 00:32:59.813 ' 00:32:59.814 11:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:59.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:59.814 --rc genhtml_branch_coverage=1 00:32:59.814 --rc genhtml_function_coverage=1 00:32:59.814 --rc genhtml_legend=1 00:32:59.814 --rc geninfo_all_blocks=1 00:32:59.814 --rc geninfo_unexecuted_blocks=1 00:32:59.814 00:32:59.814 ' 00:32:59.814 11:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:59.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:59.814 --rc genhtml_branch_coverage=1 00:32:59.814 --rc genhtml_function_coverage=1 00:32:59.814 --rc genhtml_legend=1 00:32:59.814 --rc geninfo_all_blocks=1 00:32:59.814 --rc geninfo_unexecuted_blocks=1 00:32:59.814 00:32:59.814 ' 00:32:59.814 11:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:59.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:59.814 --rc genhtml_branch_coverage=1 00:32:59.814 --rc genhtml_function_coverage=1 00:32:59.814 --rc genhtml_legend=1 00:32:59.814 --rc geninfo_all_blocks=1 00:32:59.814 --rc geninfo_unexecuted_blocks=1 00:32:59.814 00:32:59.814 ' 00:32:59.814 11:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:59.814 11:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:32:59.814 11:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:59.814 11:13:19 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:59.814 11:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:59.814 11:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2-6 -- # [near-identical toolchain PATH prepend/export/echo dumps elided] 00:32:59.814 11:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:59.814 11:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:32:59.814 11:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:59.814 11:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:59.814 11:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- #
NVMF_SECOND_PORT=4421 00:32:59.814 11:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:59.814 11:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:59.814 11:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:59.814 11:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:59.814 11:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:59.814 11:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:59.814 11:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:59.814 11:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:59.814 11:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:59.814 11:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:59.814 11:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:59.814 11:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:59.814 11:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:59.814 11:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:59.814 11:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:32:59.814 11:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:59.814 11:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:59.814 11:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:59.814 11:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2-6 -- # [near-identical toolchain PATH prepend/export/echo dumps elided] 00:32:59.814 11:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:32:59.814 11:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:59.814 11:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:59.814 11:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:59.814 11:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:59.814 11:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:59.814 11:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:59.814 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:59.814 11:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:59.814 11:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:59.814 11:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:59.814 11:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:59.814
11:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:32:59.814 11:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:32:59.814 11:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:59.814 11:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # prepare_net_devs 00:32:59.814 11:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@436 -- # local -g is_hw=no 00:32:59.814 11:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # remove_spdk_ns 00:32:59.814 11:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:59.814 11:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:59.814 11:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:59.814 11:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:32:59.814 11:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:32:59.814 11:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:32:59.815 11:13:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.953 11:13:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:07.953 11:13:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:33:07.953 11:13:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:07.953 11:13:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:07.953 11:13:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:07.953 11:13:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:07.953 11:13:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:07.953 11:13:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:33:07.953 11:13:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:07.953 11:13:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:33:07.953 11:13:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:33:07.953 11:13:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:33:07.953 11:13:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:33:07.953 11:13:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:33:07.953 11:13:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:33:07.953 11:13:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:07.953 11:13:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:07.953 11:13:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:07.953 11:13:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:07.953 11:13:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:07.953 11:13:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:07.953 11:13:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:07.953 11:13:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:07.953 11:13:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:07.953 11:13:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:07.953 11:13:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:07.953 11:13:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:07.953 11:13:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:07.953 11:13:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:07.953 11:13:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:07.953 11:13:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:07.953 11:13:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:07.953 11:13:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:07.953 11:13:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:07.953 11:13:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:33:07.953 Found 0000:31:00.0 (0x8086 - 0x159b) 00:33:07.953 11:13:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:07.953 11:13:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:07.953 11:13:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:07.953 11:13:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:07.953 11:13:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:07.953 11:13:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:07.953 11:13:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:33:07.953 Found 0000:31:00.1 (0x8086 - 0x159b) 00:33:07.953 11:13:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:07.953 11:13:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:07.953 11:13:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:07.953 11:13:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:07.953 11:13:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:07.953 11:13:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:07.953 11:13:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:07.953 11:13:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:07.953 11:13:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:07.953 11:13:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:07.953 11:13:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:07.953 11:13:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:07.953 11:13:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:07.953 11:13:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:07.953 11:13:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:07.953 11:13:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:33:07.953 Found net devices under 0000:31:00.0: cvl_0_0 00:33:07.953 11:13:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:07.953 11:13:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:07.953 11:13:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:07.953 11:13:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:07.953 11:13:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:07.953 11:13:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:07.953 11:13:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:07.953 11:13:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:07.953 11:13:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:33:07.953 Found net devices under 0000:31:00.1: cvl_0_1 00:33:07.953 11:13:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:07.953 11:13:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:33:07.953 11:13:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # is_hw=yes 00:33:07.953 11:13:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:33:07.953 11:13:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:33:07.953 11:13:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:33:07.953 11:13:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:07.953 11:13:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:07.953 11:13:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:07.953 11:13:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:07.953 11:13:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:07.953 11:13:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:07.953 11:13:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:07.953 11:13:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:07.953 11:13:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:07.953 11:13:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:07.953 11:13:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:07.953 11:13:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:07.953 11:13:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:07.953 11:13:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:07.953 11:13:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:07.953 11:13:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:07.953 11:13:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:07.953 11:13:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:07.953 11:13:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:07.953 11:13:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:07.953 11:13:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:07.953 11:13:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:07.953 11:13:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:07.953 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:07.953 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.536 ms 00:33:07.953 00:33:07.953 --- 10.0.0.2 ping statistics --- 00:33:07.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:07.954 rtt min/avg/max/mdev = 0.536/0.536/0.536/0.000 ms 00:33:07.954 11:13:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:07.954 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
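Note: the nvmf_tcp_init block above (common.sh@250-291) builds the loopback NVMe/TCP topology the rest of the test drives: one port of the NIC found earlier (cvl_0_0) is moved into a private network namespace to play the target, the other port (cvl_0_1) stays in the root namespace as the initiator, and reachability is proven with a ping in each direction (the 10.0.0.1 reply is recorded just below). A condensed sketch of those commands as traced:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                # target port leaves the root ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator address, root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
    ping -c 1 10.0.0.2                                       # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1         # target ns -> root ns

Isolating the target port in its own namespace lets a single machine exercise the full TCP path between two physical ports without the kernel short-circuiting the traffic over loopback.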
00:33:07.954 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.317 ms 00:33:07.954 00:33:07.954 --- 10.0.0.1 ping statistics --- 00:33:07.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:07.954 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:33:07.954 11:13:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:07.954 11:13:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # return 0 00:33:07.954 11:13:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:33:07.954 11:13:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:07.954 11:13:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:33:07.954 11:13:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:33:07.954 11:13:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:07.954 11:13:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:33:07.954 11:13:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:33:07.954 11:13:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:33:07.954 11:13:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:33:07.954 11:13:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:07.954 11:13:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.954 11:13:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2044520 00:33:07.954 11:13:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:07.954 11:13:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:33:07.954 11:13:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 2044520 00:33:07.954 11:13:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 2044520 ']' 00:33:07.954 11:13:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:07.954 11:13:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:07.954 11:13:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:07.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:07.954 11:13:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:07.954 11:13:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.954 [2024-10-09 11:13:27.326744] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:33:07.954 [2024-10-09 11:13:27.326809] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:07.954 [2024-10-09 11:13:27.468839] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:33:07.954 [2024-10-09 11:13:27.501139] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:07.954 [2024-10-09 11:13:27.523893] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:07.954 [2024-10-09 11:13:27.523936] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:07.954 [2024-10-09 11:13:27.523944] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:07.954 [2024-10-09 11:13:27.523951] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:07.954 [2024-10-09 11:13:27.523963] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:07.954 [2024-10-09 11:13:27.525696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:07.954 [2024-10-09 11:13:27.525874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:07.954 [2024-10-09 11:13:27.526032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:07.954 [2024-10-09 11:13:27.526033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:08.216 11:13:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:08.216 11:13:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:33:08.216 11:13:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:08.477 [2024-10-09 11:13:28.278886] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:08.477 11:13:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:33:08.477 11:13:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:08.477 11:13:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.477 11:13:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:33:08.740 Malloc1 00:33:08.740 11:13:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:08.740 11:13:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:33:09.011 11:13:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:09.331 [2024-10-09 11:13:29.070906] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:09.331 11:13:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:09.331 11:13:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:33:09.331 11:13:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:09.331 11:13:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:09.331 11:13:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:33:09.331 11:13:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:09.331 11:13:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:33:09.331 11:13:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:09.331 11:13:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:33:09.331 11:13:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:33:09.331 11:13:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:09.331 11:13:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:09.331 11:13:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:33:09.331 11:13:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:09.331 11:13:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:33:09.331 11:13:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:33:09.331 11:13:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:09.331 11:13:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:09.331 11:13:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:33:09.331 11:13:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:09.623 11:13:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:33:09.623 11:13:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:33:09.623 11:13:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:33:09.623 11:13:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:09.883 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:33:09.883 fio-3.35 00:33:09.883 Starting 1 thread 00:33:12.460 00:33:12.460 test: (groupid=0, jobs=1): err= 0: pid=2045190: Wed Oct 9 11:13:32 2024 00:33:12.460 read: IOPS=13.8k, BW=54.0MiB/s (56.6MB/s)(108MiB/2004msec) 00:33:12.460 slat (usec): min=2, max=280, avg= 
2.15, stdev= 2.36 00:33:12.460 clat (usec): min=3252, max=8931, avg=5088.19, stdev=355.46 00:33:12.460 lat (usec): min=3254, max=8933, avg=5090.34, stdev=355.52 00:33:12.460 clat percentiles (usec): 00:33:12.460 | 1.00th=[ 4293], 5.00th=[ 4555], 10.00th=[ 4686], 20.00th=[ 4817], 00:33:12.460 | 30.00th=[ 4948], 40.00th=[ 5014], 50.00th=[ 5080], 60.00th=[ 5145], 00:33:12.460 | 70.00th=[ 5276], 80.00th=[ 5342], 90.00th=[ 5538], 95.00th=[ 5604], 00:33:12.460 | 99.00th=[ 5997], 99.50th=[ 6128], 99.90th=[ 6980], 99.95th=[ 7635], 00:33:12.460 | 99.99th=[ 8848] 00:33:12.460 bw ( KiB/s): min=53836, max=55872, per=99.94%, avg=55261.00, stdev=955.31, samples=4 00:33:12.460 iops : min=13459, max=13968, avg=13815.25, stdev=238.83, samples=4 00:33:12.460 write: IOPS=13.8k, BW=54.0MiB/s (56.6MB/s)(108MiB/2004msec); 0 zone resets 00:33:12.460 slat (usec): min=2, max=271, avg= 2.22, stdev= 1.79 00:33:12.460 clat (usec): min=2729, max=7681, avg=4107.93, stdev=299.11 00:33:12.460 lat (usec): min=2731, max=7683, avg=4110.14, stdev=299.21 00:33:12.460 clat percentiles (usec): 00:33:12.460 | 1.00th=[ 3392], 5.00th=[ 3654], 10.00th=[ 3785], 20.00th=[ 3884], 00:33:12.460 | 30.00th=[ 3982], 40.00th=[ 4047], 50.00th=[ 4113], 60.00th=[ 4178], 00:33:12.460 | 70.00th=[ 4228], 80.00th=[ 4293], 90.00th=[ 4424], 95.00th=[ 4555], 00:33:12.460 | 99.00th=[ 4817], 99.50th=[ 5080], 99.90th=[ 6390], 99.95th=[ 6980], 00:33:12.460 | 99.99th=[ 7635] 00:33:12.460 bw ( KiB/s): min=54139, max=55808, per=99.90%, avg=55224.75, stdev=746.59, samples=4 00:33:12.460 iops : min=13534, max=13952, avg=13806.00, stdev=187.01, samples=4 00:33:12.460 lat (msec) : 4=17.36%, 10=82.64% 00:33:12.460 cpu : usr=78.29%, sys=20.31%, ctx=30, majf=0, minf=20 00:33:12.460 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:33:12.460 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:12.460 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:12.460 issued rwts: total=27703,27694,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:12.460 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:12.460 00:33:12.460 Run status group 0 (all jobs): 00:33:12.460 READ: bw=54.0MiB/s (56.6MB/s), 54.0MiB/s-54.0MiB/s (56.6MB/s-56.6MB/s), io=108MiB (113MB), run=2004-2004msec 00:33:12.460 WRITE: bw=54.0MiB/s (56.6MB/s), 54.0MiB/s-54.0MiB/s (56.6MB/s-56.6MB/s), io=108MiB (113MB), run=2004-2004msec 00:33:12.460 11:13:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:33:12.460 11:13:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:33:12.460 11:13:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:33:12.460 11:13:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:12.460 11:13:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:33:12.460 11:13:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 
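Note: both fio jobs in this test go through the fio_plugin helper traced above and below. It probes the SPDK engine with ldd for a sanitizer runtime to preload (libasan or libclang_rt.asan; neither is linked here, so asan_lib stays empty) and then launches stock fio with the engine in LD_PRELOAD, packing the NVMe-oF connection parameters into --filename. A minimal sketch of the resulting invocation, using the fio binary location shown in this trace:

    plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
    # 'ioengine=spdk' in the job file resolves to the preloaded engine
    LD_PRELOAD="$plugin" /usr/src/fio/fio \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio \
        '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096

The mock_sgl_config.fio run that follows uses the same mechanism with 16 KiB transfers, as its name suggests, to exercise the scatter-gather list path.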
00:33:12.460 11:13:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:33:12.460 11:13:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:33:12.460 11:13:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:12.460 11:13:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:12.460 11:13:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:33:12.461 11:13:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:12.461 11:13:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:33:12.461 11:13:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:33:12.461 11:13:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:12.461 11:13:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:12.461 11:13:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:33:12.461 11:13:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:12.461 11:13:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:33:12.461 11:13:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:33:12.461 11:13:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:33:12.461 11:13:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:33:12.731 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:33:12.731 fio-3.35 00:33:12.731 Starting 1 thread 00:33:15.273 00:33:15.274 test: (groupid=0, jobs=1): err= 0: pid=2045879: Wed Oct 9 11:13:35 2024 00:33:15.274 read: IOPS=9164, BW=143MiB/s (150MB/s)(287MiB/2002msec) 00:33:15.274 slat (usec): min=3, max=110, avg= 3.60, stdev= 1.65 00:33:15.274 clat (usec): min=566, max=53289, avg=8678.16, stdev=4020.29 00:33:15.274 lat (usec): min=573, max=53292, avg=8681.75, stdev=4020.37 00:33:15.274 clat percentiles (usec): 00:33:15.274 | 1.00th=[ 4080], 5.00th=[ 5145], 10.00th=[ 5669], 20.00th=[ 6456], 00:33:15.274 | 30.00th=[ 7046], 40.00th=[ 7570], 50.00th=[ 8225], 60.00th=[ 8848], 00:33:15.274 | 70.00th=[ 9765], 80.00th=[10683], 90.00th=[11207], 95.00th=[11863], 00:33:15.274 | 99.00th=[15270], 99.50th=[45351], 99.90th=[52167], 99.95th=[52691], 00:33:15.274 | 99.99th=[53216] 00:33:15.274 bw ( KiB/s): min=66208, max=82880, per=49.04%, avg=71904.00, stdev=7623.48, samples=4 00:33:15.274 iops : min= 4138, max= 5180, avg=4494.00, stdev=476.47, samples=4 00:33:15.274 write: IOPS=5606, BW=87.6MiB/s (91.9MB/s)(146MiB/1669msec); 0 zone resets 00:33:15.274 slat (usec): min=39, max=404, avg=41.08, stdev= 8.60 00:33:15.274 clat (usec): min=2275, max=16622, avg=9375.86, stdev=1686.72 00:33:15.274 lat (usec): min=2315, max=16662, avg=9416.94, stdev=1688.93 
00:33:15.274 clat percentiles (usec): 00:33:15.274 | 1.00th=[ 6259], 5.00th=[ 7046], 10.00th=[ 7439], 20.00th=[ 7963], 00:33:15.274 | 30.00th=[ 8356], 40.00th=[ 8717], 50.00th=[ 9241], 60.00th=[ 9634], 00:33:15.274 | 70.00th=[10159], 80.00th=[10552], 90.00th=[11469], 95.00th=[12256], 00:33:15.274 | 99.00th=[14877], 99.50th=[15401], 99.90th=[16319], 99.95th=[16319], 00:33:15.274 | 99.99th=[16581] 00:33:15.274 bw ( KiB/s): min=69472, max=86400, per=83.45%, avg=74864.00, stdev=7883.72, samples=4 00:33:15.274 iops : min= 4342, max= 5400, avg=4679.00, stdev=492.73, samples=4 00:33:15.274 lat (usec) : 750=0.01% 00:33:15.274 lat (msec) : 2=0.04%, 4=0.56%, 10=70.03%, 20=28.92%, 50=0.24% 00:33:15.274 lat (msec) : 100=0.22% 00:33:15.274 cpu : usr=84.16%, sys=14.44%, ctx=17, majf=0, minf=42 00:33:15.274 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:33:15.274 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:15.274 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:15.274 issued rwts: total=18347,9358,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:15.274 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:15.274 00:33:15.274 Run status group 0 (all jobs): 00:33:15.274 READ: bw=143MiB/s (150MB/s), 143MiB/s-143MiB/s (150MB/s-150MB/s), io=287MiB (301MB), run=2002-2002msec 00:33:15.274 WRITE: bw=87.6MiB/s (91.9MB/s), 87.6MiB/s-87.6MiB/s (91.9MB/s-91.9MB/s), io=146MiB (153MB), run=1669-1669msec 00:33:15.274 11:13:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:15.274 11:13:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:33:15.274 11:13:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:33:15.274 11:13:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:33:15.274 11:13:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1496 -- # bdfs=() 00:33:15.274 11:13:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1496 -- # local bdfs 00:33:15.274 11:13:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:33:15.274 11:13:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:33:15.274 11:13:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:33:15.534 11:13:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:33:15.534 11:13:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:33:15.534 11:13:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 -i 10.0.0.2 00:33:15.794 Nvme0n1 00:33:15.794 11:13:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:33:16.739 11:13:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=0ea1fca2-c2ac-4160-926d-a6f6ceafb5df 00:33:16.739 11:13:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 
0ea1fca2-c2ac-4160-926d-a6f6ceafb5df 00:33:16.739 11:13:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=0ea1fca2-c2ac-4160-926d-a6f6ceafb5df 00:33:16.739 11:13:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:33:16.739 11:13:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:33:16.739 11:13:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:33:16.739 11:13:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:33:16.739 11:13:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:33:16.739 { 00:33:16.739 "uuid": "0ea1fca2-c2ac-4160-926d-a6f6ceafb5df", 00:33:16.739 "name": "lvs_0", 00:33:16.739 "base_bdev": "Nvme0n1", 00:33:16.739 "total_data_clusters": 1787, 00:33:16.739 "free_clusters": 1787, 00:33:16.739 "block_size": 512, 00:33:16.739 "cluster_size": 1073741824 00:33:16.739 } 00:33:16.739 ]' 00:33:16.739 11:13:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="0ea1fca2-c2ac-4160-926d-a6f6ceafb5df") .free_clusters' 00:33:16.739 11:13:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=1787 00:33:16.739 11:13:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="0ea1fca2-c2ac-4160-926d-a6f6ceafb5df") .cluster_size' 00:33:16.739 11:13:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=1073741824 00:33:16.739 11:13:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=1829888 00:33:16.739 11:13:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 1829888 00:33:16.739 1829888 00:33:16.739 11:13:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 1829888 00:33:16.999 decca5b4-bae5-4c28-a158-1d559618a48d 00:33:17.000 11:13:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:33:17.259 11:13:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:33:17.259 11:13:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:33:17.519 11:13:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:17.519 11:13:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:17.519 11:13:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:33:17.519 11:13:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:17.519 11:13:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:33:17.519 11:13:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:17.519 11:13:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:33:17.519 11:13:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:33:17.519 11:13:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:17.520 11:13:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:17.520 11:13:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:33:17.520 11:13:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:17.520 11:13:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:33:17.520 11:13:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:33:17.520 11:13:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:17.520 11:13:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:17.520 11:13:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:33:17.520 11:13:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:17.520 11:13:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:33:17.520 11:13:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:33:17.520 11:13:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:33:17.520 11:13:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:17.780 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:33:17.780 fio-3.35 00:33:17.780 Starting 1 thread 00:33:20.322 00:33:20.322 test: (groupid=0, jobs=1): err= 0: pid=2047075: Wed Oct 9 11:13:40 2024 00:33:20.322 read: IOPS=10.3k, BW=40.4MiB/s (42.4MB/s)(81.1MiB/2005msec) 00:33:20.322 slat (usec): min=2, max=112, avg= 2.21, stdev= 1.12 00:33:20.322 clat (usec): min=2521, max=11680, avg=6819.34, stdev=512.32 00:33:20.322 lat (usec): min=2538, max=11682, avg=6821.55, stdev=512.27 00:33:20.322 clat percentiles (usec): 00:33:20.322 | 1.00th=[ 5669], 5.00th=[ 5997], 10.00th=[ 6194], 20.00th=[ 6456], 00:33:20.322 | 30.00th=[ 6587], 40.00th=[ 6718], 50.00th=[ 6849], 60.00th=[ 6980], 00:33:20.322 | 70.00th=[ 7046], 80.00th=[ 7242], 90.00th=[ 7439], 95.00th=[ 7570], 00:33:20.322 | 99.00th=[ 7963], 99.50th=[ 8094], 99.90th=[ 9634], 99.95th=[11076], 00:33:20.322 | 99.99th=[11600] 00:33:20.322 bw ( KiB/s): min=40200, max=42112, per=99.92%, avg=41366.00, stdev=828.80, samples=4 00:33:20.322 iops 
: min=10050, max=10528, avg=10341.50, stdev=207.20, samples=4 00:33:20.322 write: IOPS=10.4k, BW=40.5MiB/s (42.4MB/s)(81.2MiB/2005msec); 0 zone resets 00:33:20.322 slat (nsec): min=2079, max=104890, avg=2279.32, stdev=768.67 00:33:20.322 clat (usec): min=1036, max=10459, avg=5449.67, stdev=440.14 00:33:20.322 lat (usec): min=1043, max=10462, avg=5451.95, stdev=440.12 00:33:20.322 clat percentiles (usec): 00:33:20.322 | 1.00th=[ 4424], 5.00th=[ 4752], 10.00th=[ 4948], 20.00th=[ 5080], 00:33:20.322 | 30.00th=[ 5211], 40.00th=[ 5342], 50.00th=[ 5473], 60.00th=[ 5538], 00:33:20.322 | 70.00th=[ 5669], 80.00th=[ 5800], 90.00th=[ 5997], 95.00th=[ 6128], 00:33:20.322 | 99.00th=[ 6390], 99.50th=[ 6521], 99.90th=[ 8291], 99.95th=[ 8979], 00:33:20.322 | 99.99th=[ 9765] 00:33:20.322 bw ( KiB/s): min=40784, max=41768, per=99.97%, avg=41434.00, stdev=451.95, samples=4 00:33:20.322 iops : min=10196, max=10442, avg=10358.50, stdev=112.99, samples=4 00:33:20.322 lat (msec) : 2=0.02%, 4=0.12%, 10=99.81%, 20=0.05% 00:33:20.322 cpu : usr=72.80%, sys=26.15%, ctx=41, majf=0, minf=29 00:33:20.322 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:33:20.322 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:20.322 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:20.322 issued rwts: total=20751,20776,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:20.322 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:20.322 00:33:20.322 Run status group 0 (all jobs): 00:33:20.322 READ: bw=40.4MiB/s (42.4MB/s), 40.4MiB/s-40.4MiB/s (42.4MB/s-42.4MB/s), io=81.1MiB (85.0MB), run=2005-2005msec 00:33:20.322 WRITE: bw=40.5MiB/s (42.4MB/s), 40.5MiB/s-40.5MiB/s (42.4MB/s-42.4MB/s), io=81.2MiB (85.1MB), run=2005-2005msec 00:33:20.322 11:13:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:33:20.584 11:13:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:33:21.525 11:13:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=e961073f-5c89-4c73-9914-510c63c72a6a 00:33:21.525 11:13:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb e961073f-5c89-4c73-9914-510c63c72a6a 00:33:21.525 11:13:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=e961073f-5c89-4c73-9914-510c63c72a6a 00:33:21.525 11:13:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:33:21.525 11:13:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:33:21.525 11:13:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:33:21.525 11:13:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:33:21.525 11:13:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:33:21.525 { 00:33:21.525 "uuid": "0ea1fca2-c2ac-4160-926d-a6f6ceafb5df", 00:33:21.525 "name": "lvs_0", 00:33:21.525 "base_bdev": "Nvme0n1", 00:33:21.525 "total_data_clusters": 1787, 00:33:21.525 "free_clusters": 0, 00:33:21.525 "block_size": 512, 00:33:21.525 "cluster_size": 1073741824 00:33:21.525 }, 00:33:21.525 { 00:33:21.525 "uuid": 
"e961073f-5c89-4c73-9914-510c63c72a6a", 00:33:21.525 "name": "lvs_n_0", 00:33:21.525 "base_bdev": "decca5b4-bae5-4c28-a158-1d559618a48d", 00:33:21.525 "total_data_clusters": 457025, 00:33:21.525 "free_clusters": 457025, 00:33:21.525 "block_size": 512, 00:33:21.525 "cluster_size": 4194304 00:33:21.525 } 00:33:21.525 ]' 00:33:21.525 11:13:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="e961073f-5c89-4c73-9914-510c63c72a6a") .free_clusters' 00:33:21.525 11:13:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=457025 00:33:21.525 11:13:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="e961073f-5c89-4c73-9914-510c63c72a6a") .cluster_size' 00:33:21.795 11:13:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=4194304 00:33:21.796 11:13:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=1828100 00:33:21.796 11:13:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 1828100 00:33:21.796 1828100 00:33:21.796 11:13:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 1828100 00:33:22.739 d5c391ff-635a-4c16-a7a2-e9c207eceb22 00:33:22.739 11:13:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:33:22.999 11:13:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:33:23.000 11:13:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:33:23.260 11:13:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:23.260 11:13:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:23.260 11:13:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:33:23.260 11:13:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:23.260 11:13:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:33:23.260 11:13:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:23.260 11:13:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:33:23.260 11:13:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:33:23.260 11:13:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:23.260 11:13:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:23.260 11:13:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:33:23.260 11:13:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:23.260 11:13:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:33:23.260 11:13:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:33:23.260 11:13:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:23.260 11:13:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:23.260 11:13:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:33:23.260 11:13:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:23.260 11:13:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:33:23.260 11:13:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:33:23.260 11:13:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:33:23.260 11:13:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:23.520 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:33:23.520 fio-3.35 00:33:23.520 Starting 1 thread 00:33:26.167 00:33:26.167 test: (groupid=0, jobs=1): err= 0: pid=2048272: Wed Oct 9 11:13:45 2024 00:33:26.167 read: IOPS=9221, BW=36.0MiB/s (37.8MB/s)(72.2MiB/2005msec) 00:33:26.167 slat (usec): min=2, max=122, avg= 2.20, stdev= 1.23 00:33:26.168 clat (usec): min=2111, max=12545, avg=7681.03, stdev=594.89 00:33:26.168 lat (usec): min=2129, max=12547, avg=7683.23, stdev=594.83 00:33:26.168 clat percentiles (usec): 00:33:26.168 | 1.00th=[ 6325], 5.00th=[ 6718], 10.00th=[ 6980], 20.00th=[ 7242], 00:33:26.168 | 30.00th=[ 7373], 40.00th=[ 7570], 50.00th=[ 7701], 60.00th=[ 7832], 00:33:26.168 | 70.00th=[ 7963], 80.00th=[ 8160], 90.00th=[ 8455], 95.00th=[ 8586], 00:33:26.168 | 99.00th=[ 8979], 99.50th=[ 9241], 99.90th=[11338], 99.95th=[12256], 00:33:26.168 | 99.99th=[12518] 00:33:26.168 bw ( KiB/s): min=35808, max=37464, per=99.86%, avg=36834.00, stdev=721.28, samples=4 00:33:26.168 iops : min= 8952, max= 9366, avg=9208.50, stdev=180.32, samples=4 00:33:26.168 write: IOPS=9226, BW=36.0MiB/s (37.8MB/s)(72.3MiB/2005msec); 0 zone resets 00:33:26.168 slat (nsec): min=2089, max=109174, avg=2272.88, stdev=846.91 00:33:26.168 clat (usec): min=1068, max=11353, avg=6120.98, stdev=507.16 00:33:26.168 lat (usec): min=1075, max=11355, avg=6123.25, stdev=507.13 00:33:26.168 clat percentiles (usec): 00:33:26.168 | 1.00th=[ 4948], 5.00th=[ 5342], 10.00th=[ 5538], 20.00th=[ 5735], 00:33:26.168 | 30.00th=[ 5866], 40.00th=[ 5997], 50.00th=[ 6128], 60.00th=[ 6259], 00:33:26.168 | 70.00th=[ 6390], 80.00th=[ 6521], 90.00th=[ 6718], 95.00th=[ 6915], 00:33:26.168 | 99.00th=[ 7242], 99.50th=[ 7373], 99.90th=[ 8979], 99.95th=[ 9896], 00:33:26.168 | 
99.99th=[11338] 00:33:26.168 bw ( KiB/s): min=36688, max=37184, per=99.95%, avg=36888.00, stdev=226.46, samples=4 00:33:26.168 iops : min= 9172, max= 9296, avg=9222.00, stdev=56.62, samples=4 00:33:26.168 lat (msec) : 2=0.01%, 4=0.10%, 10=99.80%, 20=0.09% 00:33:26.168 cpu : usr=73.95%, sys=25.10%, ctx=36, majf=0, minf=29 00:33:26.168 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:33:26.168 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:26.168 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:26.168 issued rwts: total=18489,18499,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:26.168 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:26.168 00:33:26.168 Run status group 0 (all jobs): 00:33:26.168 READ: bw=36.0MiB/s (37.8MB/s), 36.0MiB/s-36.0MiB/s (37.8MB/s-37.8MB/s), io=72.2MiB (75.7MB), run=2005-2005msec 00:33:26.168 WRITE: bw=36.0MiB/s (37.8MB/s), 36.0MiB/s-36.0MiB/s (37.8MB/s-37.8MB/s), io=72.3MiB (75.8MB), run=2005-2005msec 00:33:26.168 11:13:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:33:26.168 11:13:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:33:26.168 11:13:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:33:28.711 11:13:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:33:28.711 11:13:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:33:29.295 11:13:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:33:29.295 11:13:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:33:31.363 11:13:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:33:31.363 11:13:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:33:31.363 11:13:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:33:31.363 11:13:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@514 -- # nvmfcleanup 00:33:31.363 11:13:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:33:31.363 11:13:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:31.363 11:13:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:33:31.363 11:13:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:31.363 11:13:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:31.363 rmmod nvme_tcp 00:33:31.363 rmmod nvme_fabrics 00:33:31.363 rmmod nvme_keyring 00:33:31.363 11:13:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:31.363 11:13:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:33:31.363 11:13:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:33:31.363 11:13:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@515 -- # 
'[' -n 2044520 ']' 00:33:31.363 11:13:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # killprocess 2044520 00:33:31.363 11:13:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 2044520 ']' 00:33:31.363 11:13:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 2044520 00:33:31.363 11:13:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:33:31.363 11:13:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:31.363 11:13:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2044520 00:33:31.363 11:13:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:31.363 11:13:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:31.363 11:13:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2044520' 00:33:31.363 killing process with pid 2044520 00:33:31.363 11:13:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 2044520 00:33:31.363 11:13:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 2044520 00:33:31.625 11:13:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:33:31.625 11:13:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:33:31.625 11:13:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:33:31.625 11:13:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:33:31.625 11:13:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # iptables-restore 00:33:31.625 11:13:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # iptables-save 00:33:31.625 11:13:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:33:31.625 11:13:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:31.625 11:13:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:31.625 11:13:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:31.625 11:13:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:31.625 11:13:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:34.167 11:13:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:34.167 00:33:34.167 real 0m34.162s 00:33:34.167 user 2m43.010s 00:33:34.167 sys 0m10.005s 00:33:34.167 11:13:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:34.167 11:13:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:34.167 ************************************ 00:33:34.167 END TEST nvmf_fio_host 00:33:34.167 ************************************ 00:33:34.167 11:13:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:33:34.167 11:13:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:33:34.167 11:13:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:34.167 11:13:53 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@10 -- # set +x 00:33:34.167 ************************************ 00:33:34.167 START TEST nvmf_failover 00:33:34.167 ************************************ 00:33:34.167 11:13:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:33:34.167 * Looking for test storage... 00:33:34.167 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:34.167 11:13:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:34.167 11:13:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lcov --version 00:33:34.167 11:13:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:34.167 11:13:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:34.167 11:13:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:34.167 11:13:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:34.167 11:13:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:34.167 11:13:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:33:34.167 11:13:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:33:34.167 11:13:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:33:34.167 11:13:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:33:34.167 11:13:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:33:34.167 11:13:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:33:34.167 11:13:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:33:34.167 11:13:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:34.167 11:13:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:33:34.167 11:13:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:33:34.167 11:13:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:34.167 11:13:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:34.167 11:13:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:33:34.167 11:13:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:33:34.167 11:13:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:34.167 11:13:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:33:34.167 11:13:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:33:34.167 11:13:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:33:34.167 11:13:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:33:34.167 11:13:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:34.167 11:13:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:33:34.167 11:13:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:33:34.167 11:13:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:34.167 11:13:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:34.167 11:13:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:33:34.167 11:13:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:34.167 11:13:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:34.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:34.167 --rc genhtml_branch_coverage=1 00:33:34.167 --rc genhtml_function_coverage=1 00:33:34.167 --rc genhtml_legend=1 00:33:34.167 --rc geninfo_all_blocks=1 00:33:34.167 --rc geninfo_unexecuted_blocks=1 00:33:34.167 00:33:34.167 ' 00:33:34.167 11:13:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:34.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:34.167 --rc genhtml_branch_coverage=1 00:33:34.167 --rc genhtml_function_coverage=1 00:33:34.167 --rc genhtml_legend=1 00:33:34.167 --rc geninfo_all_blocks=1 00:33:34.167 --rc geninfo_unexecuted_blocks=1 00:33:34.167 00:33:34.167 ' 00:33:34.167 11:13:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:34.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:34.167 --rc genhtml_branch_coverage=1 00:33:34.167 --rc genhtml_function_coverage=1 00:33:34.167 --rc genhtml_legend=1 00:33:34.167 --rc geninfo_all_blocks=1 00:33:34.167 --rc geninfo_unexecuted_blocks=1 00:33:34.167 00:33:34.167 ' 00:33:34.167 11:13:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:34.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:34.167 --rc genhtml_branch_coverage=1 00:33:34.167 --rc genhtml_function_coverage=1 00:33:34.167 --rc genhtml_legend=1 00:33:34.167 --rc geninfo_all_blocks=1 00:33:34.167 --rc geninfo_unexecuted_blocks=1 00:33:34.167 00:33:34.167 ' 00:33:34.167 11:13:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:34.168 11:13:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:33:34.168 11:13:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:34.168 11:13:53 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:34.168 11:13:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:34.168 11:13:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:34.168 11:13:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:34.168 11:13:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:34.168 11:13:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:34.168 11:13:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:34.168 11:13:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:34.168 11:13:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:34.168 11:13:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:34.168 11:13:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:34.168 11:13:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:34.168 11:13:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:34.168 11:13:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:34.168 11:13:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:34.168 11:13:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:34.168 11:13:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:33:34.168 11:13:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:34.168 11:13:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:34.168 11:13:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:34.168 11:13:53 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:34.168 11:13:53 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:34.168 11:13:53 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:34.168 11:13:53 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:33:34.168 11:13:53 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:34.168 11:13:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:33:34.168 11:13:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:34.168 11:13:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:34.168 11:13:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:34.168 11:13:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:34.168 11:13:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:34.168 11:13:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:34.168 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:34.168 11:13:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:34.168 11:13:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:34.168 11:13:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:34.168 11:13:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:34.168 11:13:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:34.168 11:13:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
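The failover test drives both the target and the bdevperf initiator through this rpc.py helper. As the host/failover.sh@22-28 trace further below shows, the target side is a single malloc-backed subsystem with three TCP listeners, so the initiator always has a port left to fail over to. A condensed sketch of that sequence (not the script itself; paths, sizes, and ports are the ones visible in this log):

    #!/usr/bin/env bash
    # Target-side setup as traced below (host/failover.sh@22-28); values taken from this log.
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    $rpc_py nvmf_create_transport -t tcp -o -u 8192        # TCP transport, 8192 B in-capsule data
    $rpc_py bdev_malloc_create 64 512 -b Malloc0           # 64 MiB RAM bdev, 512 B blocks
    $rpc_py nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001
    $rpc_py nvmf_subsystem_add_ns "$nqn" Malloc0
    for port in 4420 4421 4422; do                         # extra listeners are the failover targets
        $rpc_py nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s "$port"
    done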
00:33:34.168 11:13:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:34.168 11:13:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:33:34.168 11:13:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:33:34.168 11:13:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:34.168 11:13:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # prepare_net_devs 00:33:34.168 11:13:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@436 -- # local -g is_hw=no 00:33:34.168 11:13:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # remove_spdk_ns 00:33:34.168 11:13:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:34.168 11:13:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:34.168 11:13:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:34.168 11:13:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:33:34.168 11:13:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:33:34.168 11:13:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:33:34.168 11:13:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:42.324 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:42.324 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:33:42.324 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:42.324 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:42.324 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:42.324 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:42.324 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:42.324 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:33:42.324 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:42.324 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:33:42.324 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:33:42.324 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:33:42.324 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:33:42.324 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:33:42.324 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:33:42.324 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:42.324 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:42.324 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:42.324 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:42.324 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:42.324 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:42.324 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:42.324 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:42.324 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:42.324 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:42.324 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:42.324 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:42.324 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:42.324 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:42.324 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:42.324 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:42.324 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:42.324 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:42.324 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:42.324 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:33:42.324 Found 0000:31:00.0 (0x8086 - 0x159b) 00:33:42.324 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:42.324 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:42.324 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:42.324 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:42.324 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:42.324 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:42.324 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:33:42.324 Found 0000:31:00.1 (0x8086 - 0x159b) 00:33:42.324 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:42.324 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:42.324 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:42.324 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:42.324 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:42.324 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:42.324 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:42.324 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:42.324 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@408 -- # for pci 
in "${pci_devs[@]}" 00:33:42.324 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:42.324 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:42.324 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:42.324 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:42.324 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:42.324 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:42.324 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:33:42.324 Found net devices under 0000:31:00.0: cvl_0_0 00:33:42.324 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:42.324 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:42.324 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:42.324 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:42.324 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:42.324 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:42.324 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:42.324 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:42.324 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:33:42.324 Found net devices under 0000:31:00.1: cvl_0_1 00:33:42.324 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:42.324 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:33:42.324 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # is_hw=yes 00:33:42.324 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:33:42.324 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:33:42.324 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:33:42.324 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:42.324 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:42.324 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:42.324 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:42.324 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:42.324 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:42.324 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:42.324 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:42.324 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:33:42.324 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:42.324 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:42.324 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:42.324 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:42.324 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:42.324 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:42.324 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:42.324 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:42.324 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:42.324 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:42.324 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:42.324 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:42.324 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:42.325 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:42.325 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:42.325 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.651 ms 00:33:42.325 00:33:42.325 --- 10.0.0.2 ping statistics --- 00:33:42.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:42.325 rtt min/avg/max/mdev = 0.651/0.651/0.651/0.000 ms 00:33:42.325 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:42.325 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:42.325 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.306 ms 00:33:42.325 00:33:42.325 --- 10.0.0.1 ping statistics --- 00:33:42.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:42.325 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:33:42.325 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:42.325 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # return 0 00:33:42.325 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:33:42.325 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:42.325 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:33:42.325 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:33:42.325 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:42.325 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:33:42.325 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:33:42.325 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:33:42.325 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:33:42.325 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:42.325 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:42.325 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # nvmfpid=2053985 00:33:42.325 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # waitforlisten 2053985 00:33:42.325 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:33:42.325 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 2053985 ']' 00:33:42.325 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:42.325 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:42.325 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:42.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:42.325 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:42.325 11:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:42.325 [2024-10-09 11:14:01.548557] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:33:42.325 [2024-10-09 11:14:01.548623] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:42.325 [2024-10-09 11:14:01.690368] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:33:42.325 [2024-10-09 11:14:01.738975] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:42.325 [2024-10-09 11:14:01.758442] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:42.325 [2024-10-09 11:14:01.758497] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:42.325 [2024-10-09 11:14:01.758506] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:42.325 [2024-10-09 11:14:01.758513] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:42.325 [2024-10-09 11:14:01.758519] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:42.325 [2024-10-09 11:14:01.759937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:42.325 [2024-10-09 11:14:01.760094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:42.325 [2024-10-09 11:14:01.760095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:42.586 11:14:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:42.586 11:14:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:33:42.586 11:14:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:33:42.586 11:14:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:42.586 11:14:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:42.586 11:14:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:42.586 11:14:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:42.586 [2024-10-09 11:14:02.550879] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:42.586 11:14:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:33:42.846 Malloc0 00:33:42.846 11:14:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:43.107 11:14:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:43.368 11:14:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:43.368 [2024-10-09 11:14:03.288585] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:43.368 11:14:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:43.628 [2024-10-09 11:14:03.464639] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:43.628 11:14:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:33:43.889 [2024-10-09 11:14:03.648766] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:33:43.889 11:14:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2054439 00:33:43.889 11:14:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:33:43.889 11:14:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:43.889 11:14:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2054439 /var/tmp/bdevperf.sock 00:33:43.889 11:14:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 2054439 ']' 00:33:43.889 11:14:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:43.889 11:14:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:43.889 11:14:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:43.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:43.889 11:14:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:43.889 11:14:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:44.831 11:14:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:44.831 11:14:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:33:44.831 11:14:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:33:44.831 NVMe0n1 00:33:44.831 11:14:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:33:45.092 00:33:45.092 11:14:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:45.092 11:14:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2054690 00:33:45.092 11:14:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:33:46.494 11:14:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:46.494 11:14:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:33:49.797 11:14:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
00:33:49.797 11:14:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:33:50.058 11:14:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:33:53.358 11:14:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:33:53.358 [2024-10-09 11:14:12.981160] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:33:53.358 11:14:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:33:54.299 11:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:33:54.299 [2024-10-09 11:14:14.168570] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb61ea0 is same with the state(6) to be set
00:33:54.299 [2024-10-09 11:14:14.168602] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb61ea0 is same with the state(6) to be set
00:33:54.299 [2024-10-09 11:14:14.168609] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb61ea0 is same with the state(6) to be set
00:33:54.299 [2024-10-09 11:14:14.168614] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb61ea0 is same with the state(6) to be set
00:33:54.299 [2024-10-09 11:14:14.168619] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb61ea0 is same with the state(6) to be set
00:33:54.299 [2024-10-09 11:14:14.168624] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb61ea0 is same with the state(6) to be set
00:33:54.299 [2024-10-09 11:14:14.168629] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb61ea0 is same with the state(6) to be set
00:33:54.299 [2024-10-09 11:14:14.168634] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb61ea0 is same with the state(6) to be set
00:33:54.299 11:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 2054690
00:34:00.890 {
00:34:00.890   "results": [
00:34:00.890     {
00:34:00.890       "job": "NVMe0n1",
00:34:00.890       "core_mask": "0x1",
00:34:00.890       "workload": "verify",
00:34:00.890       "status": "finished",
00:34:00.890       "verify_range": {
00:34:00.890         "start": 0,
00:34:00.890         "length": 16384
00:34:00.890       },
00:34:00.890       "queue_depth": 128,
00:34:00.890       "io_size": 4096,
00:34:00.890       "runtime": 15.012146,
00:34:00.890       "iops": 11067.238488088246,
00:34:00.890       "mibps": 43.23140034409471,
00:34:00.890       "io_failed": 10597,
00:34:00.890       "io_timeout": 0,
00:34:00.890       "avg_latency_us": 10844.791166866251,
00:34:00.890       "min_latency_us": 513.1974607417307,
00:34:00.890       "max_latency_us": 15765.425993985968
00:34:00.890     }
00:34:00.890   ],
00:34:00.890   "core_count": 1
00:34:00.890 }
00:34:00.890 11:14:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 2054439
00:34:00.890 11:14:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 2054439 ']'
00:34:00.890 11:14:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 2054439
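[Editor's note] The JSON block above is the bdevperf summary that the @59 wait was blocking on: about 11.07k IOPS of 4 KiB verify I/O over 15.01 s, with io_failed counting commands that completed in error, plausibly the I/O caught in flight during the listener drops. Headline numbers can be pulled from a captured copy; a sketch assuming the block was saved to results.json and jq is installed:

  jq -r '.results[] | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, io_failed=\(.io_failed)"' results.json
  # consistency check on the reported throughput:
  # 11067.238 IOPS * 4096 B = 45331409 B/s; 45331409 / 2^20 = 43.23 MiB/s, matching "mibps"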
00:34:00.890 11:14:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname
00:34:00.890 11:14:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:34:00.890 11:14:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2054439
00:34:00.890 11:14:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:34:00.890 11:14:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:34:00.890 11:14:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2054439'
killing process with pid 2054439
00:34:00.890 11:14:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 2054439
00:34:00.890 11:14:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 2054439
00:34:00.890 11:14:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:34:00.890 [2024-10-09 11:14:03.726609] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization...
00:34:00.890 [2024-10-09 11:14:03.726668] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2054439 ]
00:34:00.890 [2024-10-09 11:14:03.856530] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation.
00:34:00.890 [2024-10-09 11:14:03.888581] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:34:00.890 [2024-10-09 11:14:03.906675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:34:00.890 Running I/O for 15 seconds...
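[Editor's note] From the cat at @63 onward, the log replays try.txt, bdevperf's own captured output, which is why the bracketed timestamps jump back to 11:14:03. Just before that, killprocess only kills the pid after checking what it points at. A simplified sketch of that guard as it appears in the trace, with the hardcoded pid made a parameter; the real helper in autotest_common.sh covers more cases, such as processes wrapped by sudo:

  killprocess() {
      local pid=$1
      [ -z "$pid" ] && return 1                  # no pid given
      kill -0 "$pid" 2>/dev/null || return 0     # already gone
      if [ "$(uname)" = Linux ]; then
          process_name=$(ps --no-headers -o comm= "$pid")
          [ "$process_name" = sudo ] && return 1 # refuse to kill a sudo wrapper directly
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"
  }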
00:34:00.890 11277.00 IOPS, 44.05 MiB/s [2024-10-09T09:14:20.892Z] [2024-10-09 11:14:06.227469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:97024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.890 [2024-10-09 11:14:06.227515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.890 [2024-10-09 11:14:06.227532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:97032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.890 [2024-10-09 11:14:06.227540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.890 [2024-10-09 11:14:06.227550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:97040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.890 [2024-10-09 11:14:06.227558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.890 [2024-10-09 11:14:06.227568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:97048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.890 [2024-10-09 11:14:06.227575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.890 [2024-10-09 11:14:06.227584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:97056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.890 [2024-10-09 11:14:06.227591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.890 [2024-10-09 11:14:06.227601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:97064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.890 [2024-10-09 11:14:06.227608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.890 [2024-10-09 11:14:06.227618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:97072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.890 [2024-10-09 11:14:06.227625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.890 [2024-10-09 11:14:06.227634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:97080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.890 [2024-10-09 11:14:06.227641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.890 [2024-10-09 11:14:06.227651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:97088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.890 [2024-10-09 11:14:06.227659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.890 [2024-10-09 11:14:06.227668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:97096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.890 [2024-10-09 11:14:06.227675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:34:00.890 [2024-10-09 11:14:06.227685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:97104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.890 [2024-10-09 11:14:06.227699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.890 [2024-10-09 11:14:06.227709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:97112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.890 [2024-10-09 11:14:06.227716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.890 [2024-10-09 11:14:06.227726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:97120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.890 [2024-10-09 11:14:06.227734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.890 [2024-10-09 11:14:06.227743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:97128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.890 [2024-10-09 11:14:06.227751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.890 [2024-10-09 11:14:06.227761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:97136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.890 [2024-10-09 11:14:06.227769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.890 [2024-10-09 11:14:06.227778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:97144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.890 [2024-10-09 11:14:06.227787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.890 [2024-10-09 11:14:06.227796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:97152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.890 [2024-10-09 11:14:06.227805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.890 [2024-10-09 11:14:06.227815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:97160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.890 [2024-10-09 11:14:06.227822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.890 [2024-10-09 11:14:06.227831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:97168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.890 [2024-10-09 11:14:06.227839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.890 [2024-10-09 11:14:06.227848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:97176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.890 [2024-10-09 11:14:06.227856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.890 [2024-10-09 
11:14:06.227867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:97184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.890 [2024-10-09 11:14:06.227875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.890 [2024-10-09 11:14:06.227886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:97192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.890 [2024-10-09 11:14:06.227894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.890 [2024-10-09 11:14:06.227903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:97200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.890 [2024-10-09 11:14:06.227912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.890 [2024-10-09 11:14:06.227924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:97208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.890 [2024-10-09 11:14:06.227933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.890 [2024-10-09 11:14:06.227942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:97216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.890 [2024-10-09 11:14:06.227950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.890 [2024-10-09 11:14:06.227960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:97224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.890 [2024-10-09 11:14:06.227968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.890 [2024-10-09 11:14:06.227977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:97232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.891 [2024-10-09 11:14:06.227984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.891 [2024-10-09 11:14:06.227994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:97240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.891 [2024-10-09 11:14:06.228002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.891 [2024-10-09 11:14:06.228011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:97248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.891 [2024-10-09 11:14:06.228018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.891 [2024-10-09 11:14:06.228028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:97256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.891 [2024-10-09 11:14:06.228035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.891 [2024-10-09 11:14:06.228044] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:97264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.891 [2024-10-09 11:14:06.228052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.891 [2024-10-09 11:14:06.228061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:97272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.891 [2024-10-09 11:14:06.228068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.891 [2024-10-09 11:14:06.228078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:97280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.891 [2024-10-09 11:14:06.228086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.891 [2024-10-09 11:14:06.228096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:97288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.891 [2024-10-09 11:14:06.228103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.891 [2024-10-09 11:14:06.228112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:97296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.891 [2024-10-09 11:14:06.228119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.891 [2024-10-09 11:14:06.228129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:97304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.891 [2024-10-09 11:14:06.228137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.891 [2024-10-09 11:14:06.228147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:97312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.891 [2024-10-09 11:14:06.228154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.891 [2024-10-09 11:14:06.228163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:97320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.891 [2024-10-09 11:14:06.228170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.891 [2024-10-09 11:14:06.228180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:97328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.891 [2024-10-09 11:14:06.228187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.891 [2024-10-09 11:14:06.228196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:97336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.891 [2024-10-09 11:14:06.228204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.891 [2024-10-09 11:14:06.228213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:19 nsid:1 lba:97344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.891 [2024-10-09 11:14:06.228220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.891 [2024-10-09 11:14:06.228230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:97352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.891 [2024-10-09 11:14:06.228237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.891 [2024-10-09 11:14:06.228246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:97360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.891 [2024-10-09 11:14:06.228253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.891 [2024-10-09 11:14:06.228262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:97368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.891 [2024-10-09 11:14:06.228269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.891 [2024-10-09 11:14:06.228279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:97376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.891 [2024-10-09 11:14:06.228287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.891 [2024-10-09 11:14:06.228296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:97384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.891 [2024-10-09 11:14:06.228303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.891 [2024-10-09 11:14:06.228312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:97392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.891 [2024-10-09 11:14:06.228320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.891 [2024-10-09 11:14:06.228329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:97400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.891 [2024-10-09 11:14:06.228337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.891 [2024-10-09 11:14:06.228347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:97408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.891 [2024-10-09 11:14:06.228354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.891 [2024-10-09 11:14:06.228363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:97416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.891 [2024-10-09 11:14:06.228370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.891 [2024-10-09 11:14:06.228380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:97424 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.891 [2024-10-09 11:14:06.228387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.891 [2024-10-09 11:14:06.228396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:97432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.891 [2024-10-09 11:14:06.228403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.891 [2024-10-09 11:14:06.228412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:97440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.891 [2024-10-09 11:14:06.228420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.891 [2024-10-09 11:14:06.228429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:97448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.891 [2024-10-09 11:14:06.228436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.891 [2024-10-09 11:14:06.228445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:97456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.891 [2024-10-09 11:14:06.228452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.891 [2024-10-09 11:14:06.228462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:97464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.891 [2024-10-09 11:14:06.228472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.891 [2024-10-09 11:14:06.228482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:97472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.891 [2024-10-09 11:14:06.228489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.891 [2024-10-09 11:14:06.228498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:97480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.891 [2024-10-09 11:14:06.228505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.891 [2024-10-09 11:14:06.228514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:97488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.891 [2024-10-09 11:14:06.228521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.891 [2024-10-09 11:14:06.228531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:97496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.891 [2024-10-09 11:14:06.228538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.891 [2024-10-09 11:14:06.228548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:97504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:00.891 [2024-10-09 11:14:06.228556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.891 [2024-10-09 11:14:06.228565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:97512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.891 [2024-10-09 11:14:06.228573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.891 [2024-10-09 11:14:06.228582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:97520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.891 [2024-10-09 11:14:06.228589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.891 [2024-10-09 11:14:06.228599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:97528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.891 [2024-10-09 11:14:06.228606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.891 [2024-10-09 11:14:06.228615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:97536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.891 [2024-10-09 11:14:06.228623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.891 [2024-10-09 11:14:06.228633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:97544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.891 [2024-10-09 11:14:06.228640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.891 [2024-10-09 11:14:06.228650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:97552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.891 [2024-10-09 11:14:06.228657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.891 [2024-10-09 11:14:06.228666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:97560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.891 [2024-10-09 11:14:06.228674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.891 [2024-10-09 11:14:06.228683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:97568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.892 [2024-10-09 11:14:06.228690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.892 [2024-10-09 11:14:06.228700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:97576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.892 [2024-10-09 11:14:06.228708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.892 [2024-10-09 11:14:06.228718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:97584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.892 [2024-10-09 11:14:06.228725] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.892 [2024-10-09 11:14:06.228735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:97592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.892 [2024-10-09 11:14:06.228742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.892 [2024-10-09 11:14:06.228751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:97600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.892 [2024-10-09 11:14:06.228758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.892 [2024-10-09 11:14:06.228768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:97608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.892 [2024-10-09 11:14:06.228777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.892 [2024-10-09 11:14:06.228787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:97616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.892 [2024-10-09 11:14:06.228794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.892 [2024-10-09 11:14:06.228803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.892 [2024-10-09 11:14:06.228810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.892 [2024-10-09 11:14:06.228820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:97632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.892 [2024-10-09 11:14:06.228827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.892 [2024-10-09 11:14:06.228837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:97640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.892 [2024-10-09 11:14:06.228844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.892 [2024-10-09 11:14:06.228853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:97648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.892 [2024-10-09 11:14:06.228860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.892 [2024-10-09 11:14:06.228869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:97656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.892 [2024-10-09 11:14:06.228877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.892 [2024-10-09 11:14:06.228886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:97664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.892 [2024-10-09 11:14:06.228894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.892 [2024-10-09 11:14:06.228903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:97672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.892 [2024-10-09 11:14:06.228910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.892 [2024-10-09 11:14:06.228919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:97680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.892 [2024-10-09 11:14:06.228926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.892 [2024-10-09 11:14:06.228935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:97688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.892 [2024-10-09 11:14:06.228943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.892 [2024-10-09 11:14:06.228952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:97696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.892 [2024-10-09 11:14:06.228959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.892 [2024-10-09 11:14:06.228968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:97704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.892 [2024-10-09 11:14:06.228975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.892 [2024-10-09 11:14:06.228990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:97712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.892 [2024-10-09 11:14:06.228997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.892 [2024-10-09 11:14:06.229007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:97720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.892 [2024-10-09 11:14:06.229014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.892 [2024-10-09 11:14:06.229023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:97728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.892 [2024-10-09 11:14:06.229030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.892 [2024-10-09 11:14:06.229040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:97736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.892 [2024-10-09 11:14:06.229047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.892 [2024-10-09 11:14:06.229056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:97744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.892 [2024-10-09 11:14:06.229063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:34:00.892 [2024-10-09 11:14:06.229073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:97752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.892 [2024-10-09 11:14:06.229080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.892 [2024-10-09 11:14:06.229089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:97760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.892 [2024-10-09 11:14:06.229096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.892 [2024-10-09 11:14:06.229105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:97768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.892 [2024-10-09 11:14:06.229113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.892 [2024-10-09 11:14:06.229123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:97776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.892 [2024-10-09 11:14:06.229130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.892 [2024-10-09 11:14:06.229140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:97784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.892 [2024-10-09 11:14:06.229146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.892 [2024-10-09 11:14:06.229155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:97792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.892 [2024-10-09 11:14:06.229163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.892 [2024-10-09 11:14:06.229173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:97800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.892 [2024-10-09 11:14:06.229180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.892 [2024-10-09 11:14:06.229189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:97808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.892 [2024-10-09 11:14:06.229198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.892 [2024-10-09 11:14:06.229207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:97816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.892 [2024-10-09 11:14:06.229214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.892 [2024-10-09 11:14:06.229224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:97824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.892 [2024-10-09 11:14:06.229232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.892 [2024-10-09 11:14:06.229241] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:97832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.892 [2024-10-09 11:14:06.229248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.892 [2024-10-09 11:14:06.229257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:97840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.892 [2024-10-09 11:14:06.229264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.892 [2024-10-09 11:14:06.229273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:97848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.892 [2024-10-09 11:14:06.229281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.892 [2024-10-09 11:14:06.229290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:97856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.892 [2024-10-09 11:14:06.229298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.892 [2024-10-09 11:14:06.229307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:97864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.892 [2024-10-09 11:14:06.229314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.892 [2024-10-09 11:14:06.229323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:97872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.892 [2024-10-09 11:14:06.229330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.892 [2024-10-09 11:14:06.229339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:97880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.892 [2024-10-09 11:14:06.229347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.892 [2024-10-09 11:14:06.229356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:97888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.892 [2024-10-09 11:14:06.229363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.892 [2024-10-09 11:14:06.229372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:97896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.892 [2024-10-09 11:14:06.229379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.892 [2024-10-09 11:14:06.229388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:97904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.893 [2024-10-09 11:14:06.229396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.893 [2024-10-09 11:14:06.229405] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:97912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.893 [2024-10-09 11:14:06.229414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.893 [2024-10-09 11:14:06.229423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:97920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.893 [2024-10-09 11:14:06.229431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.893 [2024-10-09 11:14:06.229439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:97928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.893 [2024-10-09 11:14:06.229447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.893 [2024-10-09 11:14:06.229456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:97936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.893 [2024-10-09 11:14:06.229464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.893 [2024-10-09 11:14:06.229477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:97944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.893 [2024-10-09 11:14:06.229485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.893 [2024-10-09 11:14:06.229508] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:00.893 [2024-10-09 11:14:06.229517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97952 len:8 PRP1 0x0 PRP2 0x0 00:34:00.893 [2024-10-09 11:14:06.229524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.893 [2024-10-09 11:14:06.229539] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:00.893 [2024-10-09 11:14:06.229545] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:00.893 [2024-10-09 11:14:06.229551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97960 len:8 PRP1 0x0 PRP2 0x0 00:34:00.893 [2024-10-09 11:14:06.229558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.893 [2024-10-09 11:14:06.229565] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:00.893 [2024-10-09 11:14:06.229571] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:00.893 [2024-10-09 11:14:06.229578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97968 len:8 PRP1 0x0 PRP2 0x0 00:34:00.893 [2024-10-09 11:14:06.229584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.893 [2024-10-09 11:14:06.229592] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:00.893 [2024-10-09 11:14:06.229598] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 
00:34:00.893 [2024-10-09 11:14:06.229603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97976 len:8 PRP1 0x0 PRP2 0x0 00:34:00.893 [2024-10-09 11:14:06.229610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.893 [2024-10-09 11:14:06.229618] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:00.893 [2024-10-09 11:14:06.229623] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:00.893 [2024-10-09 11:14:06.229629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97984 len:8 PRP1 0x0 PRP2 0x0 00:34:00.893 [2024-10-09 11:14:06.229636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.893 [2024-10-09 11:14:06.229646] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:00.893 [2024-10-09 11:14:06.229652] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:00.893 [2024-10-09 11:14:06.229658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97992 len:8 PRP1 0x0 PRP2 0x0 00:34:00.893 [2024-10-09 11:14:06.229665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.893 [2024-10-09 11:14:06.229672] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:00.893 [2024-10-09 11:14:06.229678] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:00.893 [2024-10-09 11:14:06.229686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98000 len:8 PRP1 0x0 PRP2 0x0 00:34:00.893 [2024-10-09 11:14:06.229693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.893 [2024-10-09 11:14:06.229701] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:00.893 [2024-10-09 11:14:06.229706] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:00.893 [2024-10-09 11:14:06.229712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98008 len:8 PRP1 0x0 PRP2 0x0 00:34:00.893 [2024-10-09 11:14:06.229719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.893 [2024-10-09 11:14:06.229726] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:00.893 [2024-10-09 11:14:06.229731] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:00.893 [2024-10-09 11:14:06.229737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98016 len:8 PRP1 0x0 PRP2 0x0 00:34:00.893 [2024-10-09 11:14:06.229745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.893 [2024-10-09 11:14:06.229752] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:00.893 [2024-10-09 11:14:06.229758] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:00.893 [2024-10-09 11:14:06.229763] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98024 len:8 PRP1 0x0 PRP2 0x0 00:34:00.893 [2024-10-09 11:14:06.229770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.893 [2024-10-09 11:14:06.229777] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:00.893 [2024-10-09 11:14:06.229783] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:00.893 [2024-10-09 11:14:06.229789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98032 len:8 PRP1 0x0 PRP2 0x0 00:34:00.893 [2024-10-09 11:14:06.229795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.893 [2024-10-09 11:14:06.229804] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:00.893 [2024-10-09 11:14:06.229809] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:00.893 [2024-10-09 11:14:06.229815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98040 len:8 PRP1 0x0 PRP2 0x0 00:34:00.893 [2024-10-09 11:14:06.229822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.893 [2024-10-09 11:14:06.229858] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x212c2d0 was disconnected and freed. reset controller. 00:34:00.893 [2024-10-09 11:14:06.229868] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:34:00.893 [2024-10-09 11:14:06.229889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:00.893 [2024-10-09 11:14:06.229899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.893 [2024-10-09 11:14:06.229908] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:00.893 [2024-10-09 11:14:06.229916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.893 [2024-10-09 11:14:06.229924] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:00.893 [2024-10-09 11:14:06.229931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.893 [2024-10-09 11:14:06.229939] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:00.893 [2024-10-09 11:14:06.229946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.893 [2024-10-09 11:14:06.229962] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
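[Editor's note] Everything from "Running I/O for 15 seconds..." down to the nvme_ctrlr_fail error above is the host-side record of the first path drop: each command still queued on the 10.0.0.2:4420 qpair is manually completed as ABORTED - SQ DELETION, the qpair is freed, and bdev_nvme begins failover to 10.0.0.2:4421; the notices just below show the disconnect and the successful controller reset onto the new path. Dumps like this are easier to audit in aggregate, for example against a saved try.txt (path assumed):

  grep -c 'ABORTED - SQ DELETION' try.txt       # total commands aborted across all failovers
  grep -E 'Start failover from|resetting controller|Resetting controller successful' try.txt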
00:34:00.893 [2024-10-09 11:14:06.229999] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x210bb20 (9): Bad file descriptor 00:34:00.893 [2024-10-09 11:14:06.233556] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.893 [2024-10-09 11:14:06.278886] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:34:00.893 11059.50 IOPS, 43.20 MiB/s [2024-10-09T09:14:20.895Z] 11062.67 IOPS, 43.21 MiB/s [2024-10-09T09:14:20.895Z] 11104.75 IOPS, 43.38 MiB/s [2024-10-09T09:14:20.895Z] [2024-10-09 11:14:09.793987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:29952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.893 [2024-10-09 11:14:09.794033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.893 [2024-10-09 11:14:09.794048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:29960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.893 [2024-10-09 11:14:09.794056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.893 [2024-10-09 11:14:09.794067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.893 [2024-10-09 11:14:09.794074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.893 [2024-10-09 11:14:09.794084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:29976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.893 [2024-10-09 11:14:09.794091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.893 [2024-10-09 11:14:09.794101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:29984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.893 [2024-10-09 11:14:09.794108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.893 [2024-10-09 11:14:09.794117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.893 [2024-10-09 11:14:09.794124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.893 [2024-10-09 11:14:09.794134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:30000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.893 [2024-10-09 11:14:09.794141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.893 [2024-10-09 11:14:09.794157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:30008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.893 [2024-10-09 11:14:09.794165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.893 [2024-10-09 11:14:09.794180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:30016 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.893 [2024-10-09 11:14:09.794192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.893 [2024-10-09 11:14:09.794203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:30024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.894 [2024-10-09 11:14:09.794210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.894 [2024-10-09 11:14:09.794220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:30032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.894 [2024-10-09 11:14:09.794228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.894 [2024-10-09 11:14:09.794237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:30040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.894 [2024-10-09 11:14:09.794244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.894 [2024-10-09 11:14:09.794253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:30048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.894 [2024-10-09 11:14:09.794261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.894 [2024-10-09 11:14:09.794272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:30056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.894 [2024-10-09 11:14:09.794279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.894 [2024-10-09 11:14:09.794289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:30064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.894 [2024-10-09 11:14:09.794296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.894 [2024-10-09 11:14:09.794306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:30072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.894 [2024-10-09 11:14:09.794314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.894 [2024-10-09 11:14:09.794326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:30080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.894 [2024-10-09 11:14:09.794335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.894 [2024-10-09 11:14:09.794344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:30088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.894 [2024-10-09 11:14:09.794352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.894 [2024-10-09 11:14:09.794361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:30096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:00.894 [2024-10-09 11:14:09.794369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.894 [2024-10-09 11:14:09.794378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:30104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.894 [2024-10-09 11:14:09.794393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.894 [2024-10-09 11:14:09.794405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:30112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.894 [2024-10-09 11:14:09.794412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.894 [2024-10-09 11:14:09.794422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:30120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.894 [2024-10-09 11:14:09.794430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.894 [2024-10-09 11:14:09.794440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:30128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.894 [2024-10-09 11:14:09.794449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.894 [2024-10-09 11:14:09.794459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:30136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.894 [2024-10-09 11:14:09.794472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.894 [2024-10-09 11:14:09.794482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:30144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.894 [2024-10-09 11:14:09.794489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.894 [2024-10-09 11:14:09.794499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:30152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.894 [2024-10-09 11:14:09.794506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.894 [2024-10-09 11:14:09.794517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:30160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.894 [2024-10-09 11:14:09.794525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.894 [2024-10-09 11:14:09.794535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:30168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.894 [2024-10-09 11:14:09.794543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.894 [2024-10-09 11:14:09.794553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:30176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.894 [2024-10-09 11:14:09.794561] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.894 [2024-10-09 11:14:09.794570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:30184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.894 [2024-10-09 11:14:09.794578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.894 [2024-10-09 11:14:09.794587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.894 [2024-10-09 11:14:09.794595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.894 [2024-10-09 11:14:09.794605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.894 [2024-10-09 11:14:09.794612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.894 [2024-10-09 11:14:09.794622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:30208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.894 [2024-10-09 11:14:09.794631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.894 [2024-10-09 11:14:09.794642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:30216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.894 [2024-10-09 11:14:09.794650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.894 [2024-10-09 11:14:09.794659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:30224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.894 [2024-10-09 11:14:09.794666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.894 [2024-10-09 11:14:09.794676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:30232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.894 [2024-10-09 11:14:09.794683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.894 [2024-10-09 11:14:09.794692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:30240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.894 [2024-10-09 11:14:09.794700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.894 [2024-10-09 11:14:09.794709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:30248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.894 [2024-10-09 11:14:09.794716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.894 [2024-10-09 11:14:09.794726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:30256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.894 [2024-10-09 11:14:09.794733] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.894 [2024-10-09 11:14:09.794742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:30264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.894 [2024-10-09 11:14:09.794749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.894 [2024-10-09 11:14:09.794759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:30272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.894 [2024-10-09 11:14:09.794766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.894 [2024-10-09 11:14:09.794776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:30280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.894 [2024-10-09 11:14:09.794784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.894 [2024-10-09 11:14:09.794793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:30288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.894 [2024-10-09 11:14:09.794801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.894 [2024-10-09 11:14:09.794810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:30296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.894 [2024-10-09 11:14:09.794817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.894 [2024-10-09 11:14:09.794826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:30304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.894 [2024-10-09 11:14:09.794834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.895 [2024-10-09 11:14:09.794844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:30312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.895 [2024-10-09 11:14:09.794852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.895 [2024-10-09 11:14:09.794862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:30320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.895 [2024-10-09 11:14:09.794869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.895 [2024-10-09 11:14:09.794878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:30328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.895 [2024-10-09 11:14:09.794885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.895 [2024-10-09 11:14:09.794895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:30336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.895 [2024-10-09 11:14:09.794902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.895 [2024-10-09 11:14:09.794911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:30344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.895 [2024-10-09 11:14:09.794919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.895 [2024-10-09 11:14:09.794928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:30352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.895 [2024-10-09 11:14:09.794935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.895 [2024-10-09 11:14:09.794945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:30360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.895 [2024-10-09 11:14:09.794952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.895 [2024-10-09 11:14:09.794961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:30368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.895 [2024-10-09 11:14:09.794969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.895 [2024-10-09 11:14:09.794979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:30376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.895 [2024-10-09 11:14:09.794986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.895 [2024-10-09 11:14:09.794995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:30384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.895 [2024-10-09 11:14:09.795003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.895 [2024-10-09 11:14:09.795012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:30392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.895 [2024-10-09 11:14:09.795019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.895 [2024-10-09 11:14:09.795028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:30400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.895 [2024-10-09 11:14:09.795035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.895 [2024-10-09 11:14:09.795045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:30408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.895 [2024-10-09 11:14:09.795055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.895 [2024-10-09 11:14:09.795065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:30416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.895 [2024-10-09 11:14:09.795072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.895 [2024-10-09 11:14:09.795081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:30424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.895 [2024-10-09 11:14:09.795089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.895 [2024-10-09 11:14:09.795098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:30432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.895 [2024-10-09 11:14:09.795105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.895 [2024-10-09 11:14:09.795115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:30440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.895 [2024-10-09 11:14:09.795122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.895 [2024-10-09 11:14:09.795131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:30448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.895 [2024-10-09 11:14:09.795139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.895 [2024-10-09 11:14:09.795148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:30456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.895 [2024-10-09 11:14:09.795155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.895 [2024-10-09 11:14:09.795165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:30464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.895 [2024-10-09 11:14:09.795172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.895 [2024-10-09 11:14:09.795182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:30472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.895 [2024-10-09 11:14:09.795190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.895 [2024-10-09 11:14:09.795199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:30480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.895 [2024-10-09 11:14:09.795206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.895 [2024-10-09 11:14:09.795216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:30488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.895 [2024-10-09 11:14:09.795224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.895 [2024-10-09 11:14:09.795234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:30496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.895 [2024-10-09 11:14:09.795241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:34:00.895 [2024-10-09 11:14:09.795250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:30504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.895 [2024-10-09 11:14:09.795257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.895 [2024-10-09 11:14:09.795269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.895 [2024-10-09 11:14:09.795277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.895 [2024-10-09 11:14:09.795286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:30520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.895 [2024-10-09 11:14:09.795294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.895 [2024-10-09 11:14:09.795303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.895 [2024-10-09 11:14:09.795310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.895 [2024-10-09 11:14:09.795320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:30536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.895 [2024-10-09 11:14:09.795327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.895 [2024-10-09 11:14:09.795337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:30544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.895 [2024-10-09 11:14:09.795345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.895 [2024-10-09 11:14:09.795354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.895 [2024-10-09 11:14:09.795361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.895 [2024-10-09 11:14:09.795371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:30560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.895 [2024-10-09 11:14:09.795378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.895 [2024-10-09 11:14:09.795388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.895 [2024-10-09 11:14:09.795395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.895 [2024-10-09 11:14:09.795405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.895 [2024-10-09 11:14:09.795412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.895 [2024-10-09 
11:14:09.795421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:30584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.895 [2024-10-09 11:14:09.795428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.895 [2024-10-09 11:14:09.795438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:30592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.895 [2024-10-09 11:14:09.795445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.895 [2024-10-09 11:14:09.795455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:30600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.895 [2024-10-09 11:14:09.795462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.895 [2024-10-09 11:14:09.795474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.895 [2024-10-09 11:14:09.795483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.895 [2024-10-09 11:14:09.795493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:30672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.895 [2024-10-09 11:14:09.795500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.895 [2024-10-09 11:14:09.795509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:30680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.895 [2024-10-09 11:14:09.795516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.895 [2024-10-09 11:14:09.795527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:30688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.895 [2024-10-09 11:14:09.795535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.895 [2024-10-09 11:14:09.795544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:30696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.896 [2024-10-09 11:14:09.795551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.896 [2024-10-09 11:14:09.795561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:30704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.896 [2024-10-09 11:14:09.795569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.896 [2024-10-09 11:14:09.795579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:30712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.896 [2024-10-09 11:14:09.795587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.896 [2024-10-09 11:14:09.795596] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:30720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.896 [2024-10-09 11:14:09.795603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.896 [2024-10-09 11:14:09.795613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:30728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.896 [2024-10-09 11:14:09.795620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.896 [2024-10-09 11:14:09.795629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:30736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.896 [2024-10-09 11:14:09.795637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.896 [2024-10-09 11:14:09.795646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:30744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.896 [2024-10-09 11:14:09.795653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.896 [2024-10-09 11:14:09.795662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:30752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.896 [2024-10-09 11:14:09.795670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.896 [2024-10-09 11:14:09.795679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:30760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.896 [2024-10-09 11:14:09.795686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.896 [2024-10-09 11:14:09.795696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:30768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.896 [2024-10-09 11:14:09.795704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.896 [2024-10-09 11:14:09.795713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:30776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.896 [2024-10-09 11:14:09.795722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.896 [2024-10-09 11:14:09.795731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:30784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.896 [2024-10-09 11:14:09.795739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.896 [2024-10-09 11:14:09.795748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:30792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.896 [2024-10-09 11:14:09.795755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.896 [2024-10-09 11:14:09.795765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:104 nsid:1 lba:30800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.896 [2024-10-09 11:14:09.795772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.896 [2024-10-09 11:14:09.795781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:30808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.896 [2024-10-09 11:14:09.795789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.896 [2024-10-09 11:14:09.795798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:30816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.896 [2024-10-09 11:14:09.795806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.896 [2024-10-09 11:14:09.795815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:30824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.896 [2024-10-09 11:14:09.795822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.896 [2024-10-09 11:14:09.795831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:30832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.896 [2024-10-09 11:14:09.795838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.896 [2024-10-09 11:14:09.795847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:30840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.896 [2024-10-09 11:14:09.795855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.896 [2024-10-09 11:14:09.795865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:30848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.896 [2024-10-09 11:14:09.795873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.896 [2024-10-09 11:14:09.795882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:30856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.896 [2024-10-09 11:14:09.795889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.896 [2024-10-09 11:14:09.795898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:30864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.896 [2024-10-09 11:14:09.795905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.896 [2024-10-09 11:14:09.795917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:30872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.896 [2024-10-09 11:14:09.795925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.896 [2024-10-09 11:14:09.795934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:30880 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:34:00.896 [2024-10-09 11:14:09.795941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.896 [2024-10-09 11:14:09.795950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:30888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.896 [2024-10-09 11:14:09.795957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.896 [2024-10-09 11:14:09.795967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:30896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.896 [2024-10-09 11:14:09.795974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.896 [2024-10-09 11:14:09.795984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:30904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.896 [2024-10-09 11:14:09.795991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.896 [2024-10-09 11:14:09.796000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:30912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.896 [2024-10-09 11:14:09.796007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.896 [2024-10-09 11:14:09.796016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:30920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.896 [2024-10-09 11:14:09.796024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.896 [2024-10-09 11:14:09.796034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:30928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.896 [2024-10-09 11:14:09.796041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.896 [2024-10-09 11:14:09.796050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:30936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.896 [2024-10-09 11:14:09.796058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.896 [2024-10-09 11:14:09.796067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:30944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.896 [2024-10-09 11:14:09.796074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.896 [2024-10-09 11:14:09.796084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:30952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.896 [2024-10-09 11:14:09.796091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.896 [2024-10-09 11:14:09.796100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:30960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.896 [2024-10-09 
11:14:09.796108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.896 [2024-10-09 11:14:09.796117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:30968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.896 [2024-10-09 11:14:09.796125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.896 [2024-10-09 11:14:09.796135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:30616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.896 [2024-10-09 11:14:09.796142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.896 [2024-10-09 11:14:09.796151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:30624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.896 [2024-10-09 11:14:09.796159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.896 [2024-10-09 11:14:09.796169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:30632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.896 [2024-10-09 11:14:09.796176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.896 [2024-10-09 11:14:09.796185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:30640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.896 [2024-10-09 11:14:09.796192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.896 [2024-10-09 11:14:09.796202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:30648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.896 [2024-10-09 11:14:09.796209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.896 [2024-10-09 11:14:09.796219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:30656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.896 [2024-10-09 11:14:09.796226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.896 [2024-10-09 11:14:09.796248] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:00.896 [2024-10-09 11:14:09.796255] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:00.896 [2024-10-09 11:14:09.796262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:30664 len:8 PRP1 0x0 PRP2 0x0 00:34:00.897 [2024-10-09 11:14:09.796269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.897 [2024-10-09 11:14:09.796307] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x212e2a0 was disconnected and freed. reset controller. 
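The long run of paired NOTICE lines above is SPDK draining a dead TCP qpair: once nvme_tcp_qpair_process_completions fails to flush tqpair 0x210bb20 with errno 9 (Bad file descriptor, i.e. EBADF), every I/O still queued on sqid:1 is manually completed with ABORTED - SQ DELETION (00/08), then qpair 0x212e2a0 is disconnected and freed and the controller is reset. Bursts like this are easier to triage collapsed into counts. A minimal sketch of a triage helper (hypothetical, not part of the SPDK tree; it keys only on strings visible in this log):

#!/usr/bin/env python3
# Summarize "ABORTED - SQ DELETION" bursts in an SPDK autotest console log.
# Usage: python3 summarize_aborts.py < console.log
import re
import sys
from collections import Counter

# Matches the nvme_io_qpair_print_command NOTICE records, e.g.
#   *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:29952 len:8 ...
CMD_RE = re.compile(
    r"\*NOTICE\*: (READ|WRITE) sqid:(\d+) cid:(\d+) nsid:(\d+) lba:(\d+) len:(\d+)")

def summarize(text: str) -> None:
    ops = Counter()
    lbas = []
    for m in CMD_RE.finditer(text):
        op, _sqid, _cid, _nsid, lba, _length = m.groups()
        ops[op] += 1
        lbas.append(int(lba))
    # Each aborted command is followed by one spdk_nvme_print_completion record.
    print("aborted completions:", text.count("ABORTED - SQ DELETION"))
    for op, n in sorted(ops.items()):
        print(f"  {op:5s} commands printed: {n}")
    if lbas:
        print(f"  LBA range touched: {min(lbas)}..{max(lbas)}")

if __name__ == "__main__":
    summarize(sys.stdin.read())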
00:34:00.897 [2024-10-09 11:14:09.796318] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:34:00.897 [2024-10-09 11:14:09.796338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:00.897 [2024-10-09 11:14:09.796346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.897 [2024-10-09 11:14:09.796354] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:00.897 [2024-10-09 11:14:09.796362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.897 [2024-10-09 11:14:09.796370] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:00.897 [2024-10-09 11:14:09.796378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.897 [2024-10-09 11:14:09.796386] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:00.897 [2024-10-09 11:14:09.796395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.897 [2024-10-09 11:14:09.796403] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:00.897 [2024-10-09 11:14:09.796427] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x210bb20 (9): Bad file descriptor 00:34:00.897 [2024-10-09 11:14:09.800004] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:00.897 [2024-10-09 11:14:09.879286] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
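That reset is part of a path failover: bdev_nvme_failover_trid switches nqn.2016-06.io.spdk:cnode1 from 10.0.0.2:4421 to 10.0.0.2:4422, the admin queue's outstanding ASYNC EVENT REQUESTs are aborted by the same SQ deletion, nvme_ctrlr_fail marks the controller failed, and the disconnect/reset then completes successfully. The bare "NNNNN.NN IOPS, NN.NN MiB/s" samples interleaved between bursts show what each reset costs in throughput. A companion sketch (same caveats as above: hypothetical helper, event strings taken verbatim from this log) that recovers the failover/reset timeline and the IOPS samples:

#!/usr/bin/env python3
# Extract failover/reset events and IOPS samples from an SPDK autotest log.
# Usage: python3 reset_timeline.py < console.log
import bisect
import re
import sys

# Wall-clock stamps printed before each NOTICE, e.g. [2024-10-09 11:14:09.796318]
TS_RE = re.compile(r"\[(\d{4}-\d{2}-\d{2} [\d:.]+)\]")
IOPS_RE = re.compile(r"([\d.]+) IOPS, ([\d.]+) MiB/s")
EVENTS = ("Start failover from", "resetting controller",
          "Resetting controller successful.")

def timeline(text: str) -> None:
    # Index every timestamp by its offset so each event can be paired with
    # the nearest stamp printed before it in the stream.
    stamps = [(m.start(), m.group(1)) for m in TS_RE.finditer(text)]
    offsets = [p for p, _ in stamps]
    found = []
    for kw in EVENTS:
        for m in re.finditer(re.escape(kw), text):
            i = bisect.bisect_right(offsets, m.start()) - 1
            found.append((stamps[i][1] if i >= 0 else "?", kw))
    for ts, kw in sorted(found):
        print(ts, "-", kw)
    samples = [float(v) for v, _ in IOPS_RE.findall(text)]
    if samples:
        print(f"{len(samples)} IOPS samples, "
              f"min {min(samples):.2f}, max {max(samples):.2f}")

if __name__ == "__main__":
    timeline(sys.stdin.read())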
00:34:00.897 10980.60 IOPS, 42.89 MiB/s [2024-10-09T09:14:20.899Z] 11016.50 IOPS, 43.03 MiB/s [2024-10-09T09:14:20.899Z] 11018.00 IOPS, 43.04 MiB/s [2024-10-09T09:14:20.899Z] 11044.50 IOPS, 43.14 MiB/s [2024-10-09T09:14:20.899Z] 11050.00 IOPS, 43.16 MiB/s [2024-10-09T09:14:20.899Z] [2024-10-09 11:14:14.168833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:45208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.897 [2024-10-09 11:14:14.168869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.897 [2024-10-09 11:14:14.168885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:45216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.897 [2024-10-09 11:14:14.168894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.897 [2024-10-09 11:14:14.168904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:45224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.897 [2024-10-09 11:14:14.168912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.897 [2024-10-09 11:14:14.168922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:45232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.897 [2024-10-09 11:14:14.168929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.897 [2024-10-09 11:14:14.168939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:45240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.897 [2024-10-09 11:14:14.168946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.897 [2024-10-09 11:14:14.168956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:45248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.897 [2024-10-09 11:14:14.168963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.897 [2024-10-09 11:14:14.168973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:45256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.897 [2024-10-09 11:14:14.168980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.897 [2024-10-09 11:14:14.168990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:45264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.897 [2024-10-09 11:14:14.168997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.897 [2024-10-09 11:14:14.169007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:45272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.897 [2024-10-09 11:14:14.169014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.897 [2024-10-09 11:14:14.169024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 
lba:45280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.897 [2024-10-09 11:14:14.169031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.897 [2024-10-09 11:14:14.169047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:45288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.897 [2024-10-09 11:14:14.169055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.897 [2024-10-09 11:14:14.169064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:45296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.897 [2024-10-09 11:14:14.169072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.897 [2024-10-09 11:14:14.169081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:45304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:00.897 [2024-10-09 11:14:14.169089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.897 [2024-10-09 11:14:14.169100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:44360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.897 [2024-10-09 11:14:14.169107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.897 [2024-10-09 11:14:14.169117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:44368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.897 [2024-10-09 11:14:14.169125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.897 [2024-10-09 11:14:14.169134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:44376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.897 [2024-10-09 11:14:14.169141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.897 [2024-10-09 11:14:14.169151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:44384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.897 [2024-10-09 11:14:14.169158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.897 [2024-10-09 11:14:14.169168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:44392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.897 [2024-10-09 11:14:14.169176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.897 [2024-10-09 11:14:14.169186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:44400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.897 [2024-10-09 11:14:14.169194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:00.897 [2024-10-09 11:14:14.169203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:44408 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0
00:34:00.897 [2024-10-09 11:14:14.169211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:00.897 [2024-10-09 11:14:14.169221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:44416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:00.897 [2024-10-09 11:14:14.169228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[11:14:14.169237-11:14:14.171107: ~105 further command/completion pairs in the same pattern elided — READ sqid:1 lba:44424-45192 len:8 (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and WRITE sqid:1 lba:45312-45376 len:8 (SGL DATA BLOCK OFFSET 0x0 len:0x1000), each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:34:00.900 [2024-10-09 11:14:14.171116] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x212ecc0 is same with the state(6) to be set
00:34:00.900 [2024-10-09 11:14:14.171125] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:34:00.900 [2024-10-09 11:14:14.171131] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:34:00.900 [2024-10-09 11:14:14.171138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45200 len:8 PRP1 0x0 PRP2 0x0
00:34:00.900 [2024-10-09 11:14:14.171146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:00.900 [2024-10-09 11:14:14.171184] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x212ecc0 was disconnected and freed. reset controller.
00:34:00.900 [2024-10-09 11:14:14.171194] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:34:00.900 [2024-10-09 11:14:14.171215] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:34:00.900 [2024-10-09 11:14:14.171229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:00.900 [2024-10-09 11:14:14.171238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:34:00.900 [2024-10-09 11:14:14.171246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:00.900 [2024-10-09 11:14:14.171254] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:34:00.900 [2024-10-09 11:14:14.171262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:00.900 [2024-10-09 11:14:14.171270] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:34:00.900 [2024-10-09 11:14:14.171278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:00.900 [2024-10-09 11:14:14.171285] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:00.900 [2024-10-09 11:14:14.174888] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:00.900 [2024-10-09 11:14:14.174913] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x210bb20 (9): Bad file descriptor
00:34:00.900 [2024-10-09 11:14:14.346101] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
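The notices above capture one complete failover cycle: the TCP qpair to 10.0.0.2:4422 is torn down, every queued READ/WRITE is completed as ABORTED - SQ DELETION, and bdev_nvme retargets the controller at the next registered path before resetting it. For reference, a minimal sketch of how this test assembles such a multipath controller with rpc.py — the addresses, ports, and flags are the ones visible in this run's trace, but the loop itself is an illustration, not a copy of failover.sh:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
SOCK=/var/tmp/bdevperf.sock
NQN=nqn.2016-06.io.spdk:cnode1
# Register three TCP paths under one bdev name; the first call creates NVMe0,
# the later ones (with -x failover) add the alternate trids that failover cycles through.
for port in 4420 4421 4422; do
    "$RPC" -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s "$port" -f ipv4 -n "$NQN" -x failover
done
# Detaching the currently active path forces a failover like the one logged above.
"$RPC" -s "$SOCK" bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n "$NQN"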
00:34:00.900 10956.90 IOPS, 42.80 MiB/s
[2024-10-09T09:14:20.902Z] 10984.82 IOPS, 42.91 MiB/s
[2024-10-09T09:14:20.902Z] 11021.17 IOPS, 43.05 MiB/s
[2024-10-09T09:14:20.902Z] 11034.38 IOPS, 43.10 MiB/s
[2024-10-09T09:14:20.902Z] 11053.29 IOPS, 43.18 MiB/s
00:34:00.900 Latency(us)
00:34:00.900 [2024-10-09T09:14:20.902Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:00.900 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:34:00.900 Verification LBA range: start 0x0 length 0x4000
00:34:00.900 NVMe0n1 : 15.01 11067.24 43.23 705.90 0.00 10844.79 513.20 15765.43
00:34:00.900 [2024-10-09T09:14:20.902Z] ===================================================================================================================
00:34:00.900 [2024-10-09T09:14:20.902Z] Total : 11067.24 43.23 705.90 0.00 10844.79 513.20 15765.43
00:34:00.900 Received shutdown signal, test time was about 15.000000 seconds
00:34:00.900
00:34:00.900 Latency(us)
00:34:00.900 [2024-10-09T09:14:20.902Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:00.900 [2024-10-09T09:14:20.902Z] ===================================================================================================================
00:34:00.900 [2024-10-09T09:14:20.902Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
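A quick consistency check on the tables above: bdevperf was started with -o 4096, so the MiB/s column should equal IOPS x 4096 / 2^20. Verifying the 15-second summary row with plain awk (an editorial sanity check, not part of the test output):

awk 'BEGIN {
    iops = 11067.24; io_size = 4096                          # summary-row IOPS and the -o 4096 I/O size
    printf "%.2f MiB/s\n", iops * io_size / (1024 * 1024)    # prints 43.23, matching the table
}'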
00:34:00.900 11:14:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:34:00.900 11:14:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:34:00.900 11:14:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:34:00.900 11:14:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2057691
00:34:00.900 11:14:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2057691 /var/tmp/bdevperf.sock
00:34:00.900 11:14:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:34:00.900 11:14:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 2057691 ']'
00:34:00.900 11:14:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:34:00.900 11:14:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100
00:34:00.900 11:14:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:34:00.900 11:14:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable
00:34:00.900 11:14:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:34:01.471 11:14:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:34:01.471 11:14:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0
00:34:01.471 11:14:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:34:01.471 [2024-10-09 11:14:21.383602] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:34:01.471 11:14:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:34:01.732 [2024-10-09 11:14:21.567653] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:34:01.732 11:14:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:34:01.993 NVMe0n1
00:34:02.253 11:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:34:02.513
00:34:02.514 11:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:34:02.774
00:34:02.774 11:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:34:02.774 11:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:34:02.774 11:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:34:03.035 11:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:34:06.335 11:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:34:06.335 11:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
00:34:06.335 11:14:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2058713
00:34:06.335 11:14:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:34:06.335 11:14:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 2058713
00:34:07.275 {
00:34:07.275 "results": [
00:34:07.275 {
00:34:07.275 "job": "NVMe0n1",
00:34:07.275 "core_mask": "0x1",
00:34:07.275 "workload": "verify", 00:34:07.275 "status": "finished", 00:34:07.275 "verify_range": { 00:34:07.275 "start": 0, 00:34:07.275 "length": 16384 00:34:07.275 }, 00:34:07.275 "queue_depth": 128, 00:34:07.275 "io_size": 4096, 00:34:07.275 "runtime": 1.007108, 00:34:07.275 "iops": 11283.794786656446, 00:34:07.275 "mibps": 44.077323385376744, 00:34:07.275 "io_failed": 0, 00:34:07.275 "io_timeout": 0, 00:34:07.275 "avg_latency_us": 11288.325788449478, 00:34:07.275 "min_latency_us": 2148.5867023053793, 00:34:07.275 "max_latency_us": 10017.614433678584 00:34:07.275 } 00:34:07.275 ], 00:34:07.275 "core_count": 1 00:34:07.275 } 00:34:07.275 11:14:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:34:07.275 [2024-10-09 11:14:20.425459] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:34:07.275 [2024-10-09 11:14:20.425524] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2057691 ] 00:34:07.275 [2024-10-09 11:14:20.555724] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:34:07.275 [2024-10-09 11:14:20.587275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:07.275 [2024-10-09 11:14:20.603980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:07.275 [2024-10-09 11:14:22.915334] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:34:07.275 [2024-10-09 11:14:22.915379] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:07.275 [2024-10-09 11:14:22.915391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:07.275 [2024-10-09 11:14:22.915401] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:07.275 [2024-10-09 11:14:22.915409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:07.275 [2024-10-09 11:14:22.915418] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:07.275 [2024-10-09 11:14:22.915425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:07.275 [2024-10-09 11:14:22.915433] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:07.275 [2024-10-09 11:14:22.915440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:07.275 [2024-10-09 11:14:22.915448] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:34:07.275 [2024-10-09 11:14:22.915481] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:07.275 [2024-10-09 11:14:22.915497] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d43b20 (9): Bad file descriptor
00:34:07.275 [2024-10-09 11:14:22.936225] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:34:07.275 Running I/O for 1 seconds...
00:34:07.275 11236.00 IOPS, 43.89 MiB/s
00:34:07.275 Latency(us)
00:34:07.275 [2024-10-09T09:14:27.277Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:07.275 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:34:07.275 Verification LBA range: start 0x0 length 0x4000
00:34:07.275 NVMe0n1 : 1.01 11283.79 44.08 0.00 0.00 11288.33 2148.59 10017.61
00:34:07.275 [2024-10-09T09:14:27.277Z] ===================================================================================================================
00:34:07.276 [2024-10-09T09:14:27.278Z] Total : 11283.79 44.08 0.00 0.00 11288.33 2148.59 10017.61
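The JSON results above also hang together internally: with "queue_depth": 128 and "iops": 11283.79 over the 1.007 s run, Little's law (average latency ~ queue depth / IOPS) predicts the reported "avg_latency_us", and IOPS times the 4096-byte I/O size reproduces "mibps". A small awk sanity check (editorial, not part of the suite):

awk 'BEGIN {
    qd = 128; iops = 11283.794786656446                            # "queue_depth" and "iops" from the JSON above
    printf "predicted avg latency: %.0f us\n", qd / iops * 1e6     # ~11344 us vs reported 11288.33; the
                                                                   # small gap is ramp-up/drain around the timed window
    printf "implied throughput: %.2f MiB/s\n", iops * 4096 / 2^20  # 44.08, matching "mibps"
}'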
00:34:07.276 11:14:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
11:14:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:34:07.536 11:14:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:34:07.797 11:14:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:34:08.056 11:14:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:34:08.056 11:14:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:34:08.056 11:14:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:34:11.356 11:14:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:34:11.356 11:14:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:34:11.356 11:14:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 2057691
00:34:11.356 11:14:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 2057691 ']'
00:34:11.356 11:14:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 2057691
00:34:11.356 11:14:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname
00:34:11.356 11:14:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:34:11.356 11:14:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2057691
00:34:11.356 11:14:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:34:11.356 11:14:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:34:11.356 11:14:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2057691'
killing process with pid 2057691
11:14:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 2057691
11:14:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 2057691
11:14:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync
11:14:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:34:11.617 11:14:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
11:14:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
11:14:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini
11:14:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@514 -- # nvmfcleanup
11:14:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync
11:14:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
11:14:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e
11:14:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20}
11:14:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:34:11.617 rmmod nvme_tcp
00:34:11.617 rmmod nvme_fabrics
00:34:11.617 rmmod nvme_keyring
00:34:11.617 11:14:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
11:14:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e
11:14:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0
11:14:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@515 -- # '[' -n 2053985 ']'
11:14:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # killprocess 2053985
11:14:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 2053985 ']'
11:14:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 2053985
11:14:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname
11:14:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
11:14:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2053985
00:34:11.878 11:14:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1
11:14:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
11:14:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2053985'
killing process with pid 2053985
11:14:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 2053985
11:14:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 2053985
11:14:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:34:11.878 11:14:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
11:14:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@522 -- # nvmf_tcp_fini
11:14:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr
11:14:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # iptables-restore
11:14:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # iptables-save
11:14:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
11:14:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
11:14:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns
11:14:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
11:14:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
11:14:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:34:14.426 11:14:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:34:14.426
00:34:14.426 real 0m40.197s
00:34:14.426 user 2m2.843s
00:34:14.426 sys 0m8.503s
00:34:14.426 11:14:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable
00:34:14.426 11:14:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:34:14.426 ************************************
00:34:14.426 END TEST nvmf_failover
00:34:14.426 ************************************
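Every test in this log is driven through the run_test helper in autotest_common.sh, which is what produces the START TEST/END TEST banners and the real/user/sys timing block above. A rough sketch of that wrapper's shape — an illustrative reconstruction from the output seen here, not the actual SPDK implementation:

run_test_sketch() {    # e.g. run_test_sketch nvmf_host_discovery .../discovery.sh --transport=tcp
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"; local rc=$?    # the `time` output is the real/user/sys block in the log
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}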
00:34:14.426 11:14:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp
00:34:14.426 11:14:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:34:14.426 11:14:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable
00:34:14.426 11:14:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:34:14.426 ************************************
00:34:14.426 START TEST nvmf_host_discovery
00:34:14.426 ************************************
00:34:14.426 11:14:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp
00:34:14.426 * Looking for test storage...
00:34:14.426 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:34:14.426 11:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]]
11:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lcov --version
11:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
11:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2
11:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
11:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l
11:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l
11:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-:
11:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1
11:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-:
11:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2
11:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<'
11:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2
11:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1
11:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
11:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in
11:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1
11:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 ))
11:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
11:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1
11:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1
11:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
11:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1
11:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1
11:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2
11:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2
11:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
11:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2
11:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2
11:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
11:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
11:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0
11:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
11:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:34:14.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:34:14.426 --rc genhtml_branch_coverage=1
00:34:14.426 --rc genhtml_function_coverage=1
00:34:14.426 --rc genhtml_legend=1
00:34:14.426 --rc geninfo_all_blocks=1
00:34:14.426 --rc geninfo_unexecuted_blocks=1
00:34:14.426
00:34:14.426 '
11:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:34:14.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:34:14.426 --rc genhtml_branch_coverage=1
00:34:14.426 --rc genhtml_function_coverage=1
00:34:14.426 --rc genhtml_legend=1
00:34:14.426 --rc geninfo_all_blocks=1
00:34:14.426 --rc geninfo_unexecuted_blocks=1
00:34:14.426
00:34:14.426 '
11:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:34:14.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:34:14.426 --rc genhtml_branch_coverage=1
00:34:14.426 --rc genhtml_function_coverage=1
00:34:14.426 --rc genhtml_legend=1
00:34:14.426 --rc geninfo_all_blocks=1
00:34:14.426 --rc geninfo_unexecuted_blocks=1
00:34:14.426
00:34:14.426 '
11:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:34:14.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:34:14.426 --rc genhtml_branch_coverage=1
00:34:14.426 --rc genhtml_function_coverage=1
00:34:14.426 --rc genhtml_legend=1
00:34:14.426 --rc geninfo_all_blocks=1
00:34:14.426 --rc geninfo_unexecuted_blocks=1
00:34:14.426
00:34:14.426 '
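The cmp_versions trace above is the suite checking that the installed lcov (1.15) predates version 2 before choosing coverage options. Condensed, the traced logic amounts to the following — a simplified sketch of the scripts/common.sh approach, where missing fields default to 0 rather than going through the real helper's decimal() normalization:

lt_sketch() {    # lt_sketch 1.15 2 -> true when $1 < $2
    local IFS=.-: v ver1 ver2
    read -ra ver1 <<< "$1"    # split both versions on ".", "-" and ":"
    read -ra ver2 <<< "$2"
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1    # equal versions are not "less than"
}
lt_sketch 1.15 2 && echo "lcov 1.15 < 2"    # matches the trace: 1 < 2 at field 0, return 0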
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:14.426 11:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:14.426 11:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:14.426 11:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:14.426 11:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:14.426 11:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:14.426 11:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:14.426 11:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:14.426 11:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:14.426 11:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:14.426 11:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:14.426 11:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:14.426 11:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:14.426 11:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:14.426 11:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:14.426 11:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:14.426 11:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:14.426 11:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:34:14.426 11:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:14.426 11:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:14.427 11:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:14.427 11:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:14.427 11:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:14.427 11:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:14.427 11:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:34:14.427 11:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:14.427 11:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:34:14.427 11:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:14.427 11:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:14.427 11:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:14.427 11:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:14.427 11:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:14.427 11:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:14.427 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:14.427 11:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:14.427 11:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:14.427 11:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:14.427 11:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:34:14.427 11:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:34:14.427 11:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:34:14.427 11:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:34:14.427 11:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:34:14.427 11:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:34:14.427 11:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:34:14.427 11:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:34:14.427 11:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:14.427 11:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # prepare_net_devs 00:34:14.427 11:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@436 -- # local -g is_hw=no 00:34:14.427 11:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # remove_spdk_ns 00:34:14.427 11:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:14.427 11:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:14.427 11:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:14.427 11:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:34:14.427 11:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:34:14.427 11:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:34:14.427 11:14:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:22.568 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:22.568 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:34:22.568 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:22.568 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:22.568 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:22.568 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:22.568 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:22.568 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:34:22.568 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:22.568 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:34:22.568 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:34:22.568 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:34:22.568 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:34:22.568 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:34:22.568 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:34:22.568 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:22.568 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:22.568 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:22.568 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:22.568 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:22.568 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:22.568 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:22.568 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:22.568 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:22.568 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:22.568 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:22.568 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:22.568 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:22.568 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:22.568 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:22.568 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:22.568 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:22.568 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:22.568 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:22.568 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:34:22.568 Found 0000:31:00.0 (0x8086 - 0x159b) 00:34:22.568 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:22.568 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:22.568 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:22.568 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:22.569 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:22.569 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:22.569 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:34:22.569 Found 0000:31:00.1 (0x8086 - 0x159b) 00:34:22.569 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:22.569 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:22.569 11:14:41 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:22.569 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:22.569 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:22.569 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:22.569 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:22.569 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:22.569 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:22.569 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:22.569 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:22.569 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:22.569 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:22.569 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:22.569 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:22.569 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:34:22.569 Found net devices under 0000:31:00.0: cvl_0_0 00:34:22.569 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:22.569 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:22.569 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:22.569 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:22.569 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:22.569 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:22.569 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:22.569 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:22.569 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:34:22.569 Found net devices under 0000:31:00.1: cvl_0_1 00:34:22.569 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:22.569 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:34:22.569 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # is_hw=yes 00:34:22.569 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:34:22.569 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:34:22.569 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:34:22.569 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:22.569 
11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:22.569 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:22.569 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:22.569 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:22.569 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:22.569 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:22.569 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:22.569 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:22.569 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:22.569 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:22.569 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:22.569 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:22.569 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:22.569 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:22.569 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:22.569 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:22.569 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:22.569 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:22.569 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:22.569 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:22.569 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:22.569 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:22.569 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:22.569 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.650 ms 00:34:22.569 00:34:22.569 --- 10.0.0.2 ping statistics --- 00:34:22.569 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:22.569 rtt min/avg/max/mdev = 0.650/0.650/0.650/0.000 ms 00:34:22.569 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:22.569 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:22.569 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:34:22.569 00:34:22.569 --- 10.0.0.1 ping statistics --- 00:34:22.569 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:22.569 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:34:22.569 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:22.569 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # return 0 00:34:22.569 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:34:22.569 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:22.569 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:34:22.569 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:34:22.569 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:22.569 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:34:22.569 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:34:22.569 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:34:22.569 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:34:22.569 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:22.569 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:22.569 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # nvmfpid=2064099 00:34:22.569 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # waitforlisten 2064099 00:34:22.569 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:34:22.569 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 2064099 ']' 00:34:22.569 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:22.569 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:22.569 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:22.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:22.569 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:22.569 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:22.569 [2024-10-09 11:14:41.656936] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:34:22.569 [2024-10-09 11:14:41.657005] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:22.569 [2024-10-09 11:14:41.798341] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:34:22.569 [2024-10-09 11:14:41.848313] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:22.569 [2024-10-09 11:14:41.874213] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:22.569 [2024-10-09 11:14:41.874259] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:22.569 [2024-10-09 11:14:41.874268] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:22.569 [2024-10-09 11:14:41.874281] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:22.569 [2024-10-09 11:14:41.874288] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:22.569 [2024-10-09 11:14:41.875038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:22.569 11:14:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:22.569 11:14:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:34:22.569 11:14:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:34:22.569 11:14:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:22.569 11:14:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:22.569 11:14:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:22.569 11:14:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:22.569 11:14:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.569 11:14:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:22.569 [2024-10-09 11:14:42.489778] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:22.569 11:14:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.570 11:14:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:34:22.570 11:14:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.570 11:14:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:22.570 [2024-10-09 11:14:42.501914] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:34:22.570 11:14:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.570 11:14:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:34:22.570 11:14:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.570 11:14:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:22.570 null0 00:34:22.570 11:14:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.570 11:14:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:34:22.570 11:14:42 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.570 11:14:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:22.570 null1 00:34:22.570 11:14:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.570 11:14:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:34:22.570 11:14:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.570 11:14:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:22.570 11:14:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.570 11:14:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=2064142 00:34:22.570 11:14:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 2064142 /tmp/host.sock 00:34:22.570 11:14:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:34:22.570 11:14:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 2064142 ']' 00:34:22.570 11:14:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:34:22.570 11:14:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:22.570 11:14:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:34:22.570 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:34:22.570 11:14:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:22.570 11:14:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:22.830 [2024-10-09 11:14:42.595424] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:34:22.830 [2024-10-09 11:14:42.595478] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2064142 ] 00:34:22.830 [2024-10-09 11:14:42.725588] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
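The trace above launches two SPDK applications: the target started earlier inside the cvl_0_0_ns_spdk namespace, and a second nvmf_tgt acting as the host side, controlled over /tmp/host.sock. A minimal sketch of that layout, assuming stock SPDK build paths; the real waitforlisten helper polls the RPC socket via rpc.py, so the simple -S socket test here is an illustrative simplification:

  # Start the NVMe-oF target inside the test network namespace.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!

  # Start a second nvmf_tgt as the "host" application on its own RPC socket.
  ./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &
  hostpid=$!

  # Wait until the host app is listening on its UNIX-domain socket.
  for _ in $(seq 1 100); do
      [[ -S /tmp/host.sock ]] && break
      sleep 0.1
  done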
00:34:22.830 [2024-10-09 11:14:42.756429] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:22.830 [2024-10-09 11:14:42.774811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:23.400 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:23.400 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:34:23.400 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:23.400 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:34:23.400 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.400 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:23.400 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.400 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:34:23.400 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.400 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:23.400 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.400 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:34:23.400 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:34:23.400 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:23.400 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:23.400 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.400 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:23.400 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:23.400 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:23.661 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.661 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:34:23.661 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:34:23.661 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:23.661 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:23.661 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:23.661 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.661 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:23.661 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:23.661 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.661 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:34:23.661 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:34:23.661 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.661 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:23.661 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.661 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:34:23.661 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:23.661 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:23.661 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:23.661 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:23.661 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.661 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:23.661 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.661 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:34:23.661 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:34:23.661 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:23.661 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:23.661 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.661 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:23.661 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:23.661 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:23.661 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.661 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:34:23.661 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:34:23.661 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.661 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:23.661 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.661 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:34:23.661 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:23.661 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:23.661 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 
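The get_subsystem_names and get_bdev_list checks being traced here reduce to an RPC call whose JSON output is flattened into a sorted, space-separated name list; an empty string means nothing is attached yet. A sketch of the same pattern invoked through scripts/rpc.py directly (rpc_cmd in the trace is a thin wrapper around it; the RPC method names and jq filter are taken verbatim from the traced lines):

  # List attached NVMe controllers by name, as one space-separated line.
  get_subsystem_names() {
      ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers \
          | jq -r '.[].name' | sort | xargs
  }

  # List bdevs the same way.
  get_bdev_list() {
      ./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs \
          | jq -r '.[].name' | sort | xargs
  }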
00:34:23.661 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:23.661 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:23.661 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:23.661 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.661 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:34:23.661 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:34:23.661 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:23.661 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:23.661 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.661 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:23.661 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:23.661 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:23.921 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.921 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:34:23.922 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:23.922 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.922 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:23.922 [2024-10-09 11:14:43.706234] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:23.922 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.922 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:34:23.922 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:23.922 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:23.922 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.922 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:23.922 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:23.922 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:23.922 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.922 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:34:23.922 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:34:23.922 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:23.922 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:23.922 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:34:23.922 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:23.922 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:23.922 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:23.922 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.922 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:34:23.922 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:34:23.922 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:34:23.922 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:23.922 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:23.922 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:34:23.922 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:34:23.922 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:23.922 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:34:23.922 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:34:23.922 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:34:23.922 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.922 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:23.922 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.922 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:34:23.922 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:34:23.922 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:34:23.922 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:34:23.922 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:34:23.922 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.922 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:23.922 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.922 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:23.922 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:23.922 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@915 -- # local max=10 00:34:23.922 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:34:23.922 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:34:23.922 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:34:23.922 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:23.922 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:23.922 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:23.922 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:23.922 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.922 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:23.922 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.922 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:34:23.922 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:34:24.492 [2024-10-09 11:14:44.393596] bdev_nvme.c:7153:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:34:24.492 [2024-10-09 11:14:44.393617] bdev_nvme.c:7239:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:34:24.492 [2024-10-09 11:14:44.393631] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:24.492 [2024-10-09 11:14:44.479704] bdev_nvme.c:7082:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:34:24.752 [2024-10-09 11:14:44.585790] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:34:24.752 [2024-10-09 11:14:44.585812] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:34:25.013 11:14:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:34:25.013 11:14:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:34:25.013 11:14:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:34:25.013 11:14:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:25.013 11:14:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:25.013 11:14:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:25.013 11:14:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:25.013 11:14:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:25.013 11:14:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:25.013 11:14:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:25.013 11:14:44 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:25.013 11:14:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:34:25.013 11:14:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:34:25.013 11:14:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:34:25.013 11:14:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:34:25.013 11:14:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:34:25.013 11:14:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:34:25.013 11:14:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:34:25.013 11:14:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:25.013 11:14:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:25.013 11:14:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:25.013 11:14:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:25.013 11:14:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:25.013 11:14:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:25.013 11:14:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:25.273 11:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:34:25.273 11:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:34:25.273 11:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:34:25.273 11:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:34:25.273 11:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:34:25.273 11:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:34:25.273 11:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:34:25.273 11:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:34:25.273 11:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:34:25.273 11:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:34:25.273 11:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:25.273 11:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:34:25.273 11:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:25.273 11:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:34:25.273 11:14:45 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:25.273 11:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:34:25.273 11:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:34:25.273 11:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:34:25.273 11:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:34:25.274 11:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:25.274 11:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:25.274 11:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:34:25.274 11:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:34:25.274 11:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:25.274 11:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:34:25.274 11:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:34:25.274 11:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:34:25.274 11:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:25.274 11:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:25.274 11:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:25.274 11:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:34:25.274 11:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:34:25.274 11:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:34:25.274 11:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:34:25.274 11:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:34:25.274 11:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:25.274 11:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:25.274 11:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:25.274 11:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:25.274 11:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:25.274 11:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:34:25.274 11:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:34:25.274 11:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:34:25.274 11:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:34:25.274 11:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:25.274 11:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:25.274 11:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:25.274 11:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:25.274 11:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:25.274 11:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:25.534 11:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:25.534 11:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:25.534 11:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:34:25.534 11:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:34:25.534 11:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:34:25.534 11:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:25.534 11:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:25.534 11:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:34:25.534 11:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:34:25.534 11:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:25.534 11:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:34:25.534 11:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:34:25.534 11:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:34:25.534 11:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:25.534 11:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:25.534 11:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:25.534 11:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:34:25.534 11:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:34:25.534 11:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:34:25.534 11:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:34:26.487 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:34:26.487 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:26.488 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:34:26.488 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:34:26.488 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:34:26.488 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:26.488 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:26.488 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:26.748 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:34:26.748 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:34:26.748 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:34:26.748 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:34:26.748 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:34:26.748 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:26.748 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:26.748 [2024-10-09 11:14:46.507272] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:26.748 [2024-10-09 11:14:46.507857] bdev_nvme.c:7135:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:34:26.748 [2024-10-09 11:14:46.507885] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:26.748 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:26.748 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:26.748 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:26.748 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@915 -- # local max=10 00:34:26.748 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:34:26.748 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:34:26.748 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:34:26.748 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:26.748 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:26.748 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:26.748 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:26.748 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:26.748 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:26.748 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:26.748 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:26.748 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:34:26.748 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:26.748 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:26.748 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:34:26.748 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:34:26.748 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:34:26.748 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:34:26.748 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:26.748 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:26.748 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:26.748 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:26.748 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:26.748 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:26.748 [2024-10-09 11:14:46.593928] bdev_nvme.c:7077:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:34:26.748 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:26.748 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:26.748 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:34:26.748 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # 
waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:34:26.748 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:34:26.748 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:34:26.748 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:34:26.748 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:34:26.748 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:34:26.748 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:34:26.748 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:34:26.748 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:26.748 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:34:26.748 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:26.748 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:34:26.748 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:26.748 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:34:26.748 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:34:27.009 [2024-10-09 11:14:46.900815] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:34:27.009 [2024-10-09 11:14:46.900834] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:34:27.009 [2024-10-09 11:14:46.900840] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:34:27.950 11:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:34:27.950 11:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:34:27.950 11:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:34:27.950 11:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:34:27.950 11:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:34:27.950 11:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:27.950 11:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:34:27.950 11:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:27.950 11:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:34:27.950 11:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:34:27.950 11:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:34:27.950 11:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:34:27.950 11:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:34:27.950 11:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:34:27.950 11:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:27.950 11:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:27.950 11:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:34:27.950 11:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:34:27.950 11:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:27.950 11:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:34:27.950 11:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:34:27.950 11:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:27.950 11:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:27.950 11:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:34:27.950 11:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:27.950 11:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:34:27.950 11:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:34:27.950 11:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:34:27.951 11:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:34:27.951 11:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:27.951 11:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:27.951 11:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:27.951 [2024-10-09 11:14:47.776191] bdev_nvme.c:7135:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:34:27.951 [2024-10-09 11:14:47.776213] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:27.951 11:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:27.951 11:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:27.951 11:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:27.951 11:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:34:27.951 11:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:34:27.951 11:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:34:27.951 [2024-10-09 11:14:47.783864] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:27.951 [2024-10-09 11:14:47.783884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.951 [2024-10-09 11:14:47.783894] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:27.951 [2024-10-09 11:14:47.783902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.951 [2024-10-09 11:14:47.783911] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:27.951 [2024-10-09 11:14:47.783918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.951 [2024-10-09 11:14:47.783927] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:27.951 [2024-10-09 11:14:47.783935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:27.951 [2024-10-09 11:14:47.783943] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x14c3590 is same with the state(6) to be set 00:34:27.951 11:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:34:27.951 11:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:27.951 11:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:27.951 11:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:27.951 11:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:27.951 11:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:27.951 11:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:27.951 [2024-10-09 11:14:47.793854] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14c3590 (9): Bad file descriptor 00:34:27.951 11:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:27.951 [2024-10-09 11:14:47.803869] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:27.951 [2024-10-09 11:14:47.804196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.951 [2024-10-09 11:14:47.804212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14c3590 with addr=10.0.0.2, port=4420 00:34:27.951 [2024-10-09 11:14:47.804220] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c3590 is same with the state(6) to be set 00:34:27.951 [2024-10-09 11:14:47.804233] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14c3590 (9): Bad file descriptor 00:34:27.951 [2024-10-09 11:14:47.804250] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:27.951 [2024-10-09 11:14:47.804258] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:34:27.951 [2024-10-09 11:14:47.804267] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:27.951 [2024-10-09 11:14:47.804279] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
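The @914-@920 source markers in the waits above pin down the polling helper that drives every "waitforcondition" check in this test. A minimal sketch of common/autotest_common.sh:waitforcondition, reconstructed from those markers rather than copied from the source (the final return is an assumption), looks like this:

waitforcondition() {
    local cond=$1              # @914: the condition string, eval'd each pass
    local max=10               # @915: bounded to ten attempts
    while ((max--)); do        # @916
        if eval "$cond"; then  # @917: e.g. '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
            return 0           # @918: condition met, stop polling
        fi
        sleep 1                # @920: back off one second before retrying
    done
    return 1                   # assumed: give up after roughly ten seconds
}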
00:34:27.951 [2024-10-09 11:14:47.813902] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:27.951 [2024-10-09 11:14:47.814211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.951 [2024-10-09 11:14:47.814224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14c3590 with addr=10.0.0.2, port=4420 00:34:27.951 [2024-10-09 11:14:47.814232] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c3590 is same with the state(6) to be set 00:34:27.951 [2024-10-09 11:14:47.814244] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14c3590 (9): Bad file descriptor 00:34:27.951 [2024-10-09 11:14:47.814260] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:27.951 [2024-10-09 11:14:47.814267] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:34:27.951 [2024-10-09 11:14:47.814274] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:27.951 [2024-10-09 11:14:47.814285] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:27.951 [2024-10-09 11:14:47.823931] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:27.951 [2024-10-09 11:14:47.824281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.951 [2024-10-09 11:14:47.824295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14c3590 with addr=10.0.0.2, port=4420 00:34:27.951 [2024-10-09 11:14:47.824303] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c3590 is same with the state(6) to be set 00:34:27.951 [2024-10-09 11:14:47.824315] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14c3590 (9): Bad file descriptor 00:34:27.951 [2024-10-09 11:14:47.824332] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:27.951 [2024-10-09 11:14:47.824339] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:34:27.951 [2024-10-09 11:14:47.824347] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:27.951 [2024-10-09 11:14:47.824358] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
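The port lists compared against "$NVMF_PORT $NVMF_SECOND_PORT" in the checks above come from the @63 helper in host/discovery.sh. Judging by the rpc_cmd, jq, sort -n, and xargs steps in the trace, a sketch of it (a reconstruction, not the verbatim source) is:

get_subsystem_paths() {
    # List the listener ports (trid.trsvcid) the named controller is
    # currently connected through, e.g. "4420 4421" while both paths are up.
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
        | jq -r '.[].ctrlrs[].trid.trsvcid' \
        | sort -n | xargs
}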
00:34:27.951 11:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:27.951 11:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:34:27.951 [2024-10-09 11:14:47.833965] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:27.951 11:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:27.951 11:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:27.951 [2024-10-09 11:14:47.834325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.951 [2024-10-09 11:14:47.834338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14c3590 with addr=10.0.0.2, port=4420 00:34:27.951 [2024-10-09 11:14:47.834346] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c3590 is same with the state(6) to be set 00:34:27.951 [2024-10-09 11:14:47.834357] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14c3590 (9): Bad file descriptor 00:34:27.951 [2024-10-09 11:14:47.834373] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:27.951 [2024-10-09 11:14:47.834380] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:34:27.951 [2024-10-09 11:14:47.834388] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:27.951 [2024-10-09 11:14:47.834399] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
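The @55 and @59 markers show the two listing helpers the waits poll; both normalize JSON-RPC output into one sorted, space-separated line so it can be compared with a plain ==. A sketch under the same assumptions:

get_subsystem_names() {
    # @59: controller names visible to the host, e.g. "nvme0"
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
}

get_bdev_list() {
    # @55: attached namespaces exposed as bdevs, e.g. "nvme0n1 nvme0n2"
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}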
00:34:27.951 11:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:34:27.951 11:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:34:27.951 11:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:34:27.951 11:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:34:27.951 11:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:27.951 11:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:27.951 11:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:27.951 11:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:27.951 11:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:27.951 11:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:27.951 [2024-10-09 11:14:47.843995] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:27.951 [2024-10-09 11:14:47.844301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.951 [2024-10-09 11:14:47.844314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14c3590 with addr=10.0.0.2, port=4420 00:34:27.951 [2024-10-09 11:14:47.844322] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c3590 is same with the state(6) to be set 00:34:27.951 [2024-10-09 11:14:47.844333] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14c3590 (9): Bad file descriptor 00:34:27.951 [2024-10-09 11:14:47.844344] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:27.951 [2024-10-09 11:14:47.844350] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:34:27.951 [2024-10-09 11:14:47.844357] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:27.951 [2024-10-09 11:14:47.844368] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:27.951 [2024-10-09 11:14:47.854030] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:27.951 [2024-10-09 11:14:47.854150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.951 [2024-10-09 11:14:47.854161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14c3590 with addr=10.0.0.2, port=4420 00:34:27.951 [2024-10-09 11:14:47.854169] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c3590 is same with the state(6) to be set 00:34:27.951 [2024-10-09 11:14:47.854180] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14c3590 (9): Bad file descriptor 00:34:27.951 [2024-10-09 11:14:47.854191] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:27.951 [2024-10-09 11:14:47.854198] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:34:27.951 [2024-10-09 11:14:47.854205] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:27.951 [2024-10-09 11:14:47.854216] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:27.951 [2024-10-09 11:14:47.861819] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:34:27.951 [2024-10-09 11:14:47.861837] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:34:27.952 11:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:27.952 11:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:27.952 11:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:34:27.952 11:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:34:27.952 11:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:34:27.952 11:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:34:27.952 11:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:34:27.952 11:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:34:27.952 11:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:34:27.952 11:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:34:27.952 11:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:34:27.952 11:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:27.952 11:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:34:27.952 11:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:27.952 11:14:47 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:34:27.952 11:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:27.952 11:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:34:27.952 11:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:34:27.952 11:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:34:27.952 11:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:34:27.952 11:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:27.952 11:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:27.952 11:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:34:27.952 11:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:34:27.952 11:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:27.952 11:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:34:27.952 11:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:34:27.952 11:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:34:27.952 11:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:27.952 11:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:27.952 11:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:28.213 11:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:34:28.213 11:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:34:28.213 11:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:34:28.213 11:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:34:28.213 11:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:34:28.213 11:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:28.213 11:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:28.213 11:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:28.213 11:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:34:28.213 11:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:34:28.213 11:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:34:28.213 11:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( 
max-- )) 00:34:28.213 11:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:34:28.213 11:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:34:28.213 11:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:28.213 11:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:28.213 11:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:28.213 11:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:28.213 11:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:28.213 11:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:28.213 11:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:28.213 11:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:34:28.213 11:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:34:28.213 11:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:34:28.213 11:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:34:28.213 11:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:34:28.213 11:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:34:28.213 11:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:34:28.213 11:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:34:28.214 11:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:28.214 11:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:28.214 11:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:28.214 11:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:28.214 11:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:28.214 11:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:28.214 11:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:28.214 11:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:34:28.214 11:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:34:28.214 11:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:34:28.214 11:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:34:28.214 11:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:28.214 11:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && 
((notification_count == expected_count))' 00:34:28.214 11:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:34:28.214 11:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:34:28.214 11:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:28.214 11:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:34:28.214 11:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:34:28.214 11:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:34:28.214 11:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:28.214 11:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:28.214 11:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:28.214 11:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:34:28.214 11:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:34:28.214 11:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:34:28.214 11:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:34:28.214 11:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:28.214 11:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:28.214 11:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:29.596 [2024-10-09 11:14:49.215349] bdev_nvme.c:7153:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:34:29.596 [2024-10-09 11:14:49.215367] bdev_nvme.c:7239:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:34:29.596 [2024-10-09 11:14:49.215380] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:29.596 [2024-10-09 11:14:49.303457] bdev_nvme.c:7082:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:34:29.857 [2024-10-09 11:14:49.611806] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:34:29.857 [2024-10-09 11:14:49.611837] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:34:29.857 11:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:29.857 11:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:29.857 11:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:34:29.857 11:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock 
bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:29.857 11:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:34:29.857 11:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:29.857 11:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:34:29.857 11:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:29.857 11:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:29.857 11:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.857 11:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:29.857 request: 00:34:29.857 { 00:34:29.857 "name": "nvme", 00:34:29.857 "trtype": "tcp", 00:34:29.857 "traddr": "10.0.0.2", 00:34:29.857 "adrfam": "ipv4", 00:34:29.857 "trsvcid": "8009", 00:34:29.857 "hostnqn": "nqn.2021-12.io.spdk:test", 00:34:29.857 "wait_for_attach": true, 00:34:29.857 "method": "bdev_nvme_start_discovery", 00:34:29.857 "req_id": 1 00:34:29.857 } 00:34:29.857 Got JSON-RPC error response 00:34:29.857 response: 00:34:29.857 { 00:34:29.857 "code": -17, 00:34:29.857 "message": "File exists" 00:34:29.857 } 00:34:29.857 11:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:34:29.857 11:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:34:29.857 11:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:29.857 11:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:29.857 11:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:29.858 11:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:34:29.858 11:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:34:29.858 11:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:34:29.858 11:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.858 11:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:34:29.858 11:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:29.858 11:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:34:29.858 11:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:29.858 11:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:34:29.858 11:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:34:29.858 11:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:29.858 11:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:29.858 11:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r 
'.[].name' 00:34:29.858 11:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.858 11:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:29.858 11:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:29.858 11:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:29.858 11:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:29.858 11:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:29.858 11:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:34:29.858 11:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:29.858 11:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:34:29.858 11:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:29.858 11:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:34:29.858 11:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:29.858 11:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:29.858 11:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.858 11:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:29.858 request: 00:34:29.858 { 00:34:29.858 "name": "nvme_second", 00:34:29.858 "trtype": "tcp", 00:34:29.858 "traddr": "10.0.0.2", 00:34:29.858 "adrfam": "ipv4", 00:34:29.858 "trsvcid": "8009", 00:34:29.858 "hostnqn": "nqn.2021-12.io.spdk:test", 00:34:29.858 "wait_for_attach": true, 00:34:29.858 "method": "bdev_nvme_start_discovery", 00:34:29.858 "req_id": 1 00:34:29.858 } 00:34:29.858 Got JSON-RPC error response 00:34:29.858 response: 00:34:29.858 { 00:34:29.858 "code": -17, 00:34:29.858 "message": "File exists" 00:34:29.858 } 00:34:29.858 11:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:34:29.858 11:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:34:29.858 11:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:29.858 11:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:29.858 11:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:29.858 11:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:34:29.858 11:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:34:29.858 11:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r 
'.[].name' 00:34:29.858 11:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.858 11:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:34:29.858 11:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:29.858 11:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:34:29.858 11:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:29.858 11:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:34:29.858 11:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:34:29.858 11:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:29.858 11:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:29.858 11:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:29.858 11:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:29.858 11:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.858 11:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:29.858 11:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:29.858 11:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:29.858 11:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:34:29.858 11:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:34:29.858 11:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:34:29.858 11:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:34:29.858 11:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:29.858 11:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:34:29.858 11:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:29.858 11:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:34:29.858 11:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:30.123 11:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:31.148 [2024-10-09 11:14:50.864380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:31.148 [2024-10-09 11:14:50.864413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160a750 with addr=10.0.0.2, port=8010 00:34:31.148 [2024-10-09 11:14:50.864428] 
nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:34:31.148 [2024-10-09 11:14:50.864436] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:31.148 [2024-10-09 11:14:50.864444] bdev_nvme.c:7221:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:34:32.089 [2024-10-09 11:14:51.864372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:32.089 [2024-10-09 11:14:51.864396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x160a750 with addr=10.0.0.2, port=8010 00:34:32.089 [2024-10-09 11:14:51.864408] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:34:32.089 [2024-10-09 11:14:51.864416] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:32.089 [2024-10-09 11:14:51.864422] bdev_nvme.c:7221:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:34:33.030 [2024-10-09 11:14:52.864031] bdev_nvme.c:7196:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:34:33.030 request: 00:34:33.030 { 00:34:33.030 "name": "nvme_second", 00:34:33.030 "trtype": "tcp", 00:34:33.030 "traddr": "10.0.0.2", 00:34:33.030 "adrfam": "ipv4", 00:34:33.030 "trsvcid": "8010", 00:34:33.030 "hostnqn": "nqn.2021-12.io.spdk:test", 00:34:33.030 "wait_for_attach": false, 00:34:33.030 "attach_timeout_ms": 3000, 00:34:33.030 "method": "bdev_nvme_start_discovery", 00:34:33.030 "req_id": 1 00:34:33.030 } 00:34:33.030 Got JSON-RPC error response 00:34:33.030 response: 00:34:33.030 { 00:34:33.030 "code": -110, 00:34:33.030 "message": "Connection timed out" 00:34:33.030 } 00:34:33.030 11:14:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:34:33.030 11:14:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:34:33.030 11:14:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:33.030 11:14:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:33.030 11:14:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:33.030 11:14:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:34:33.030 11:14:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:34:33.030 11:14:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:34:33.030 11:14:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:33.030 11:14:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:34:33.030 11:14:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:33.030 11:14:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:34:33.030 11:14:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:33.030 11:14:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:34:33.030 11:14:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:34:33.030 11:14:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 2064142 00:34:33.030 11:14:52 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:34:33.030 11:14:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@514 -- # nvmfcleanup 00:34:33.030 11:14:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:34:33.030 11:14:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:33.030 11:14:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:34:33.030 11:14:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:33.030 11:14:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:33.030 rmmod nvme_tcp 00:34:33.030 rmmod nvme_fabrics 00:34:33.030 rmmod nvme_keyring 00:34:33.030 11:14:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:33.030 11:14:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:34:33.030 11:14:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:34:33.030 11:14:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@515 -- # '[' -n 2064099 ']' 00:34:33.030 11:14:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # killprocess 2064099 00:34:33.030 11:14:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 2064099 ']' 00:34:33.030 11:14:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 2064099 00:34:33.030 11:14:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:34:33.030 11:14:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:33.030 11:14:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2064099 00:34:33.291 11:14:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:34:33.291 11:14:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:34:33.291 11:14:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2064099' 00:34:33.291 killing process with pid 2064099 00:34:33.291 11:14:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 2064099 00:34:33.291 11:14:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 2064099 00:34:33.291 11:14:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:34:33.291 11:14:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:34:33.291 11:14:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:34:33.291 11:14:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:34:33.291 11:14:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # iptables-save 00:34:33.291 11:14:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:34:33.291 11:14:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # iptables-restore 00:34:33.291 11:14:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:33.291 11:14:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:33.291 11:14:53 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:33.291 11:14:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:33.291 11:14:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:35.834 11:14:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:35.834 00:34:35.834 real 0m21.337s 00:34:35.834 user 0m25.503s 00:34:35.834 sys 0m7.135s 00:34:35.834 11:14:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:35.834 11:14:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:35.834 ************************************ 00:34:35.834 END TEST nvmf_host_discovery 00:34:35.834 ************************************ 00:34:35.834 11:14:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:34:35.834 11:14:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:34:35.834 11:14:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:35.834 11:14:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.834 ************************************ 00:34:35.834 START TEST nvmf_host_multipath_status 00:34:35.834 ************************************ 00:34:35.834 11:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:34:35.834 * Looking for test storage... 
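Each suite is driven through run_test, which produces the START/END banners and the real/user/sys summary seen above for nvmf_host_discovery before handing off to the next script. A hedged stand-in for the wrapper pattern (the real helper lives in autotest_common.sh and carries extra argument checks; this body is illustrative only):

    run_test() {                 # $1 = suite name, rest = script and args
        local name=$1; shift
        echo "START TEST $name"
        time "$@"                # run the suite, timing wall/user/sys
        echo "END TEST $name"
    }
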
00:34:35.834 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:35.834 11:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:35.834 11:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lcov --version 00:34:35.834 11:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:35.834 11:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:35.834 11:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:35.834 11:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:35.834 11:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:35.834 11:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:34:35.834 11:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:34:35.834 11:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:34:35.834 11:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:34:35.834 11:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:34:35.834 11:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:34:35.834 11:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:34:35.834 11:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:35.834 11:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:34:35.834 11:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:34:35.834 11:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:35.834 11:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:35.834 11:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:34:35.834 11:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:34:35.834 11:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:35.834 11:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:34:35.834 11:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:34:35.834 11:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:34:35.834 11:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:34:35.834 11:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:35.834 11:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:34:35.835 11:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:34:35.835 11:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:35.835 11:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:35.835 11:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:34:35.835 11:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:35.835 11:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:35.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:35.835 --rc genhtml_branch_coverage=1 00:34:35.835 --rc genhtml_function_coverage=1 00:34:35.835 --rc genhtml_legend=1 00:34:35.835 --rc geninfo_all_blocks=1 00:34:35.835 --rc geninfo_unexecuted_blocks=1 00:34:35.835 00:34:35.835 ' 00:34:35.835 11:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:35.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:35.835 --rc genhtml_branch_coverage=1 00:34:35.835 --rc genhtml_function_coverage=1 00:34:35.835 --rc genhtml_legend=1 00:34:35.835 --rc geninfo_all_blocks=1 00:34:35.835 --rc geninfo_unexecuted_blocks=1 00:34:35.835 00:34:35.835 ' 00:34:35.835 11:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:35.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:35.835 --rc genhtml_branch_coverage=1 00:34:35.835 --rc genhtml_function_coverage=1 00:34:35.835 --rc genhtml_legend=1 00:34:35.835 --rc geninfo_all_blocks=1 00:34:35.835 --rc geninfo_unexecuted_blocks=1 00:34:35.835 00:34:35.835 ' 00:34:35.835 11:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:35.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:35.835 --rc genhtml_branch_coverage=1 00:34:35.835 --rc genhtml_function_coverage=1 00:34:35.835 --rc genhtml_legend=1 00:34:35.835 --rc geninfo_all_blocks=1 00:34:35.835 --rc geninfo_unexecuted_blocks=1 00:34:35.835 00:34:35.835 ' 00:34:35.835 11:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
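The xtrace above walks scripts/common.sh's component-wise version compare: both versions are split on '.', '-' and ':', missing components default to zero, and the pieces are compared left to right, so lt 1.15 2 succeeds because 1 < 2 at the first component. A condensed sketch of the same idea (not the script's exact code):

    lt() {
        local -a v1 v2
        IFS=.-: read -ra v1 <<< "$1"
        IFS=.-: read -ra v2 <<< "$2"
        local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < max; i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1        # equal versions are not "less than"
    }
    lt 1.15 2           # succeeds: 1 < 2 at the first component
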
00:34:35.835 11:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:34:35.835 11:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:35.835 11:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:35.835 11:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:35.835 11:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:35.835 11:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:35.835 11:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:35.835 11:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:35.835 11:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:35.835 11:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:35.835 11:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:35.835 11:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:35.835 11:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:35.835 11:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:35.835 11:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:35.835 11:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:35.835 11:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:35.835 11:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:35.835 11:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:34:35.835 11:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:35.835 11:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:35.835 11:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:35.835 11:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:35.835 11:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:35.835 11:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:35.835 11:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:34:35.835 11:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:35.835 11:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:34:35.835 11:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:35.835 11:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:35.835 11:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:35.835 11:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:35.835 11:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:35.835 11:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:35.835 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:35.835 11:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:35.835 11:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:35.835 11:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:35.835 11:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:34:35.835 11:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:34:35.835 11:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:35.835 11:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:34:35.835 11:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:34:35.835 11:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:34:35.835 11:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:34:35.835 11:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:34:35.835 11:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:35.835 11:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # prepare_net_devs 00:34:35.835 11:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@436 -- # local -g is_hw=no 00:34:35.835 11:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # remove_spdk_ns 00:34:35.835 11:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:35.835 11:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:35.835 11:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:35.835 11:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:34:35.835 11:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:34:35.835 11:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:34:35.835 11:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:42.419 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:42.419 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:34:42.419 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:42.419 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:42.419 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:42.419 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:42.419 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:42.419 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:34:42.419 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:42.419 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:34:42.419 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:34:42.419 11:15:02 
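The "line 33: [: : integer expression expected" complaint a few entries above is benign but real: common.sh feeds an unset variable to a numeric -eq test, so [ sees an empty string where it needs an integer. The usual defensive fix is to give the expansion a numeric default; VAR below stands in for whatever flag line 33 actually reads, since the log elides the name:

    # Give the numeric test a default so an unset flag compares as 0.
    if [ "${VAR:-0}" -eq 1 ]; then
        echo "flag enabled"
    fi
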
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:34:42.419 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:34:42.419 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:34:42.419 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:34:42.419 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:42.419 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:42.419 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:42.419 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:42.419 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:42.419 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:42.419 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:42.419 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:42.420 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:42.420 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:42.420 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:42.420 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:42.420 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:42.420 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:42.420 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:42.420 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:42.420 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:42.420 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:42.420 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:42.420 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:34:42.420 Found 0000:31:00.0 (0x8086 - 0x159b) 00:34:42.420 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:42.420 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:42.420 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:42.420 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
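The array walk above is gather_supported_nvmf_pci_devs matching known vendor:device pairs; both hits here are Intel E810 ports (0x8086:0x159b, driven by ice). One way to confirm the same silicon outside the harness, using the pair logged above:

    lspci -d 8086:159b                           # list all E810 0x159b functions
    ls /sys/bus/pci/devices/0000:31:00.0/net     # net interface under that port
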
00:34:42.420 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:42.420 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:42.420 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:34:42.420 Found 0000:31:00.1 (0x8086 - 0x159b) 00:34:42.420 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:42.420 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:42.420 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:42.420 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:42.420 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:42.420 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:42.420 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:42.420 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:42.420 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:42.420 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:42.420 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:42.420 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:42.420 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:42.420 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:42.420 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:42.420 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:34:42.420 Found net devices under 0000:31:00.0: cvl_0_0 00:34:42.420 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:42.420 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:42.420 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:42.420 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:42.420 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:42.420 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:42.420 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:42.420 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:42.420 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: 
cvl_0_1' 00:34:42.420 Found net devices under 0000:31:00.1: cvl_0_1 00:34:42.420 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:42.420 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:34:42.420 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # is_hw=yes 00:34:42.420 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:34:42.420 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:34:42.420 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:34:42.420 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:42.420 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:42.420 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:42.420 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:42.420 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:42.420 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:42.420 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:42.420 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:42.420 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:42.420 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:42.420 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:42.420 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:42.420 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:42.420 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:42.420 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:42.681 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:42.681 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:42.681 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:42.681 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:42.681 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:42.681 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:42.681 11:15:02 
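With two physical ports found, nvmf_tcp_init builds the split topology traced above: the target port cvl_0_0 moves into the cvl_0_0_ns_spdk namespace as 10.0.0.2 while the initiator port cvl_0_1 stays in the root namespace as 10.0.0.1. A condensed recap of the same commands, in the order they run:

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
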
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:42.942 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:42.942 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:42.942 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.690 ms 00:34:42.942 00:34:42.942 --- 10.0.0.2 ping statistics --- 00:34:42.942 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:42.942 rtt min/avg/max/mdev = 0.690/0.690/0.690/0.000 ms 00:34:42.942 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:42.942 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:42.942 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.299 ms 00:34:42.942 00:34:42.942 --- 10.0.0.1 ping statistics --- 00:34:42.942 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:42.942 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:34:42.942 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:42.942 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # return 0 00:34:42.942 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:34:42.942 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:42.942 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:34:42.942 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:34:42.942 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:42.942 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:34:42.942 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:34:42.942 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:34:42.942 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:34:42.942 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:42.942 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:42.942 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # nvmfpid=2070692 00:34:42.942 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # waitforlisten 2070692 00:34:42.942 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:34:42.942 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 2070692 ']' 00:34:42.942 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:42.942 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:42.942 11:15:02 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:42.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:42.942 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:42.942 11:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:42.942 [2024-10-09 11:15:02.805600] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:34:42.942 [2024-10-09 11:15:02.805667] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:43.202 [2024-10-09 11:15:02.947032] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:34:43.202 [2024-10-09 11:15:02.979578] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:43.202 [2024-10-09 11:15:03.001884] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:43.202 [2024-10-09 11:15:03.001924] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:43.202 [2024-10-09 11:15:03.001933] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:43.202 [2024-10-09 11:15:03.001940] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:43.202 [2024-10-09 11:15:03.001946] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
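nvmfappstart above launches nvmf_tgt inside the target namespace with core mask 0x3 (hence the two reactor notices that follow) and blocks in waitforlisten until the RPC socket answers. A minimal sketch of the same launch, assuming the build-tree layout from the log and using a trivial RPC as the readiness probe:

    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!
    # Poll until the app's RPC socket accepts a trivial call.
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
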
00:34:43.202 [2024-10-09 11:15:03.003427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:43.202 [2024-10-09 11:15:03.003428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:43.771 11:15:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:43.772 11:15:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:34:43.772 11:15:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:34:43.772 11:15:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:43.772 11:15:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:43.772 11:15:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:43.772 11:15:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2070692 00:34:43.772 11:15:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:44.031 [2024-10-09 11:15:03.812205] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:44.032 11:15:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:34:44.032 Malloc0 00:34:44.032 11:15:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:34:44.292 11:15:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:44.552 11:15:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:44.552 [2024-10-09 11:15:04.487557] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:44.552 11:15:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:34:44.813 [2024-10-09 11:15:04.655602] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:44.813 11:15:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:34:44.813 11:15:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2071122 00:34:44.813 11:15:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:34:44.813 11:15:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2071122 
/var/tmp/bdevperf.sock 00:34:44.813 11:15:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 2071122 ']' 00:34:44.813 11:15:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:44.813 11:15:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:44.813 11:15:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:44.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:34:44.813 11:15:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:44.813 11:15:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:45.752 11:15:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:45.752 11:15:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:34:45.752 11:15:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:34:45.752 11:15:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:34:46.023 Nvme0n1 00:34:46.023 11:15:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:34:46.596 Nvme0n1 00:34:46.596 11:15:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:34:46.596 11:15:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:34:48.505 11:15:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:34:48.505 11:15:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:34:48.765 11:15:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:49.036 11:15:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:34:49.981 11:15:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:34:49.981 11:15:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:49.981 11:15:09 
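Every check_status round below reduces to the same probe: ask bdevperf's RPC for the io_paths of Nvme0n1 and jq-select one listener's boolean. A sketch of the probe for the 4420 path's "current" flag (field names as printed by this build; swapping the trsvcid or the trailing field covers all six checks per round):

    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths | \
        jq -r '.poll_groups[].io_paths[] | select(.transport.trsvcid=="4420").current'
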
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:49.981 11:15:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:50.241 11:15:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:50.241 11:15:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:50.241 11:15:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:50.241 11:15:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:50.501 11:15:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:50.501 11:15:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:50.501 11:15:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:50.501 11:15:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:50.501 11:15:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:50.501 11:15:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:50.501 11:15:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:50.502 11:15:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:50.762 11:15:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:50.762 11:15:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:50.762 11:15:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:50.762 11:15:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:51.023 11:15:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:51.023 11:15:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:51.023 11:15:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:51.023 11:15:10 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:51.023 11:15:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:51.023 11:15:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:34:51.023 11:15:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:51.283 11:15:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:51.543 11:15:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:34:52.483 11:15:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:34:52.483 11:15:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:52.483 11:15:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:52.483 11:15:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:52.743 11:15:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:52.743 11:15:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:52.743 11:15:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:52.743 11:15:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:52.743 11:15:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:52.743 11:15:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:52.743 11:15:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:52.743 11:15:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:53.004 11:15:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:53.004 11:15:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:53.004 11:15:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:53.004 11:15:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:53.264 11:15:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:53.264 11:15:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:53.264 11:15:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:53.264 11:15:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:53.524 11:15:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:53.524 11:15:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:53.524 11:15:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:53.524 11:15:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:53.524 11:15:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:53.524 11:15:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:34:53.524 11:15:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:53.784 11:15:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:34:53.784 11:15:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:34:55.167 11:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:34:55.167 11:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:55.167 11:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:55.167 11:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:55.167 11:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:55.167 11:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:55.167 11:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:55.167 11:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:55.167 11:15:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:55.167 11:15:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:55.167 11:15:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:55.167 11:15:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:55.427 11:15:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:55.427 11:15:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:55.427 11:15:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:55.427 11:15:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:55.687 11:15:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:55.687 11:15:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:55.687 11:15:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:55.687 11:15:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:55.947 11:15:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:55.948 11:15:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:55.948 11:15:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:55.948 11:15:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:55.948 11:15:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:55.948 11:15:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:34:55.948 11:15:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
non_optimized 00:34:56.207 11:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:34:56.467 11:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:34:57.407 11:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:34:57.407 11:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:57.407 11:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:57.407 11:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:57.668 11:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:57.668 11:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:57.668 11:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:57.668 11:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:57.668 11:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:57.668 11:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:57.668 11:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:57.668 11:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:57.928 11:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:57.928 11:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:57.928 11:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:57.928 11:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:58.188 11:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:58.188 11:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:58.188 11:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
00:34:57.407 11:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false
00:34:57.407 11:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:34:57.407 11:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:34:57.407 11:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:34:57.668 11:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:34:57.668 11:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:34:57.668 11:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:34:57.668 11:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:34:57.668 11:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:34:57.668 11:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:34:57.668 11:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:34:57.668 11:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:34:57.928 11:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:34:57.928 11:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:34:57.928 11:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:34:57.928 11:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:34:58.188 11:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:34:58.188 11:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:34:58.188 11:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:34:58.188 11:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:34:58.448 11:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:34:58.448 11:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:34:58.448 11:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:34:58.448 11:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:34:58.448 11:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
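check_status (@106 above, and at every later step) bundles six port_status assertions in the order visible in the trace, @68 through @73: current, connected and accessible for port 4420, then the same for 4421. A faithful sketch in terms of the port_status helper:

  check_status() {
    # Expected flags: current_4420 current_4421 connected_4420 connected_4421
    #                 accessible_4420 accessible_4421
    port_status 4420 current "$1" \
      && port_status 4421 current "$2" \
      && port_status 4420 connected "$3" \
      && port_status 4421 connected "$4" \
      && port_status 4420 accessible "$5" \
      && port_status 4421 accessible "$6"
  }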
00:34:58.448 11:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible
00:34:58.448 11:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
00:34:58.709 11:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:34:58.970 11:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1
00:34:59.910 11:15:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false
00:34:59.910 11:15:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:34:59.910 11:15:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:34:59.910 11:15:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:34:59.910 11:15:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:34:59.910 11:15:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:34:59.910 11:15:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:34:59.910 11:15:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:35:00.170 11:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:35:00.170 11:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:35:00.170 11:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:00.170 11:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:35:00.431 11:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:00.431 11:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:35:00.431 11:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:00.431 11:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:35:00.693 11:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:00.693 11:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false
00:35:00.693 11:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:00.693 11:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:35:00.693 11:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:35:00.693 11:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:35:00.693 11:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:00.693 11:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:35:00.955 11:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:35:00.955 11:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized
00:35:00.955 11:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
00:35:01.215 11:15:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:35:01.215 11:15:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1
00:35:02.600 11:15:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true
00:35:02.600 11:15:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:35:02.600 11:15:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:02.600 11:15:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:35:02.600 11:15:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:35:02.600 11:15:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:35:02.600 11:15:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:02.600 11:15:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:35:02.600 11:15:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:02.600 11:15:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:35:02.600 11:15:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:02.600 11:15:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:35:02.860 11:15:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:02.860 11:15:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:35:02.861 11:15:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:02.861 11:15:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:35:03.121 11:15:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:03.121 11:15:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false
00:35:03.121 11:15:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:03.121 11:15:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:35:03.121 11:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:35:03.121 11:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:35:03.121 11:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:03.121 11:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
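For reference, the jq filters in this trace operate on bdev_nvme_get_io_paths output shaped roughly as below; this snippet is hand-written from the filter path, abbreviated to a single path, and is not captured output from this run:

  {
    "poll_groups": [
      {
        "io_paths": [
          {
            "transport": { "trtype": "TCP", "traddr": "10.0.0.2", "trsvcid": "4421" },
            "current": true,
            "connected": true,
            "accessible": true
          }
        ]
      }
    ]
  }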
00:35:03.382 11:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:03.382 11:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active
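The @116 call above is the pivot of the test's second half: it moves bdev Nvme0n1 from the default active_passive multipath policy to active_active, after which every optimized path may carry I/O concurrently, so the later check_status steps can legitimately expect current == true on both 4420 and 4421 at once. Replayed by hand against a bdevperf RPC socket it is the single call (sketch):

  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active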
00:35:03.643 11:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized
00:35:03.643 11:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized
00:35:03.905 11:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:35:03.905 11:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1
00:35:04.845 11:15:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true
00:35:04.845 11:15:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:35:05.106 11:15:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:05.106 11:15:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:35:05.106 11:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:05.106 11:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:35:05.106 11:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:05.106 11:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:35:05.366 11:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:05.366 11:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:35:05.366 11:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:05.366 11:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:35:05.627 11:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:05.627 11:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:35:05.627 11:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:35:05.627 11:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:05.627 11:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:05.627 11:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:35:05.627 11:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:05.627 11:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:35:05.888 11:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:05.888 11:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:35:05.888 11:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:05.888 11:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:35:06.147 11:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:06.147 11:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized
00:35:06.147 11:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:35:06.147 11:15:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:35:06.407 11:15:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1
00:35:07.347 11:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true
00:35:07.347 11:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:35:07.347 11:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:07.347 11:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:35:07.607 11:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:35:07.607 11:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:35:07.607 11:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:07.607 11:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:35:07.866 11:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:07.866 11:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:35:07.866 11:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:07.866 11:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:35:07.866 11:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:07.866 11:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:35:07.866 11:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:07.866 11:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:35:08.126 11:15:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:08.126 11:15:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:35:08.126 11:15:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:08.126 11:15:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:35:08.385 11:15:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:08.385 11:15:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:35:08.385 11:15:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:08.385 11:15:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:35:08.644 11:15:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:08.644 11:15:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized
00:35:08.644 11:15:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:35:08.644 11:15:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized
00:35:08.903 11:15:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1
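The fixed sleep 1 between each set_ANA_state and the following check_status gives the initiator time to read the updated ANA log page before the assertions run. A more patient harness could poll for convergence instead of sleeping a fixed second; an illustrative variant (not part of the test) built on the check_status sketch above:

  wait_for_status() {
    local deadline=$((SECONDS + 10))
    # Re-check until the expected per-port flags are visible or 10 s pass.
    until check_status "$@"; do
      ((SECONDS < deadline)) || return 1
      sleep 0.2
    done
  }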
00:35:09.842 11:15:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true
00:35:09.842 11:15:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:35:09.842 11:15:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:09.842 11:15:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:35:10.102 11:15:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:10.102 11:15:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:35:10.102 11:15:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:10.102 11:15:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:35:10.363 11:15:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:10.363 11:15:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:35:10.363 11:15:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:10.363 11:15:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:35:10.363 11:15:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:10.363 11:15:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:35:10.363 11:15:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:10.363 11:15:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:35:10.622 11:15:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:10.622 11:15:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:35:10.622 11:15:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:10.622 11:15:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:35:10.883 11:15:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:10.883 11:15:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:35:10.883 11:15:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:10.883 11:15:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:35:10.883 11:15:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:10.883 11:15:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible
00:35:10.883 11:15:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:35:11.142 11:15:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:35:11.403 11:15:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1
00:35:12.343 11:15:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false
00:35:12.343 11:15:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:35:12.343 11:15:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:12.343 11:15:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:35:12.602 11:15:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:12.602 11:15:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:35:12.602 11:15:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:35:12.602 11:15:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:12.602 11:15:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:35:12.602 11:15:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:35:12.602 11:15:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:12.602 11:15:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:35:12.861 11:15:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:12.861 11:15:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:35:12.861 11:15:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:12.861 11:15:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:35:13.121 11:15:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:13.121 11:15:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:35:13.121 11:15:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:13.121 11:15:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:35:13.121 11:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:35:13.121 11:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:35:13.121 11:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:35:13.121 11:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:35:13.381 11:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
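The teardown that follows is the shared killprocess helper from common/autotest_common.sh reaping the bdevperf process (pid 2071122): refuse an empty pid, confirm the process is alive, look up its command name so a sudo wrapper is never signalled directly, then kill and wait so the shell collects the exit status. A condensed reconstruction of the traced logic:

  killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1            # @950: refuse an empty pid
    kill -0 "$pid"                       # @954: fails if it is already gone
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")   # @955/@956 (Linux branch)
    if [ "$process_name" != sudo ]; then # @960: never SIGTERM sudo itself
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                        # @974: reap and propagate status
    fi
  }

wait only succeeds here because bdevperf was started by this same shell, which is also why bdevperf's final JSON summary lands in the log right after the kill.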
00:35:13.381 11:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2071122
00:35:13.381 11:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 2071122 ']'
00:35:13.381 11:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 2071122
00:35:13.381 11:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname
00:35:13.381 11:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:35:13.381 11:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2071122
00:35:13.381 11:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2
00:35:13.381 11:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
00:35:13.381 11:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2071122'
killing process with pid 2071122
00:35:13.381 11:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 2071122
00:35:13.381 11:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 2071122
00:35:13.381 {
00:35:13.381 "results": [
00:35:13.381 {
00:35:13.381 "job": "Nvme0n1",
00:35:13.381 "core_mask": "0x4",
00:35:13.381 "workload": "verify",
00:35:13.381 "status": "terminated",
00:35:13.381 "verify_range": {
00:35:13.381 "start": 0,
00:35:13.381 "length": 16384
00:35:13.381 },
00:35:13.381 "queue_depth": 128,
00:35:13.381 "io_size": 4096,
00:35:13.381 "runtime": 26.808438,
00:35:13.381 "iops": 10792.64670325067,
00:35:13.381 "mibps": 42.15877618457293,
00:35:13.381 "io_failed": 0,
00:35:13.381 "io_timeout": 0,
00:35:13.381 "avg_latency_us": 11841.043920603948,
00:35:13.381 "min_latency_us": 251.46675576344805,
00:35:13.381 "max_latency_us": 3012948.0788506516
00:35:13.381 }
00:35:13.381 ],
00:35:13.381 "core_count": 1
00:35:13.381 }
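The JSON block above is bdevperf's final per-job summary, flushed when the process is reaped: a 26.8 s verify run at roughly 10.8k IOPS with 4096-byte I/O, zero failed I/O, and a max latency near 3.0 s that reflects the windows in which both paths were inaccessible. Saved to a file, the headline numbers can be pulled out with jq (illustrative; results.json is a hypothetical filename):

  jq -r '.results[] | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, avg \(.avg_latency_us) us"' results.json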
00:35:13.643 11:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2071122
00:35:13.643 11:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
[2024-10-09 11:15:04.702425] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization...
[2024-10-09 11:15:04.702491] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2071122 ]
[2024-10-09 11:15:04.832653] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation.
[2024-10-09 11:15:04.855501] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-10-09 11:15:04.871615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
Running I/O for 90 seconds...
00:35:13.644 9542.00 IOPS, 37.27 MiB/s [2024-10-09T09:15:33.646Z] 9552.00 IOPS, 37.31 MiB/s [2024-10-09T09:15:33.646Z] 9593.00 IOPS, 37.47 MiB/s [2024-10-09T09:15:33.646Z] 9606.50 IOPS, 37.53 MiB/s [2024-10-09T09:15:33.646Z] 9829.40 IOPS, 38.40 MiB/s [2024-10-09T09:15:33.646Z] 10324.17 IOPS, 40.33 MiB/s [2024-10-09T09:15:33.646Z] 10721.00 IOPS, 41.88 MiB/s [2024-10-09T09:15:33.646Z] 10692.62 IOPS, 41.77 MiB/s [2024-10-09T09:15:33.646Z] 10570.22 IOPS, 41.29 MiB/s [2024-10-09T09:15:33.646Z] 10483.00 IOPS, 40.95 MiB/s [2024-10-09T09:15:33.646Z] 10410.27 IOPS, 40.67 MiB/s [2024-10-09T09:15:33.646Z] [2024-10-09 11:15:18.533921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:74112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.644 [2024-10-09 11:15:18.533953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:35:13.644 [2024-10-09 11:15:18.533984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:74120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.644 [2024-10-09 11:15:18.533991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:35:13.644 [2024-10-09 11:15:18.534002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:74128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.644 [2024-10-09 11:15:18.534007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:35:13.644 [2024-10-09 11:15:18.534018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:74136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.644 [2024-10-09 11:15:18.534023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:35:13.644 [2024-10-09 11:15:18.534033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:74144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.644 [2024-10-09 11:15:18.534038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:13.644 [2024-10-09 11:15:18.534049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:74152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.644 [2024-10-09 11:15:18.534054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:13.644 [2024-10-09 11:15:18.534064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:74160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.644 [2024-10-09 11:15:18.534069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:35:13.644 [2024-10-09 11:15:18.534079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:74168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.644 [2024-10-09 11:15:18.534085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:35:13.644 [2024-10-09 11:15:18.534095] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:74176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.644 [2024-10-09 11:15:18.534100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:35:13.644 [2024-10-09 11:15:18.534116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:74184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.644 [2024-10-09 11:15:18.534121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:13.644 [2024-10-09 11:15:18.534132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:74192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.644 [2024-10-09 11:15:18.534137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:13.644 [2024-10-09 11:15:18.534147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:74200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.644 [2024-10-09 11:15:18.534152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:35:13.644 [2024-10-09 11:15:18.534162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:74208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.644 [2024-10-09 11:15:18.534167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:13.644 [2024-10-09 11:15:18.534178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:74216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.644 [2024-10-09 11:15:18.534183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:13.644 [2024-10-09 11:15:18.534194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:74224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.644 [2024-10-09 11:15:18.534199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:13.644 [2024-10-09 11:15:18.534209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:74232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.644 [2024-10-09 11:15:18.534215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:13.644 [2024-10-09 11:15:18.534375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:74240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.644 [2024-10-09 11:15:18.534384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:13.644 [2024-10-09 11:15:18.534396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:74248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.644 [2024-10-09 11:15:18.534401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:35:13.644 [2024-10-09 11:15:18.534413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:74256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.644 [2024-10-09 11:15:18.534418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:13.644 [2024-10-09 11:15:18.534430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:74264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.644 [2024-10-09 11:15:18.534435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:13.644 [2024-10-09 11:15:18.534446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:74272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.644 [2024-10-09 11:15:18.534451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:13.644 [2024-10-09 11:15:18.534462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:74280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.644 [2024-10-09 11:15:18.534475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:35:13.644 [2024-10-09 11:15:18.534486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:74288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.644 [2024-10-09 11:15:18.534491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:35:13.644 [2024-10-09 11:15:18.534503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:74296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.644 [2024-10-09 11:15:18.534508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:35:13.644 [2024-10-09 11:15:18.534519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:74304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.644 [2024-10-09 11:15:18.534524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:35:13.644 [2024-10-09 11:15:18.534535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:74312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.644 [2024-10-09 11:15:18.534540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:35:13.644 [2024-10-09 11:15:18.534551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:74320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.644 [2024-10-09 11:15:18.534557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:35:13.644 [2024-10-09 11:15:18.534568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:74328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.644 [2024-10-09 11:15:18.534573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:35:13.644 [2024-10-09 11:15:18.534585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:74336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.644 [2024-10-09 11:15:18.534590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:35:13.644 [2024-10-09 11:15:18.534601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:74344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.644 [2024-10-09 11:15:18.534606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:35:13.644 [2024-10-09 11:15:18.534617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:74352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.644 [2024-10-09 11:15:18.534622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:35:13.644 [2024-10-09 11:15:18.534634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:74360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.644 [2024-10-09 11:15:18.534639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:35:13.644 [2024-10-09 11:15:18.534679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:74368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.644 [2024-10-09 11:15:18.534685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:35:13.644 [2024-10-09 11:15:18.534698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:74376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.644 [2024-10-09 11:15:18.534705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:35:13.644 [2024-10-09 11:15:18.534717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:74384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.644 [2024-10-09 11:15:18.534722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:35:13.644 [2024-10-09 11:15:18.534734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:74392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.644 [2024-10-09 11:15:18.534740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:35:13.644 [2024-10-09 11:15:18.534752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:74400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.644 [2024-10-09 11:15:18.534757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:35:13.644 [2024-10-09 11:15:18.534769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:74408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.644 [2024-10-09 11:15:18.534774] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:35:13.644 [2024-10-09 11:15:18.534786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:74416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.645 [2024-10-09 11:15:18.534791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:35:13.645 [2024-10-09 11:15:18.534803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:74424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.645 [2024-10-09 11:15:18.534808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:35:13.645 [2024-10-09 11:15:18.534852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:74432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.645 [2024-10-09 11:15:18.534859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:35:13.645 [2024-10-09 11:15:18.534871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:74440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.645 [2024-10-09 11:15:18.534877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:35:13.645 [2024-10-09 11:15:18.534889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:74448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.645 [2024-10-09 11:15:18.534894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:35:13.645 [2024-10-09 11:15:18.534907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:74456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.645 [2024-10-09 11:15:18.534912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:35:13.645 [2024-10-09 11:15:18.534925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:74464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.645 [2024-10-09 11:15:18.534930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:13.645 [2024-10-09 11:15:18.534942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:74472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.645 [2024-10-09 11:15:18.534948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:13.645 [2024-10-09 11:15:18.534962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:74480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.645 [2024-10-09 11:15:18.534967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:35:13.645 [2024-10-09 11:15:18.534979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:74488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:35:13.645 [2024-10-09 11:15:18.534985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:35:13.645 [2024-10-09 11:15:18.535113 .. 11:15:18.537851] nvme_qpair.c: [78 near-identical nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pairs elided: WRITE sqid:1 lba:74496-74880 len:8 and READ sqid:1 lba:73864-74088 len:8, each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 sqhd:0020-006d p:0 m:0 dnr:0]
00:35:13.647 [2024-10-09 11:15:18.537866] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:74096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.647 [2024-10-09 11:15:18.537871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:35:13.647 [2024-10-09 11:15:18.537887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:74104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.647 [2024-10-09 11:15:18.537892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:13.647 10326.75 IOPS, 40.34 MiB/s [2024-10-09T09:15:33.649Z] 9532.38 IOPS, 37.24 MiB/s [2024-10-09T09:15:33.649Z] 8851.50 IOPS, 34.58 MiB/s [2024-10-09T09:15:33.649Z] 8282.80 IOPS, 32.35 MiB/s [2024-10-09T09:15:33.649Z] 8571.50 IOPS, 33.48 MiB/s [2024-10-09T09:15:33.649Z] 8840.35 IOPS, 34.53 MiB/s [2024-10-09T09:15:33.649Z] 9263.83 IOPS, 36.19 MiB/s [2024-10-09T09:15:33.649Z] 9668.89 IOPS, 37.77 MiB/s [2024-10-09T09:15:33.649Z] 9952.60 IOPS, 38.88 MiB/s [2024-10-09T09:15:33.649Z] 10093.95 IOPS, 39.43 MiB/s [2024-10-09T09:15:33.649Z] 10214.05 IOPS, 39.90 MiB/s [2024-10-09T09:15:33.649Z] 10469.61 IOPS, 40.90 MiB/s [2024-10-09T09:15:33.649Z] 10739.50 IOPS, 41.95 MiB/s [2024-10-09T09:15:33.649Z] [2024-10-09 11:15:31.176816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:49000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.647 [2024-10-09 11:15:31.176851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:13.647 [2024-10-09 11:15:31.176882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:49032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.647 [2024-10-09 11:15:31.176889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.647 [2024-10-09 11:15:31.177108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:49128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.647 [2024-10-09 11:15:31.177116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:13.647 [2024-10-09 11:15:31.177128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:49056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.647 [2024-10-09 11:15:31.177133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:13.647 [2024-10-09 11:15:31.177144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:49136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.647 [2024-10-09 11:15:31.177149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:13.647 [2024-10-09 11:15:31.177160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:49152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.647 [2024-10-09 11:15:31.177165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:35:13.647 [2024-10-09 
11:15:31.177176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:49168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.647 [2024-10-09 11:15:31.177182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:35:13.647 [2024-10-09 11:15:31.177988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:49104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.647 [2024-10-09 11:15:31.178001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:35:13.647 10875.48 IOPS, 42.48 MiB/s [2024-10-09T09:15:33.649Z] 10825.96 IOPS, 42.29 MiB/s [2024-10-09T09:15:33.649Z] Received shutdown signal, test time was about 26.809047 seconds 00:35:13.647 00:35:13.647 Latency(us) 00:35:13.647 [2024-10-09T09:15:33.649Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:13.647 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:35:13.647 Verification LBA range: start 0x0 length 0x4000 00:35:13.647 Nvme0n1 : 26.81 10792.65 42.16 0.00 0.00 11841.04 251.47 3012948.08 00:35:13.647 [2024-10-09T09:15:33.649Z] =================================================================================================================== 00:35:13.647 [2024-10-09T09:15:33.649Z] Total : 10792.65 42.16 0.00 0.00 11841.04 251.47 3012948.08 00:35:13.647 11:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:13.647 11:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:35:13.647 11:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:35:13.647 11:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:35:13.647 11:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@514 -- # nvmfcleanup 00:35:13.647 11:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:35:13.908 11:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:13.908 11:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:35:13.908 11:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:13.908 11:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:13.908 rmmod nvme_tcp 00:35:13.908 rmmod nvme_fabrics 00:35:13.908 rmmod nvme_keyring 00:35:13.908 11:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:13.908 11:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:35:13.908 11:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:35:13.908 11:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@515 -- # '[' -n 2070692 ']' 00:35:13.908 11:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # killprocess 2070692 00:35:13.908 11:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
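The (03/02) code that floods the verification phase above is SPDK's "(sct/sc)" rendering of the NVMe completion status: status code type 0x3 is Path Related Status, and status code 0x02 within it is Asymmetric Access Inaccessible, which is exactly what a multipath-status test expects while one ANA path is held inaccessible. A minimal sketch of that decode, plus a sanity check of the summary table's arithmetic (values copied from the table above; the helper itself is illustrative, not part of the test suite):

#!/usr/bin/env bash
# Decode the "(sct/sc)" pair printed by spdk_nvme_print_completion.
decode_status() {
  case "$1/$2" in
    03/02) echo "Path Related Status / Asymmetric Access Inaccessible (I/O blocked by ANA state)" ;;
    *)     echo "other status type $1, code $2" ;;
  esac
}
decode_status 03 02

# Cross-check the latency table: IOPS x 4096-byte I/O size should match the MiB/s column.
awk 'BEGIN { printf "%.2f MiB/s\n", 10792.65 * 4096 / (1024 * 1024) }'   # prints 42.16
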
common/autotest_common.sh@950 -- # '[' -z 2070692 ']' 00:35:13.908 11:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 2070692 00:35:13.908 11:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:35:13.908 11:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:13.908 11:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2070692 00:35:13.908 11:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:13.908 11:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:13.908 11:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2070692' 00:35:13.908 killing process with pid 2070692 00:35:13.908 11:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 2070692 00:35:13.908 11:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 2070692 00:35:13.908 11:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:35:13.908 11:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:35:13.908 11:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:35:13.908 11:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:35:13.908 11:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # iptables-restore 00:35:13.908 11:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:35:13.908 11:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # iptables-save 00:35:13.908 11:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:13.908 11:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:13.908 11:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:13.908 11:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:13.908 11:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:16.490 11:15:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:16.490 00:35:16.490 real 0m40.645s 00:35:16.490 user 1m45.475s 00:35:16.490 sys 0m11.207s 00:35:16.490 11:15:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:16.490 11:15:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:35:16.490 ************************************ 00:35:16.490 END TEST nvmf_host_multipath_status 00:35:16.490 ************************************ 00:35:16.490 11:15:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:35:16.490 11:15:36 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:35:16.490 11:15:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:16.490 11:15:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:16.490 ************************************ 00:35:16.490 START TEST nvmf_discovery_remove_ifc 00:35:16.490 ************************************ 00:35:16.490 11:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:35:16.490 * Looking for test storage... 00:35:16.490 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:16.490 11:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:16.490 11:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lcov --version 00:35:16.490 11:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:16.490 11:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:16.490 11:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:16.490 11:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:16.490 11:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:16.490 11:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:35:16.490 11:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:35:16.490 11:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:35:16.490 11:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:35:16.490 11:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:35:16.490 11:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:35:16.490 11:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:35:16.490 11:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:16.490 11:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:35:16.490 11:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:35:16.490 11:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:16.490 11:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:16.490 11:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:35:16.490 11:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:35:16.490 11:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:16.490 11:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:35:16.490 11:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:35:16.490 11:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:35:16.490 11:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:35:16.490 11:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:16.490 11:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:35:16.490 11:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:35:16.490 11:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:16.490 11:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:16.490 11:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:35:16.490 11:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:16.490 11:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:16.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:16.490 --rc genhtml_branch_coverage=1 00:35:16.490 --rc genhtml_function_coverage=1 00:35:16.490 --rc genhtml_legend=1 00:35:16.490 --rc geninfo_all_blocks=1 00:35:16.490 --rc geninfo_unexecuted_blocks=1 00:35:16.490 00:35:16.490 ' 00:35:16.490 11:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:16.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:16.490 --rc genhtml_branch_coverage=1 00:35:16.490 --rc genhtml_function_coverage=1 00:35:16.490 --rc genhtml_legend=1 00:35:16.490 --rc geninfo_all_blocks=1 00:35:16.490 --rc geninfo_unexecuted_blocks=1 00:35:16.490 00:35:16.490 ' 00:35:16.490 11:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:35:16.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:16.490 --rc genhtml_branch_coverage=1 00:35:16.490 --rc genhtml_function_coverage=1 00:35:16.490 --rc genhtml_legend=1 00:35:16.490 --rc geninfo_all_blocks=1 00:35:16.490 --rc geninfo_unexecuted_blocks=1 00:35:16.490 00:35:16.490 ' 00:35:16.490 11:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:16.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:16.490 --rc genhtml_branch_coverage=1 00:35:16.490 --rc genhtml_function_coverage=1 00:35:16.490 --rc genhtml_legend=1 00:35:16.490 --rc geninfo_all_blocks=1 00:35:16.490 --rc geninfo_unexecuted_blocks=1 00:35:16.490 00:35:16.490 ' 00:35:16.490 11:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:16.490 
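The lt/cmp_versions xtrace just above is scripts/common.sh checking whether the installed lcov predates 2.x: both versions are split on '.', '-' and ':' and compared numerically field by field, with the shorter one padded with zeros. A condensed standalone rendering of the same algorithm (not the verbatim SPDK source; helper name kept for readability):

#!/usr/bin/env bash
# Compare two dotted versions with an operator, the way the traced helper does.
cmp_versions() {
  local IFS='.-:' op=$2
  read -ra ver1 <<< "$1"
  read -ra ver2 <<< "$3"
  local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( v = 0; v < len; v++ )); do
    local a=${ver1[v]:-0} b=${ver2[v]:-0}
    (( a > b )) && { [[ $op == '>' || $op == '>=' ]]; return; }  # first differing field decides
    (( a < b )) && { [[ $op == '<' || $op == '<=' ]]; return; }
  done
  [[ $op == '==' || $op == '<=' || $op == '>=' ]]                # all fields equal
}
cmp_versions 1.15 '<' 2 && echo 'lcov 1.15 predates 2'           # returns 0, as in the trace
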
11:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:35:16.490 11:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:16.490 11:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:16.490 11:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:16.490 11:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:16.490 11:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:16.490 11:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:16.490 11:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:16.490 11:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:16.490 11:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:16.490 11:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:16.490 11:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:16.490 11:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:16.490 11:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:16.490 11:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:16.490 11:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:16.490 11:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:16.490 11:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:16.490 11:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:35:16.490 11:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:16.490 11:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:16.490 11:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:16.491 11:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:16.491 11:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:16.491 11:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:16.491 11:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:35:16.491 11:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:16.491 11:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:35:16.491 11:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:16.491 11:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:16.491 11:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:16.491 11:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:16.491 11:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:16.491 11:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:16.491 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:16.491 11:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:16.491 11:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:16.491 11:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:16.491 11:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:35:16.491 11:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:35:16.491 11:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:35:16.491 11:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:35:16.491 11:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:35:16.491 11:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:35:16.491 11:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:35:16.491 11:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:35:16.491 11:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:16.491 11:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # prepare_net_devs 00:35:16.491 11:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@436 -- # local -g is_hw=no 00:35:16.491 11:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # remove_spdk_ns 00:35:16.491 11:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:16.491 11:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:16.491 11:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:16.491 11:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:35:16.491 11:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:35:16.491 11:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:35:16.491 11:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:24.689 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:24.689 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:35:24.689 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:24.689 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:24.689 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:24.689 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:24.689 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:24.689 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:35:24.689 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:24.689 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:35:24.689 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:35:24.689 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:35:24.689 11:15:43 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:35:24.689 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:35:24.689 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:35:24.689 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:24.689 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:24.689 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:24.689 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:24.689 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:24.689 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:24.689 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:24.689 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:24.689 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:24.689 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:24.689 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:24.689 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:24.689 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:24.689 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:24.689 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:24.689 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:24.689 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:24.689 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:24.689 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:24.689 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:35:24.689 Found 0000:31:00.0 (0x8086 - 0x159b) 00:35:24.689 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:24.689 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:24.689 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:24.689 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:24.689 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:24.689 11:15:43 
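The scan above matches each NIC's PCI vendor:device pair against the e810/x722/mlx tables and then resolves the kernel netdev name through sysfs; that lookup is where the cvl_0_0/cvl_0_1 names used below come from. Reduced to its essentials (PCI address taken from this run; the paths are standard sysfs):

pci=0000:31:00.0
cat /sys/bus/pci/devices/$pci/vendor /sys/bus/pci/devices/$pci/device   # 0x8086 / 0x159b
ls /sys/bus/pci/devices/$pci/net/                                       # netdev bound to the port, e.g. cvl_0_0
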
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:24.689 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:35:24.689 Found 0000:31:00.1 (0x8086 - 0x159b) 00:35:24.689 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:24.689 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:24.689 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:24.689 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:24.689 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:24.689 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:24.689 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:24.689 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:24.689 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:24.689 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:24.689 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:24.689 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:24.689 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:24.689 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:24.689 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:24.689 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:35:24.689 Found net devices under 0000:31:00.0: cvl_0_0 00:35:24.689 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:24.689 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:24.689 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:24.689 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:24.689 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:24.689 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:24.689 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:24.689 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:24.689 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:35:24.689 Found net devices under 0000:31:00.1: cvl_0_1 00:35:24.689 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # 
net_devs+=("${pci_net_devs[@]}") 00:35:24.689 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:35:24.689 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # is_hw=yes 00:35:24.689 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:35:24.689 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:35:24.689 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:35:24.689 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:24.689 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:24.689 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:24.689 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:24.689 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:24.689 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:24.689 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:24.689 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:24.689 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:24.689 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:24.689 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:24.689 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:24.689 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:24.689 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:24.689 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:24.689 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:24.689 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:24.689 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:24.689 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:24.689 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:24.689 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:24.689 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:24.689 
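nvmf_tcp_init above gives the target port its own network namespace, so initiator traffic on cvl_0_1 (10.0.0.1) reaches the target on cvl_0_0 (10.0.0.2) over the physical link rather than a loopback path. The same sequence gathered into a runnable sketch (interface and namespace names as in this run; needs root):

#!/usr/bin/env bash
set -e
TGT_IF=cvl_0_0 INI_IF=cvl_0_1 NS=cvl_0_0_ns_spdk
ip -4 addr flush "$TGT_IF" && ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"                      # target port disappears from the host
ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP in
ping -c 1 10.0.0.2                                     # host -> namespaced target
ip netns exec "$NS" ping -c 1 10.0.0.1                 # target -> host
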
11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:24.689 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:24.689 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.610 ms 00:35:24.689 00:35:24.689 --- 10.0.0.2 ping statistics --- 00:35:24.689 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:24.689 rtt min/avg/max/mdev = 0.610/0.610/0.610/0.000 ms 00:35:24.689 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:24.689 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:24.689 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:35:24.689 00:35:24.689 --- 10.0.0.1 ping statistics --- 00:35:24.690 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:24.690 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:35:24.690 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:24.690 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # return 0 00:35:24.690 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:35:24.690 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:24.690 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:35:24.690 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:35:24.690 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:24.690 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:35:24.690 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:35:24.690 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:35:24.690 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:35:24.690 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:24.690 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:24.690 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # nvmfpid=2081405 00:35:24.690 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # waitforlisten 2081405 00:35:24.690 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:35:24.690 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 2081405 ']' 00:35:24.690 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:24.690 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:24.690 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:35:24.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:24.690 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:24.690 11:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:24.690 [2024-10-09 11:15:43.914051] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:35:24.690 [2024-10-09 11:15:43.914120] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:24.690 [2024-10-09 11:15:44.056161] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:35:24.690 [2024-10-09 11:15:44.106582] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:24.690 [2024-10-09 11:15:44.132576] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:24.690 [2024-10-09 11:15:44.132623] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:24.690 [2024-10-09 11:15:44.132631] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:24.690 [2024-10-09 11:15:44.132637] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:24.690 [2024-10-09 11:15:44.132644] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:24.690 [2024-10-09 11:15:44.133432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:24.951 11:15:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:24.951 11:15:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:35:24.951 11:15:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:35:24.951 11:15:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:24.951 11:15:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:24.951 11:15:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:24.951 11:15:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:35:24.951 11:15:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:24.951 11:15:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:24.951 [2024-10-09 11:15:44.788324] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:24.951 [2024-10-09 11:15:44.796548] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:35:24.951 null0 00:35:24.951 [2024-10-09 11:15:44.828438] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:24.951 11:15:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:24.951 11:15:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=2081607 
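From here every target-side command runs under ip netns exec (the NVMF_TARGET_NS_CMD prefix), and nvmfappstart launches the first nvmf_tgt with reactor mask 0x2. A rough sketch of the start-and-wait step; the rpc_get_methods probe is an assumption standing in for waitforlisten's actual polling of the UNIX socket:

    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    # wait until the target answers on its default RPC socket
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done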
00:35:24.951 11:15:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:35:24.951 11:15:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2081607 /tmp/host.sock 00:35:24.951 11:15:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 2081607 ']' 00:35:24.951 11:15:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:35:24.951 11:15:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:24.951 11:15:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:35:24.951 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:35:24.951 11:15:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:24.951 11:15:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:24.951 [2024-10-09 11:15:44.904013] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:35:24.951 [2024-10-09 11:15:44.904072] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2081607 ] 00:35:25.211 [2024-10-09 11:15:45.037787] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
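A second nvmf_tgt now plays the NVMe-oF host, on its own RPC socket (/tmp/host.sock) and core mask 0x1 so it does not collide with the target. --wait-for-rpc holds the app before subsystem initialization, which is what lets the script apply bdev_nvme options first; a sketch of that bring-up order, with all flags taken from the trace:

    build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &
    hostpid=$!
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_set_options -e 1   # must land pre-init
    scripts/rpc.py -s /tmp/host.sock framework_start_init         # now finish startup
    # bdev_nvme_start_discovery against 10.0.0.2:8009 follows, as logged below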
00:35:25.212 [2024-10-09 11:15:45.069627] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:25.212 [2024-10-09 11:15:45.093254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:25.782 11:15:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:25.782 11:15:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:35:25.782 11:15:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:25.782 11:15:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:35:25.782 11:15:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:25.782 11:15:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:25.782 11:15:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:25.782 11:15:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:35:25.782 11:15:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:25.782 11:15:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:25.782 11:15:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:25.782 11:15:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:35:25.782 11:15:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:25.782 11:15:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:27.166 [2024-10-09 11:15:46.822669] bdev_nvme.c:7153:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:35:27.166 [2024-10-09 11:15:46.822693] bdev_nvme.c:7239:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:35:27.166 [2024-10-09 11:15:46.822707] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:35:27.166 [2024-10-09 11:15:46.908778] bdev_nvme.c:7082:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:35:27.166 [2024-10-09 11:15:47.138015] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:35:27.166 [2024-10-09 11:15:47.138063] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:35:27.166 [2024-10-09 11:15:47.138088] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:35:27.166 [2024-10-09 11:15:47.138102] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:35:27.166 [2024-10-09 11:15:47.138122] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:35:27.166 11:15:47 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:27.166 11:15:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:35:27.166 [2024-10-09 11:15:47.140923] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x20daf90 was disconnected and freed. delete nvme_qpair. 00:35:27.166 11:15:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:27.166 11:15:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:27.166 11:15:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:27.166 11:15:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:27.166 11:15:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:27.166 11:15:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:27.166 11:15:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:27.427 11:15:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:27.427 11:15:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:35:27.427 11:15:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:35:27.427 11:15:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:35:27.427 11:15:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:35:27.427 11:15:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:27.427 11:15:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:27.427 11:15:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:27.427 11:15:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:27.427 11:15:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:27.427 11:15:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:27.427 11:15:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:27.427 11:15:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:27.427 11:15:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:27.427 11:15:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:28.809 11:15:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:28.809 11:15:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:28.809 11:15:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 
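The recurring rpc_cmd/jq/sort/xargs records are the script's get_bdev_list helper, and the "!= nvme0n1" checks with sleep 1 are wait_for_bdev polling it. A simplified reconstruction consistent with the trace (the real helpers also toggle xtrace, as the xtrace_disable records show):

    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    wait_for_bdev() {               # poll once a second until the list matches
        while [[ "$(get_bdev_list)" != "$1" ]]; do
            sleep 1
        done
    }
    # wait_for_bdev nvme0n1   after discovery attaches the subsystem
    # wait_for_bdev ''        after the target interface is pulled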
00:35:28.809 11:15:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:28.809 11:15:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:28.809 11:15:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:28.809 11:15:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:28.809 11:15:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:28.809 11:15:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:28.809 11:15:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:29.749 11:15:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:29.749 11:15:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:29.749 11:15:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:29.749 11:15:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:29.749 11:15:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:29.749 11:15:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:29.749 11:15:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:29.749 11:15:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:29.749 11:15:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:29.749 11:15:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:30.691 11:15:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:30.691 11:15:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:30.691 11:15:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:30.691 11:15:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:30.691 11:15:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:30.691 11:15:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:30.691 11:15:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:30.691 11:15:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:30.691 11:15:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:30.691 11:15:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:31.646 11:15:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:31.646 11:15:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:31.646 11:15:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:31.646 11:15:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:31.646 11:15:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:31.646 11:15:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:31.646 11:15:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:31.646 11:15:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:31.646 11:15:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:31.646 11:15:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:32.588 [2024-10-09 11:15:52.566100] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:35:32.588 [2024-10-09 11:15:52.566149] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:35:32.588 [2024-10-09 11:15:52.566162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.588 [2024-10-09 11:15:52.566172] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:35:32.588 [2024-10-09 11:15:52.566180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.588 [2024-10-09 11:15:52.566189] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:35:32.588 [2024-10-09 11:15:52.566197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.588 [2024-10-09 11:15:52.566205] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:35:32.588 [2024-10-09 11:15:52.566212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.588 [2024-10-09 11:15:52.566221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:35:32.588 [2024-10-09 11:15:52.566228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.588 [2024-10-09 11:15:52.566236] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7840 is same with the state(6) to be set 00:35:32.588 [2024-10-09 11:15:52.576095] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20b7840 (9): Bad file descriptor 00:35:32.588 [2024-10-09 11:15:52.586113] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:35:32.849 11:15:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:32.849 11:15:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # 
rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:32.849 11:15:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:32.849 11:15:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:32.849 11:15:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:32.849 11:15:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:32.849 11:15:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:33.791 [2024-10-09 11:15:53.635494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:35:33.791 [2024-10-09 11:15:53.635540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20b7840 with addr=10.0.0.2, port=4420 00:35:33.791 [2024-10-09 11:15:53.635553] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7840 is same with the state(6) to be set 00:35:33.791 [2024-10-09 11:15:53.635582] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20b7840 (9): Bad file descriptor 00:35:33.791 [2024-10-09 11:15:53.635969] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:35:33.791 [2024-10-09 11:15:53.635995] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:35:33.791 [2024-10-09 11:15:53.636002] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:35:33.791 [2024-10-09 11:15:53.636012] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:35:33.791 [2024-10-09 11:15:53.636029] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:33.791 [2024-10-09 11:15:53.636037] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:35:33.791 11:15:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:33.792 11:15:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:33.792 11:15:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:34.735 [2024-10-09 11:15:54.636074] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:35:34.735 [2024-10-09 11:15:54.636094] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:35:34.735 [2024-10-09 11:15:54.636102] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:35:34.735 [2024-10-09 11:15:54.636110] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:35:34.735 [2024-10-09 11:15:54.636123] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
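The errno-110 storm above is the intended failure: with 10.0.0.2 removed and cvl_0_0 down, every queued admin command is aborted (SQ DELETION) and each reconnect attempt fails. The pacing is set by the knobs passed to bdev_nvme_start_discovery earlier; roughly:

    reconnect_delay_sec=1        # wait between reconnect attempts
    fast_io_fail_timeout_sec=1   # after ~1 s, pending I/O starts failing fast
    ctrlr_loss_timeout_sec=2     # after ~2 s, stop retrying and delete the controller
    # so about ctrlr_loss_timeout_sec / reconnect_delay_sec = 2 attempts
    # before nvme0n1 drops out of bdev_get_bdevs and wait_for_bdev '' returns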
00:35:34.735 [2024-10-09 11:15:54.636142] bdev_nvme.c:6904:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:35:34.735 [2024-10-09 11:15:54.636166] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:35:34.735 [2024-10-09 11:15:54.636177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.735 [2024-10-09 11:15:54.636187] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:35:34.735 [2024-10-09 11:15:54.636195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.735 [2024-10-09 11:15:54.636203] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:35:34.735 [2024-10-09 11:15:54.636210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.735 [2024-10-09 11:15:54.636219] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:35:34.735 [2024-10-09 11:15:54.636226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.735 [2024-10-09 11:15:54.636235] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:35:34.735 [2024-10-09 11:15:54.636242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.735 [2024-10-09 11:15:54.636249] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:35:34.735 [2024-10-09 11:15:54.636595] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a6f50 (9): Bad file descriptor 00:35:34.735 [2024-10-09 11:15:54.637605] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:35:34.735 [2024-10-09 11:15:54.637616] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:35:34.735 11:15:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:34.735 11:15:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:34.735 11:15:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:34.735 11:15:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:34.735 11:15:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:34.735 11:15:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:34.735 11:15:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:34.735 11:15:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:34.735 11:15:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:35:34.735 11:15:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:34.735 11:15:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:34.995 11:15:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:35:34.995 11:15:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:34.995 11:15:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:34.995 11:15:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:34.995 11:15:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:34.995 11:15:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:34.996 11:15:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:34.996 11:15:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:34.996 11:15:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:34.996 11:15:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:35:34.996 11:15:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:35.938 11:15:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:35.938 11:15:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:35.938 11:15:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:35.938 11:15:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:35.938 11:15:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:35.938 11:15:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:35.938 11:15:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:35.938 11:15:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:35.938 11:15:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:35:35.938 11:15:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:36.880 [2024-10-09 11:15:56.684771] bdev_nvme.c:7153:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:35:36.880 [2024-10-09 11:15:56.684791] bdev_nvme.c:7239:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:35:36.880 [2024-10-09 11:15:56.684804] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:35:36.880 [2024-10-09 11:15:56.810914] bdev_nvme.c:7082:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:35:37.141 11:15:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:37.141 11:15:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:37.141 11:15:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:37.141 11:15:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:37.141 11:15:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:37.141 11:15:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:37.141 11:15:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:37.141 11:15:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:37.141 11:15:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:35:37.141 11:15:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:37.141 [2024-10-09 11:15:57.038410] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:35:37.141 [2024-10-09 11:15:57.038449] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:35:37.141 [2024-10-09 11:15:57.038476] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:35:37.141 [2024-10-09 11:15:57.038490] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:35:37.141 [2024-10-09 11:15:57.038498] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:35:37.141 [2024-10-09 11:15:57.041446] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x20bbb60 was disconnected and freed. 
delete nvme_qpair. 00:35:38.085 11:15:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:38.085 11:15:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:38.085 11:15:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:38.085 11:15:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:38.085 11:15:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:38.085 11:15:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:38.085 11:15:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:38.085 11:15:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:38.085 11:15:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:35:38.085 11:15:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:35:38.085 11:15:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 2081607 00:35:38.085 11:15:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 2081607 ']' 00:35:38.085 11:15:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 2081607 00:35:38.085 11:15:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:35:38.085 11:15:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:38.085 11:15:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2081607 00:35:38.346 11:15:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:38.346 11:15:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:38.346 11:15:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2081607' 00:35:38.346 killing process with pid 2081607 00:35:38.346 11:15:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 2081607 00:35:38.346 11:15:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 2081607 00:35:38.346 11:15:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:35:38.346 11:15:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@514 -- # nvmfcleanup 00:35:38.346 11:15:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:35:38.346 11:15:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:38.346 11:15:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:35:38.346 11:15:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:38.346 11:15:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:38.346 rmmod nvme_tcp 00:35:38.346 rmmod nvme_fabrics 00:35:38.346 rmmod nvme_keyring 
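With nvme1n1 re-attached the final check passes, the trap is cleared, and teardown begins: killprocess stops the host app, nvmfcleanup unloads the kernel modules (the rmmod lines above), and the same helper then takes down the target app below. A simplified reconstruction of killprocess as this trace exercises it; the branch for sudo-wrapped processes is elided:

    killprocess() {
        local pid=$1
        [[ -n "$pid" ]] || return 1
        kill -0 "$pid" 2>/dev/null || return 0        # already gone
        local name
        name=$(ps --no-headers -o comm= "$pid")       # e.g. reactor_0 here
        [[ "$name" == sudo ]] && return 1             # sudo case handled elsewhere
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true
    }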
00:35:38.346 11:15:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:38.346 11:15:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:35:38.346 11:15:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:35:38.346 11:15:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@515 -- # '[' -n 2081405 ']' 00:35:38.346 11:15:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # killprocess 2081405 00:35:38.346 11:15:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 2081405 ']' 00:35:38.346 11:15:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 2081405 00:35:38.346 11:15:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:35:38.346 11:15:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:38.346 11:15:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2081405 00:35:38.346 11:15:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:38.346 11:15:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:35:38.346 11:15:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2081405' 00:35:38.346 killing process with pid 2081405 00:35:38.346 11:15:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 2081405 00:35:38.346 11:15:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 2081405 00:35:38.607 11:15:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:35:38.607 11:15:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:35:38.607 11:15:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:35:38.607 11:15:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:35:38.607 11:15:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # iptables-save 00:35:38.607 11:15:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:35:38.607 11:15:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # iptables-restore 00:35:38.607 11:15:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:38.607 11:15:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:38.607 11:15:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:38.607 11:15:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:38.607 11:15:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:40.520 11:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:40.520 00:35:40.520 real 0m24.463s 00:35:40.520 user 0m29.295s 00:35:40.520 sys 0m7.114s 00:35:40.520 11:16:00 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:40.520 11:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:40.520 ************************************ 00:35:40.520 END TEST nvmf_discovery_remove_ifc 00:35:40.520 ************************************ 00:35:40.781 11:16:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:35:40.781 11:16:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:35:40.781 11:16:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:40.781 11:16:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.781 ************************************ 00:35:40.781 START TEST nvmf_identify_kernel_target 00:35:40.781 ************************************ 00:35:40.781 11:16:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:35:40.781 * Looking for test storage... 00:35:40.781 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:40.781 11:16:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:40.781 11:16:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lcov --version 00:35:40.781 11:16:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:40.781 11:16:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:40.781 11:16:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:40.781 11:16:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:40.781 11:16:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:40.781 11:16:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:35:40.781 11:16:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:35:40.782 11:16:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:35:40.782 11:16:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:35:40.782 11:16:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:35:40.782 11:16:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:35:40.782 11:16:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:35:40.782 11:16:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:40.782 11:16:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:35:40.782 11:16:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:35:40.782 11:16:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:40.782 11:16:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:40.782 11:16:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:35:40.782 11:16:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:35:41.045 11:16:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:41.045 11:16:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:35:41.045 11:16:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:35:41.045 11:16:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:35:41.045 11:16:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:35:41.045 11:16:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:41.045 11:16:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:35:41.045 11:16:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:35:41.045 11:16:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:41.045 11:16:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:41.045 11:16:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:35:41.045 11:16:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:41.045 11:16:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:41.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:41.045 --rc genhtml_branch_coverage=1 00:35:41.045 --rc genhtml_function_coverage=1 00:35:41.045 --rc genhtml_legend=1 00:35:41.045 --rc geninfo_all_blocks=1 00:35:41.045 --rc geninfo_unexecuted_blocks=1 00:35:41.045 00:35:41.045 ' 00:35:41.045 11:16:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:41.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:41.045 --rc genhtml_branch_coverage=1 00:35:41.045 --rc genhtml_function_coverage=1 00:35:41.045 --rc genhtml_legend=1 00:35:41.045 --rc geninfo_all_blocks=1 00:35:41.045 --rc geninfo_unexecuted_blocks=1 00:35:41.045 00:35:41.045 ' 00:35:41.045 11:16:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:35:41.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:41.045 --rc genhtml_branch_coverage=1 00:35:41.045 --rc genhtml_function_coverage=1 00:35:41.045 --rc genhtml_legend=1 00:35:41.045 --rc geninfo_all_blocks=1 00:35:41.045 --rc geninfo_unexecuted_blocks=1 00:35:41.045 00:35:41.045 ' 00:35:41.045 11:16:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:41.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:41.045 --rc genhtml_branch_coverage=1 00:35:41.045 --rc genhtml_function_coverage=1 00:35:41.045 --rc genhtml_legend=1 00:35:41.045 --rc geninfo_all_blocks=1 00:35:41.045 --rc geninfo_unexecuted_blocks=1 00:35:41.045 00:35:41.045 ' 00:35:41.045 11:16:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:41.045 11:16:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:35:41.045 11:16:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:41.045 11:16:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:41.045 11:16:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:41.045 11:16:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:41.045 11:16:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:41.045 11:16:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:41.045 11:16:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:41.045 11:16:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:41.045 11:16:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:41.045 11:16:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:41.045 11:16:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:41.045 11:16:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:41.045 11:16:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:41.045 11:16:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:41.045 11:16:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:41.045 11:16:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:41.045 11:16:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:41.045 11:16:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:35:41.045 11:16:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:41.045 11:16:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:41.045 11:16:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:41.045 11:16:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:41.045 11:16:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:41.045 11:16:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:41.045 11:16:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:35:41.045 11:16:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:41.045 11:16:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:35:41.045 11:16:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:41.045 11:16:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:41.045 11:16:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:41.045 11:16:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:41.045 11:16:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:41.045 11:16:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:35:41.045 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:41.045 11:16:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:41.045 11:16:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:41.045 11:16:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:41.045 11:16:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:35:41.045 11:16:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:35:41.045 11:16:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:41.045 11:16:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:35:41.045 11:16:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:35:41.045 11:16:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:35:41.045 11:16:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:41.045 11:16:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:41.045 11:16:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:41.045 11:16:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:35:41.045 11:16:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:35:41.045 11:16:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:35:41.045 11:16:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:35:49.181 11:16:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:49.181 11:16:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:35:49.181 11:16:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:49.181 11:16:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:49.181 11:16:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:49.181 11:16:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:49.181 11:16:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:49.181 11:16:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:35:49.181 11:16:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:49.181 11:16:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:35:49.181 11:16:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:35:49.181 11:16:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:35:49.181 11:16:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:35:49.181 11:16:07 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:35:49.181 11:16:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:35:49.181 11:16:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:49.181 11:16:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:49.181 11:16:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:49.181 11:16:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:49.181 11:16:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:49.181 11:16:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:49.181 11:16:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:49.181 11:16:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:49.181 11:16:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:49.181 11:16:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:49.181 11:16:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:49.181 11:16:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:49.181 11:16:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:49.181 11:16:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:49.181 11:16:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:49.181 11:16:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:49.181 11:16:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:49.181 11:16:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:49.181 11:16:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:49.181 11:16:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:35:49.181 Found 0000:31:00.0 (0x8086 - 0x159b) 00:35:49.181 11:16:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:49.181 11:16:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:49.181 11:16:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:49.181 11:16:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:49.181 11:16:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:49.181 11:16:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:49.181 11:16:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:35:49.181 Found 0000:31:00.1 (0x8086 - 0x159b) 00:35:49.181 11:16:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:49.181 11:16:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:49.181 11:16:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:49.181 11:16:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:49.181 11:16:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:49.181 11:16:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:49.181 11:16:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:49.181 11:16:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:49.181 11:16:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:49.181 11:16:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:49.181 11:16:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:49.181 11:16:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:49.181 11:16:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:49.181 11:16:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:49.181 11:16:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:49.181 11:16:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:35:49.181 Found net devices under 0000:31:00.0: cvl_0_0 00:35:49.181 11:16:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:49.181 11:16:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:49.181 11:16:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:49.181 11:16:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:49.181 11:16:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:49.181 11:16:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:49.181 11:16:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:49.181 11:16:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:49.181 11:16:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:35:49.181 Found net devices under 0000:31:00.1: cvl_0_1 00:35:49.181 11:16:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 
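The scan above matches PCI functions against vendor:device IDs (Intel 0x8086 with E810 IDs 0x1592/0x159b in this run) and then resolves each hit to its kernel netdev, which is where `Found net devices under 0000:31:00.0: cvl_0_0` comes from. A standalone sketch of the same lookup reading sysfs directly, since the pci_bus_cache map the trace consults is internal to nvmf/common.sh:

intel=0x8086
e810=()
for dev in /sys/bus/pci/devices/*; do
    # vendor/device read back as 0x-prefixed hex strings, e.g. 0x8086 / 0x159b
    if [[ $(<"$dev/vendor") == "$intel" && $(<"$dev/device") =~ ^0x(1592|159b)$ ]]; then
        e810+=("${dev##*/}")
    fi
done
for pci in "${e810[@]}"; do
    for net in "/sys/bus/pci/devices/$pci/net/"*; do
        [[ -e $net ]] && echo "Found net devices under $pci: ${net##*/}"
    done
done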
-- # net_devs+=("${pci_net_devs[@]}") 00:35:49.181 11:16:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:35:49.181 11:16:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # is_hw=yes 00:35:49.181 11:16:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:35:49.181 11:16:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:35:49.181 11:16:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:35:49.181 11:16:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:49.181 11:16:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:49.181 11:16:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:49.181 11:16:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:49.181 11:16:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:49.181 11:16:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:49.181 11:16:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:49.181 11:16:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:49.181 11:16:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:49.181 11:16:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:49.181 11:16:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:49.181 11:16:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:49.181 11:16:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:49.182 11:16:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:49.182 11:16:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:49.182 11:16:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:49.182 11:16:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:49.182 11:16:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:49.182 11:16:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:49.182 11:16:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:49.182 11:16:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:49.182 11:16:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:49.182 11:16:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:49.182 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:49.182 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.677 ms 00:35:49.182 00:35:49.182 --- 10.0.0.2 ping statistics --- 00:35:49.182 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:49.182 rtt min/avg/max/mdev = 0.677/0.677/0.677/0.000 ms 00:35:49.182 11:16:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:49.182 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:49.182 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.265 ms 00:35:49.182 00:35:49.182 --- 10.0.0.1 ping statistics --- 00:35:49.182 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:49.182 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:35:49.182 11:16:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:49.182 11:16:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # return 0 00:35:49.182 11:16:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:35:49.182 11:16:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:49.182 11:16:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:35:49.182 11:16:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:35:49.182 11:16:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:49.182 11:16:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:35:49.182 11:16:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:35:49.182 11:16:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:35:49.182 11:16:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:35:49.182 11:16:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@767 -- # local ip 00:35:49.182 11:16:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:49.182 11:16:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:49.182 11:16:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:49.182 11:16:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:49.182 11:16:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:49.182 11:16:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:49.182 11:16:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:49.182 11:16:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:49.182 11:16:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:49.182 11:16:08 
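The sequence above builds the single-host NVMe/TCP topology: the target port cvl_0_0 moves into namespace cvl_0_0_ns_spdk with 10.0.0.2, the initiator keeps cvl_0_1 at 10.0.0.1 in the root namespace, port 4420 is opened in the firewall, and the two pings verify the path in each direction. Condensed to its essentials (names and addresses as in the log):

NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"              # target NIC lives inside the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side stays in the root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
# the harness also tags this rule with -m comment SPDK_NVMF so cleanup can filter it out later
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                           # root namespace -> target
ip netns exec "$NS" ping -c 1 10.0.0.1       # target namespace -> initiator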
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:35:49.182 11:16:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:35:49.182 11:16:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:35:49.182 11:16:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:35:49.182 11:16:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:49.182 11:16:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:49.182 11:16:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:49.182 11:16:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # local block nvme 00:35:49.182 11:16:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # [[ ! -e /sys/module/nvmet ]] 00:35:49.182 11:16:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # modprobe nvmet 00:35:49.182 11:16:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:49.182 11:16:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:51.724 Waiting for block devices as requested 00:35:51.724 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:35:51.724 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:35:51.984 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:35:51.984 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:35:51.984 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:35:51.984 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:35:52.245 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:35:52.245 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:35:52.245 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:35:52.506 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:35:52.506 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:35:52.506 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:35:52.767 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:35:52.767 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:35:52.767 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:35:52.767 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:35:53.027 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:35:53.288 11:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:35:53.288 11:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:35:53.288 11:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:35:53.288 11:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:35:53.288 11:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:35:53.288 11:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 
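The last two checks above, `[[ -e /sys/block/nvme0n1/queue/zoned ]]` followed by `[[ none != none ]]`, are the zoned-namespace filter: a drive is only claimed for the kernel target if its queue/zoned attribute is missing or reads `none`. The same predicate as a standalone helper (function name is illustrative):

is_block_zoned() {
    local zoned=/sys/block/$1/queue/zoned
    [[ -e $zoned ]] || return 1        # attribute absent: treat as conventional
    [[ $(<"$zoned") != none ]]         # "none" = conventional, anything else = zoned
}
is_block_zoned nvme0n1 || echo "nvme0n1 is usable as a target namespace"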
00:35:53.288 11:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:35:53.288 11:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:35:53.288 11:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:35:53.288 No valid GPT data, bailing 00:35:53.288 11:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:35:53.288 11:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:35:53.288 11:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:35:53.288 11:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:35:53.288 11:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]] 00:35:53.288 11:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:53.288 11:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:53.288 11:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:35:53.288 11:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:35:53.288 11:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo 1 00:35:53.288 11:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@694 -- # echo /dev/nvme0n1 00:35:53.288 11:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:35:53.288 11:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 10.0.0.1 00:35:53.288 11:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # echo tcp 00:35:53.288 11:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 4420 00:35:53.288 11:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo ipv4 00:35:53.288 11:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:35:53.288 11:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:35:53.551 00:35:53.551 Discovery Log Number of Records 2, Generation counter 2 00:35:53.551 =====Discovery Log Entry 0====== 00:35:53.551 trtype: tcp 00:35:53.551 adrfam: ipv4 00:35:53.551 subtype: current discovery subsystem 00:35:53.551 treq: not specified, sq flow control disable supported 00:35:53.551 portid: 1 00:35:53.551 trsvcid: 4420 00:35:53.551 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:35:53.551 traddr: 10.0.0.1 00:35:53.551 eflags: none 00:35:53.551 sectype: none 00:35:53.551 =====Discovery Log Entry 1====== 00:35:53.551 trtype: tcp 00:35:53.551 adrfam: ipv4 00:35:53.551 subtype: nvme subsystem 00:35:53.551 treq: not specified, sq flow control disable 
supported 00:35:53.551 portid: 1 00:35:53.551 trsvcid: 4420 00:35:53.551 subnqn: nqn.2016-06.io.spdk:testnqn 00:35:53.551 traddr: 10.0.0.1 00:35:53.551 eflags: none 00:35:53.551 sectype: none 00:35:53.551 11:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:35:53.551 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:35:53.551 ===================================================== 00:35:53.551 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:35:53.551 ===================================================== 00:35:53.551 Controller Capabilities/Features 00:35:53.551 ================================ 00:35:53.551 Vendor ID: 0000 00:35:53.551 Subsystem Vendor ID: 0000 00:35:53.551 Serial Number: 872b806bc8e0f8ec1bfd 00:35:53.551 Model Number: Linux 00:35:53.551 Firmware Version: 6.8.9-20 00:35:53.551 Recommended Arb Burst: 0 00:35:53.551 IEEE OUI Identifier: 00 00 00 00:35:53.551 Multi-path I/O 00:35:53.551 May have multiple subsystem ports: No 00:35:53.551 May have multiple controllers: No 00:35:53.551 Associated with SR-IOV VF: No 00:35:53.551 Max Data Transfer Size: Unlimited 00:35:53.551 Max Number of Namespaces: 0 00:35:53.551 Max Number of I/O Queues: 1024 00:35:53.551 NVMe Specification Version (VS): 1.3 00:35:53.551 NVMe Specification Version (Identify): 1.3 00:35:53.551 Maximum Queue Entries: 1024 00:35:53.551 Contiguous Queues Required: No 00:35:53.551 Arbitration Mechanisms Supported 00:35:53.551 Weighted Round Robin: Not Supported 00:35:53.551 Vendor Specific: Not Supported 00:35:53.551 Reset Timeout: 7500 ms 00:35:53.551 Doorbell Stride: 4 bytes 00:35:53.551 NVM Subsystem Reset: Not Supported 00:35:53.551 Command Sets Supported 00:35:53.551 NVM Command Set: Supported 00:35:53.551 Boot Partition: Not Supported 00:35:53.551 Memory Page Size Minimum: 4096 bytes 00:35:53.551 Memory Page Size Maximum: 4096 bytes 00:35:53.551 Persistent Memory Region: Not Supported 00:35:53.551 Optional Asynchronous Events Supported 00:35:53.551 Namespace Attribute Notices: Not Supported 00:35:53.551 Firmware Activation Notices: Not Supported 00:35:53.551 ANA Change Notices: Not Supported 00:35:53.551 PLE Aggregate Log Change Notices: Not Supported 00:35:53.551 LBA Status Info Alert Notices: Not Supported 00:35:53.551 EGE Aggregate Log Change Notices: Not Supported 00:35:53.551 Normal NVM Subsystem Shutdown event: Not Supported 00:35:53.551 Zone Descriptor Change Notices: Not Supported 00:35:53.551 Discovery Log Change Notices: Supported 00:35:53.551 Controller Attributes 00:35:53.551 128-bit Host Identifier: Not Supported 00:35:53.551 Non-Operational Permissive Mode: Not Supported 00:35:53.551 NVM Sets: Not Supported 00:35:53.551 Read Recovery Levels: Not Supported 00:35:53.551 Endurance Groups: Not Supported 00:35:53.551 Predictable Latency Mode: Not Supported 00:35:53.551 Traffic Based Keep ALive: Not Supported 00:35:53.551 Namespace Granularity: Not Supported 00:35:53.551 SQ Associations: Not Supported 00:35:53.551 UUID List: Not Supported 00:35:53.551 Multi-Domain Subsystem: Not Supported 00:35:53.551 Fixed Capacity Management: Not Supported 00:35:53.551 Variable Capacity Management: Not Supported 00:35:53.551 Delete Endurance Group: Not Supported 00:35:53.551 Delete NVM Set: Not Supported 00:35:53.551 Extended LBA Formats Supported: Not Supported 00:35:53.551 Flexible Data Placement 
Supported: Not Supported 00:35:53.551 00:35:53.551 Controller Memory Buffer Support 00:35:53.551 ================================ 00:35:53.551 Supported: No 00:35:53.551 00:35:53.551 Persistent Memory Region Support 00:35:53.551 ================================ 00:35:53.551 Supported: No 00:35:53.551 00:35:53.551 Admin Command Set Attributes 00:35:53.551 ============================ 00:35:53.551 Security Send/Receive: Not Supported 00:35:53.551 Format NVM: Not Supported 00:35:53.551 Firmware Activate/Download: Not Supported 00:35:53.551 Namespace Management: Not Supported 00:35:53.551 Device Self-Test: Not Supported 00:35:53.551 Directives: Not Supported 00:35:53.551 NVMe-MI: Not Supported 00:35:53.551 Virtualization Management: Not Supported 00:35:53.551 Doorbell Buffer Config: Not Supported 00:35:53.551 Get LBA Status Capability: Not Supported 00:35:53.551 Command & Feature Lockdown Capability: Not Supported 00:35:53.551 Abort Command Limit: 1 00:35:53.551 Async Event Request Limit: 1 00:35:53.551 Number of Firmware Slots: N/A 00:35:53.551 Firmware Slot 1 Read-Only: N/A 00:35:53.551 Firmware Activation Without Reset: N/A 00:35:53.551 Multiple Update Detection Support: N/A 00:35:53.551 Firmware Update Granularity: No Information Provided 00:35:53.551 Per-Namespace SMART Log: No 00:35:53.551 Asymmetric Namespace Access Log Page: Not Supported 00:35:53.551 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:35:53.551 Command Effects Log Page: Not Supported 00:35:53.551 Get Log Page Extended Data: Supported 00:35:53.551 Telemetry Log Pages: Not Supported 00:35:53.551 Persistent Event Log Pages: Not Supported 00:35:53.551 Supported Log Pages Log Page: May Support 00:35:53.551 Commands Supported & Effects Log Page: Not Supported 00:35:53.551 Feature Identifiers & Effects Log Page:May Support 00:35:53.551 NVMe-MI Commands & Effects Log Page: May Support 00:35:53.551 Data Area 4 for Telemetry Log: Not Supported 00:35:53.551 Error Log Page Entries Supported: 1 00:35:53.551 Keep Alive: Not Supported 00:35:53.551 00:35:53.551 NVM Command Set Attributes 00:35:53.551 ========================== 00:35:53.551 Submission Queue Entry Size 00:35:53.551 Max: 1 00:35:53.551 Min: 1 00:35:53.551 Completion Queue Entry Size 00:35:53.551 Max: 1 00:35:53.551 Min: 1 00:35:53.551 Number of Namespaces: 0 00:35:53.551 Compare Command: Not Supported 00:35:53.551 Write Uncorrectable Command: Not Supported 00:35:53.551 Dataset Management Command: Not Supported 00:35:53.551 Write Zeroes Command: Not Supported 00:35:53.551 Set Features Save Field: Not Supported 00:35:53.551 Reservations: Not Supported 00:35:53.551 Timestamp: Not Supported 00:35:53.551 Copy: Not Supported 00:35:53.551 Volatile Write Cache: Not Present 00:35:53.551 Atomic Write Unit (Normal): 1 00:35:53.551 Atomic Write Unit (PFail): 1 00:35:53.551 Atomic Compare & Write Unit: 1 00:35:53.551 Fused Compare & Write: Not Supported 00:35:53.551 Scatter-Gather List 00:35:53.551 SGL Command Set: Supported 00:35:53.551 SGL Keyed: Not Supported 00:35:53.551 SGL Bit Bucket Descriptor: Not Supported 00:35:53.551 SGL Metadata Pointer: Not Supported 00:35:53.551 Oversized SGL: Not Supported 00:35:53.551 SGL Metadata Address: Not Supported 00:35:53.551 SGL Offset: Supported 00:35:53.551 Transport SGL Data Block: Not Supported 00:35:53.551 Replay Protected Memory Block: Not Supported 00:35:53.551 00:35:53.551 Firmware Slot Information 00:35:53.551 ========================= 00:35:53.551 Active slot: 0 00:35:53.551 00:35:53.551 00:35:53.551 Error Log 00:35:53.551 
========= 00:35:53.551 00:35:53.551 Active Namespaces 00:35:53.551 ================= 00:35:53.551 Discovery Log Page 00:35:53.551 ================== 00:35:53.551 Generation Counter: 2 00:35:53.551 Number of Records: 2 00:35:53.551 Record Format: 0 00:35:53.551 00:35:53.551 Discovery Log Entry 0 00:35:53.551 ---------------------- 00:35:53.551 Transport Type: 3 (TCP) 00:35:53.551 Address Family: 1 (IPv4) 00:35:53.551 Subsystem Type: 3 (Current Discovery Subsystem) 00:35:53.551 Entry Flags: 00:35:53.551 Duplicate Returned Information: 0 00:35:53.551 Explicit Persistent Connection Support for Discovery: 0 00:35:53.551 Transport Requirements: 00:35:53.551 Secure Channel: Not Specified 00:35:53.551 Port ID: 1 (0x0001) 00:35:53.551 Controller ID: 65535 (0xffff) 00:35:53.552 Admin Max SQ Size: 32 00:35:53.552 Transport Service Identifier: 4420 00:35:53.552 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:35:53.552 Transport Address: 10.0.0.1 00:35:53.552 Discovery Log Entry 1 00:35:53.552 ---------------------- 00:35:53.552 Transport Type: 3 (TCP) 00:35:53.552 Address Family: 1 (IPv4) 00:35:53.552 Subsystem Type: 2 (NVM Subsystem) 00:35:53.552 Entry Flags: 00:35:53.552 Duplicate Returned Information: 0 00:35:53.552 Explicit Persistent Connection Support for Discovery: 0 00:35:53.552 Transport Requirements: 00:35:53.552 Secure Channel: Not Specified 00:35:53.552 Port ID: 1 (0x0001) 00:35:53.552 Controller ID: 65535 (0xffff) 00:35:53.552 Admin Max SQ Size: 32 00:35:53.552 Transport Service Identifier: 4420 00:35:53.552 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:35:53.552 Transport Address: 10.0.0.1 00:35:53.552 11:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:53.813 get_feature(0x01) failed 00:35:53.813 get_feature(0x02) failed 00:35:53.813 get_feature(0x04) failed 00:35:53.813 ===================================================== 00:35:53.813 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:53.813 ===================================================== 00:35:53.813 Controller Capabilities/Features 00:35:53.813 ================================ 00:35:53.813 Vendor ID: 0000 00:35:53.813 Subsystem Vendor ID: 0000 00:35:53.813 Serial Number: 3723d2fcd51b21dae4b8 00:35:53.813 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:35:53.813 Firmware Version: 6.8.9-20 00:35:53.813 Recommended Arb Burst: 6 00:35:53.813 IEEE OUI Identifier: 00 00 00 00:35:53.813 Multi-path I/O 00:35:53.813 May have multiple subsystem ports: Yes 00:35:53.813 May have multiple controllers: Yes 00:35:53.813 Associated with SR-IOV VF: No 00:35:53.813 Max Data Transfer Size: Unlimited 00:35:53.813 Max Number of Namespaces: 1024 00:35:53.813 Max Number of I/O Queues: 128 00:35:53.813 NVMe Specification Version (VS): 1.3 00:35:53.813 NVMe Specification Version (Identify): 1.3 00:35:53.813 Maximum Queue Entries: 1024 00:35:53.813 Contiguous Queues Required: No 00:35:53.813 Arbitration Mechanisms Supported 00:35:53.813 Weighted Round Robin: Not Supported 00:35:53.813 Vendor Specific: Not Supported 00:35:53.813 Reset Timeout: 7500 ms 00:35:53.813 Doorbell Stride: 4 bytes 00:35:53.813 NVM Subsystem Reset: Not Supported 00:35:53.813 Command Sets Supported 00:35:53.813 NVM Command Set: Supported 00:35:53.813 Boot Partition: Not Supported 00:35:53.813 
Memory Page Size Minimum: 4096 bytes 00:35:53.813 Memory Page Size Maximum: 4096 bytes 00:35:53.813 Persistent Memory Region: Not Supported 00:35:53.813 Optional Asynchronous Events Supported 00:35:53.813 Namespace Attribute Notices: Supported 00:35:53.813 Firmware Activation Notices: Not Supported 00:35:53.813 ANA Change Notices: Supported 00:35:53.813 PLE Aggregate Log Change Notices: Not Supported 00:35:53.813 LBA Status Info Alert Notices: Not Supported 00:35:53.813 EGE Aggregate Log Change Notices: Not Supported 00:35:53.813 Normal NVM Subsystem Shutdown event: Not Supported 00:35:53.813 Zone Descriptor Change Notices: Not Supported 00:35:53.813 Discovery Log Change Notices: Not Supported 00:35:53.813 Controller Attributes 00:35:53.813 128-bit Host Identifier: Supported 00:35:53.813 Non-Operational Permissive Mode: Not Supported 00:35:53.813 NVM Sets: Not Supported 00:35:53.813 Read Recovery Levels: Not Supported 00:35:53.813 Endurance Groups: Not Supported 00:35:53.813 Predictable Latency Mode: Not Supported 00:35:53.813 Traffic Based Keep ALive: Supported 00:35:53.813 Namespace Granularity: Not Supported 00:35:53.813 SQ Associations: Not Supported 00:35:53.813 UUID List: Not Supported 00:35:53.813 Multi-Domain Subsystem: Not Supported 00:35:53.813 Fixed Capacity Management: Not Supported 00:35:53.813 Variable Capacity Management: Not Supported 00:35:53.813 Delete Endurance Group: Not Supported 00:35:53.813 Delete NVM Set: Not Supported 00:35:53.813 Extended LBA Formats Supported: Not Supported 00:35:53.813 Flexible Data Placement Supported: Not Supported 00:35:53.813 00:35:53.813 Controller Memory Buffer Support 00:35:53.813 ================================ 00:35:53.813 Supported: No 00:35:53.813 00:35:53.813 Persistent Memory Region Support 00:35:53.813 ================================ 00:35:53.813 Supported: No 00:35:53.813 00:35:53.814 Admin Command Set Attributes 00:35:53.814 ============================ 00:35:53.814 Security Send/Receive: Not Supported 00:35:53.814 Format NVM: Not Supported 00:35:53.814 Firmware Activate/Download: Not Supported 00:35:53.814 Namespace Management: Not Supported 00:35:53.814 Device Self-Test: Not Supported 00:35:53.814 Directives: Not Supported 00:35:53.814 NVMe-MI: Not Supported 00:35:53.814 Virtualization Management: Not Supported 00:35:53.814 Doorbell Buffer Config: Not Supported 00:35:53.814 Get LBA Status Capability: Not Supported 00:35:53.814 Command & Feature Lockdown Capability: Not Supported 00:35:53.814 Abort Command Limit: 4 00:35:53.814 Async Event Request Limit: 4 00:35:53.814 Number of Firmware Slots: N/A 00:35:53.814 Firmware Slot 1 Read-Only: N/A 00:35:53.814 Firmware Activation Without Reset: N/A 00:35:53.814 Multiple Update Detection Support: N/A 00:35:53.814 Firmware Update Granularity: No Information Provided 00:35:53.814 Per-Namespace SMART Log: Yes 00:35:53.814 Asymmetric Namespace Access Log Page: Supported 00:35:53.814 ANA Transition Time : 10 sec 00:35:53.814 00:35:53.814 Asymmetric Namespace Access Capabilities 00:35:53.814 ANA Optimized State : Supported 00:35:53.814 ANA Non-Optimized State : Supported 00:35:53.814 ANA Inaccessible State : Supported 00:35:53.814 ANA Persistent Loss State : Supported 00:35:53.814 ANA Change State : Supported 00:35:53.814 ANAGRPID is not changed : No 00:35:53.814 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:35:53.814 00:35:53.814 ANA Group Identifier Maximum : 128 00:35:53.814 Number of ANA Group Identifiers : 128 00:35:53.814 Max Number of Allowed Namespaces : 1024 00:35:53.814 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:35:53.814 Command Effects Log Page: Supported 00:35:53.814 Get Log Page Extended Data: Supported 00:35:53.814 Telemetry Log Pages: Not Supported 00:35:53.814 Persistent Event Log Pages: Not Supported 00:35:53.814 Supported Log Pages Log Page: May Support 00:35:53.814 Commands Supported & Effects Log Page: Not Supported 00:35:53.814 Feature Identifiers & Effects Log Page:May Support 00:35:53.814 NVMe-MI Commands & Effects Log Page: May Support 00:35:53.814 Data Area 4 for Telemetry Log: Not Supported 00:35:53.814 Error Log Page Entries Supported: 128 00:35:53.814 Keep Alive: Supported 00:35:53.814 Keep Alive Granularity: 1000 ms 00:35:53.814 00:35:53.814 NVM Command Set Attributes 00:35:53.814 ========================== 00:35:53.814 Submission Queue Entry Size 00:35:53.814 Max: 64 00:35:53.814 Min: 64 00:35:53.814 Completion Queue Entry Size 00:35:53.814 Max: 16 00:35:53.814 Min: 16 00:35:53.814 Number of Namespaces: 1024 00:35:53.814 Compare Command: Not Supported 00:35:53.814 Write Uncorrectable Command: Not Supported 00:35:53.814 Dataset Management Command: Supported 00:35:53.814 Write Zeroes Command: Supported 00:35:53.814 Set Features Save Field: Not Supported 00:35:53.814 Reservations: Not Supported 00:35:53.814 Timestamp: Not Supported 00:35:53.814 Copy: Not Supported 00:35:53.814 Volatile Write Cache: Present 00:35:53.814 Atomic Write Unit (Normal): 1 00:35:53.814 Atomic Write Unit (PFail): 1 00:35:53.814 Atomic Compare & Write Unit: 1 00:35:53.814 Fused Compare & Write: Not Supported 00:35:53.814 Scatter-Gather List 00:35:53.814 SGL Command Set: Supported 00:35:53.814 SGL Keyed: Not Supported 00:35:53.814 SGL Bit Bucket Descriptor: Not Supported 00:35:53.814 SGL Metadata Pointer: Not Supported 00:35:53.814 Oversized SGL: Not Supported 00:35:53.814 SGL Metadata Address: Not Supported 00:35:53.814 SGL Offset: Supported 00:35:53.814 Transport SGL Data Block: Not Supported 00:35:53.814 Replay Protected Memory Block: Not Supported 00:35:53.814 00:35:53.814 Firmware Slot Information 00:35:53.814 ========================= 00:35:53.814 Active slot: 0 00:35:53.814 00:35:53.814 Asymmetric Namespace Access 00:35:53.814 =========================== 00:35:53.814 Change Count : 0 00:35:53.814 Number of ANA Group Descriptors : 1 00:35:53.814 ANA Group Descriptor : 0 00:35:53.814 ANA Group ID : 1 00:35:53.814 Number of NSID Values : 1 00:35:53.814 Change Count : 0 00:35:53.814 ANA State : 1 00:35:53.814 Namespace Identifier : 1 00:35:53.814 00:35:53.814 Commands Supported and Effects 00:35:53.814 ============================== 00:35:53.814 Admin Commands 00:35:53.814 -------------- 00:35:53.814 Get Log Page (02h): Supported 00:35:53.814 Identify (06h): Supported 00:35:53.814 Abort (08h): Supported 00:35:53.814 Set Features (09h): Supported 00:35:53.814 Get Features (0Ah): Supported 00:35:53.814 Asynchronous Event Request (0Ch): Supported 00:35:53.814 Keep Alive (18h): Supported 00:35:53.814 I/O Commands 00:35:53.814 ------------ 00:35:53.814 Flush (00h): Supported 00:35:53.814 Write (01h): Supported LBA-Change 00:35:53.814 Read (02h): Supported 00:35:53.814 Write Zeroes (08h): Supported LBA-Change 00:35:53.814 Dataset Management (09h): Supported 00:35:53.814 00:35:53.814 Error Log 00:35:53.814 ========= 00:35:53.814 Entry: 0 00:35:53.814 Error Count: 0x3 00:35:53.814 Submission Queue Id: 0x0 00:35:53.814 Command Id: 0x5 00:35:53.814 Phase Bit: 0 00:35:53.814 Status Code: 0x2 00:35:53.814 Status Code Type: 0x0 00:35:53.814 Do Not Retry: 1 00:35:53.814 
Error Location: 0x28 00:35:53.814 LBA: 0x0 00:35:53.814 Namespace: 0x0 00:35:53.814 Vendor Log Page: 0x0 00:35:53.814 ----------- 00:35:53.814 Entry: 1 00:35:53.814 Error Count: 0x2 00:35:53.814 Submission Queue Id: 0x0 00:35:53.814 Command Id: 0x5 00:35:53.814 Phase Bit: 0 00:35:53.814 Status Code: 0x2 00:35:53.814 Status Code Type: 0x0 00:35:53.814 Do Not Retry: 1 00:35:53.814 Error Location: 0x28 00:35:53.814 LBA: 0x0 00:35:53.814 Namespace: 0x0 00:35:53.814 Vendor Log Page: 0x0 00:35:53.814 ----------- 00:35:53.814 Entry: 2 00:35:53.814 Error Count: 0x1 00:35:53.814 Submission Queue Id: 0x0 00:35:53.814 Command Id: 0x4 00:35:53.814 Phase Bit: 0 00:35:53.814 Status Code: 0x2 00:35:53.814 Status Code Type: 0x0 00:35:53.814 Do Not Retry: 1 00:35:53.814 Error Location: 0x28 00:35:53.814 LBA: 0x0 00:35:53.814 Namespace: 0x0 00:35:53.814 Vendor Log Page: 0x0 00:35:53.814 00:35:53.814 Number of Queues 00:35:53.814 ================ 00:35:53.814 Number of I/O Submission Queues: 128 00:35:53.814 Number of I/O Completion Queues: 128 00:35:53.814 00:35:53.814 ZNS Specific Controller Data 00:35:53.814 ============================ 00:35:53.814 Zone Append Size Limit: 0 00:35:53.814 00:35:53.814 00:35:53.814 Active Namespaces 00:35:53.814 ================= 00:35:53.814 get_feature(0x05) failed 00:35:53.814 Namespace ID:1 00:35:53.814 Command Set Identifier: NVM (00h) 00:35:53.814 Deallocate: Supported 00:35:53.814 Deallocated/Unwritten Error: Not Supported 00:35:53.814 Deallocated Read Value: Unknown 00:35:53.814 Deallocate in Write Zeroes: Not Supported 00:35:53.814 Deallocated Guard Field: 0xFFFF 00:35:53.814 Flush: Supported 00:35:53.814 Reservation: Not Supported 00:35:53.814 Namespace Sharing Capabilities: Multiple Controllers 00:35:53.814 Size (in LBAs): 3750748848 (1788GiB) 00:35:53.814 Capacity (in LBAs): 3750748848 (1788GiB) 00:35:53.814 Utilization (in LBAs): 3750748848 (1788GiB) 00:35:53.814 UUID: 8b3cc692-aa4b-4482-8b7f-b72e80170632 00:35:53.814 Thin Provisioning: Not Supported 00:35:53.814 Per-NS Atomic Units: Yes 00:35:53.814 Atomic Write Unit (Normal): 8 00:35:53.814 Atomic Write Unit (PFail): 8 00:35:53.814 Preferred Write Granularity: 8 00:35:53.814 Atomic Compare & Write Unit: 8 00:35:53.814 Atomic Boundary Size (Normal): 0 00:35:53.814 Atomic Boundary Size (PFail): 0 00:35:53.814 Atomic Boundary Offset: 0 00:35:53.814 NGUID/EUI64 Never Reused: No 00:35:53.814 ANA group ID: 1 00:35:53.814 Namespace Write Protected: No 00:35:53.814 Number of LBA Formats: 1 00:35:53.814 Current LBA Format: LBA Format #00 00:35:53.814 LBA Format #00: Data Size: 512 Metadata Size: 0 00:35:53.814 00:35:53.814 11:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:35:53.814 11:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:35:53.814 11:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:35:53.814 11:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:53.814 11:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:35:53.814 11:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:53.814 11:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:53.814 rmmod nvme_tcp 00:35:53.814 rmmod nvme_fabrics 00:35:53.814 11:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:53.814 11:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:35:53.814 11:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:35:53.814 11:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:35:53.814 11:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:35:53.814 11:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:35:53.814 11:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:35:53.815 11:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:35:53.815 11:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # iptables-save 00:35:53.815 11:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:35:53.815 11:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # iptables-restore 00:35:53.815 11:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:53.815 11:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:53.815 11:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:53.815 11:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:53.815 11:16:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:56.366 11:16:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:56.366 11:16:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:35:56.366 11:16:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:35:56.366 11:16:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # echo 0 00:35:56.366 11:16:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:56.366 11:16:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:56.366 11:16:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:35:56.366 11:16:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:56.366 11:16:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:35:56.366 11:16:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:35:56.366 11:16:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:59.672 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:35:59.672 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:35:59.672 0000:80:01.4 
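clean_kernel_target above unwinds the configfs tree that configure_kernel_target built before the discover step (mkdir subsystem, namespace, port; a series of echoes; then ln -s to publish the subsystem on the port): disable and unlink first, then remove directories leaf-to-root, then unload the modules. The xtrace printed only the values being echoed, so this sketch supplies the attribute files those writes land in; the attribute names are the stock nvmet configfs layout, assumed rather than shown in the log:

nvmet=/sys/kernel/config/nvmet
nqn=nqn.2016-06.io.spdk:testnqn
# setup, as traced earlier
mkdir "$nvmet/subsystems/$nqn" "$nvmet/subsystems/$nqn/namespaces/1" "$nvmet/ports/1"
echo /dev/nvme0n1 > "$nvmet/subsystems/$nqn/namespaces/1/device_path"
echo 1            > "$nvmet/subsystems/$nqn/namespaces/1/enable"
echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"
echo tcp          > "$nvmet/ports/1/addr_trtype"
echo 4420         > "$nvmet/ports/1/addr_trsvcid"
echo ipv4         > "$nvmet/ports/1/addr_adrfam"
ln -s "$nvmet/subsystems/$nqn" "$nvmet/ports/1/subsystems/"
# teardown, leaf to root, as traced just above
echo 0 > "$nvmet/subsystems/$nqn/namespaces/1/enable"
rm -f  "$nvmet/ports/1/subsystems/$nqn"
rmdir  "$nvmet/subsystems/$nqn/namespaces/1" "$nvmet/ports/1" "$nvmet/subsystems/$nqn"
modprobe -r nvmet_tcp nvmet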
(8086 0b00): ioatdma -> vfio-pci 00:35:59.672 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:35:59.672 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:35:59.672 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:35:59.672 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:35:59.672 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:35:59.672 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:35:59.672 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:35:59.672 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:35:59.672 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:35:59.672 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:35:59.672 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:35:59.672 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:35:59.672 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:35:59.672 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:35:59.932 00:35:59.932 real 0m19.251s 00:35:59.932 user 0m5.279s 00:35:59.932 sys 0m10.920s 00:35:59.932 11:16:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:59.932 11:16:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:35:59.932 ************************************ 00:35:59.932 END TEST nvmf_identify_kernel_target 00:35:59.932 ************************************ 00:35:59.932 11:16:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:35:59.932 11:16:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:35:59.932 11:16:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:59.932 11:16:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.932 ************************************ 00:35:59.932 START TEST nvmf_auth_host 00:35:59.932 ************************************ 00:35:59.932 11:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:36:00.193 * Looking for test storage... 
00:36:00.193 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:00.193 11:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:36:00.193 11:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lcov --version 00:36:00.193 11:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:36:00.193 11:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:36:00.193 11:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:00.193 11:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:00.193 11:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:00.193 11:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:36:00.193 11:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:36:00.193 11:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:36:00.193 11:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:36:00.193 11:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:36:00.193 11:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:36:00.193 11:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:36:00.193 11:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:00.193 11:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:36:00.193 11:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:36:00.193 11:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:00.193 11:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:00.193 11:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:36:00.193 11:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:36:00.193 11:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:00.193 11:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:36:00.193 11:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:36:00.193 11:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:36:00.193 11:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:36:00.193 11:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:00.193 11:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:36:00.193 11:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:36:00.193 11:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:00.193 11:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:00.193 11:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:36:00.193 11:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:00.193 11:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:36:00.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:00.193 --rc genhtml_branch_coverage=1 00:36:00.193 --rc genhtml_function_coverage=1 00:36:00.193 --rc genhtml_legend=1 00:36:00.193 --rc geninfo_all_blocks=1 00:36:00.193 --rc geninfo_unexecuted_blocks=1 00:36:00.193 00:36:00.193 ' 00:36:00.193 11:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:36:00.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:00.193 --rc genhtml_branch_coverage=1 00:36:00.193 --rc genhtml_function_coverage=1 00:36:00.193 --rc genhtml_legend=1 00:36:00.193 --rc geninfo_all_blocks=1 00:36:00.193 --rc geninfo_unexecuted_blocks=1 00:36:00.193 00:36:00.193 ' 00:36:00.193 11:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:36:00.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:00.193 --rc genhtml_branch_coverage=1 00:36:00.193 --rc genhtml_function_coverage=1 00:36:00.193 --rc genhtml_legend=1 00:36:00.193 --rc geninfo_all_blocks=1 00:36:00.193 --rc geninfo_unexecuted_blocks=1 00:36:00.193 00:36:00.193 ' 00:36:00.193 11:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:36:00.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:00.193 --rc genhtml_branch_coverage=1 00:36:00.193 --rc genhtml_function_coverage=1 00:36:00.193 --rc genhtml_legend=1 00:36:00.193 --rc geninfo_all_blocks=1 00:36:00.193 --rc geninfo_unexecuted_blocks=1 00:36:00.193 00:36:00.193 ' 00:36:00.193 11:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:00.193 11:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:36:00.193 11:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:00.193 11:16:20 
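The scripts/common.sh fragment above is a field-wise dotted-version comparison: it splits both version strings on `.`, walks the fields numerically, and here concludes that the installed lcov 1.15 is older than 2, which selects the legacy `--rc lcov_*` option spellings. The same technique as a self-contained helper (function name is illustrative):

version_lt() {   # true when $1 < $2, comparing dot-separated numeric fields
    local IFS=.
    local -a a=($1) b=($2)
    local i
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        ((${a[i]:-0} < ${b[i]:-0})) && return 0
        ((${a[i]:-0} > ${b[i]:-0})) && return 1
    done
    return 1     # equal is not less-than
}
version_lt 1.15 2 && echo "1.15 predates 2.x"    # prints: 1.15 predates 2.x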
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:00.193 11:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:00.193 11:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:00.193 11:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:00.193 11:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:00.193 11:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:00.193 11:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:00.193 11:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:00.193 11:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:00.193 11:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:36:00.193 11:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:36:00.193 11:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:00.193 11:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:00.193 11:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:00.193 11:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:00.193 11:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:00.193 11:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:36:00.193 11:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:00.193 11:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:00.193 11:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:00.193 11:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:00.193 11:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:00.193 11:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:00.193 11:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:36:00.194 11:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:00.194 11:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:36:00.194 11:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:00.194 11:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:00.194 11:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:00.194 11:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:00.194 11:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:00.194 11:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:00.194 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:00.194 11:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:00.194 11:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:00.194 11:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:00.194 11:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:36:00.194 11:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:36:00.194 11:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
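[editor's note] The "[: : integer expression expected" message interleaved above comes from test(1) being handed an empty string where -eq needs a number ('[' '' -eq 1 ']'). A defensive spelling of that check, purely illustrative rather than a patch to nvmf/common.sh:

flag=""                        # e.g. an unset feature toggle
if [ "${flag:-0}" -eq 1 ]; then    # :- substitutes 0 for empty/unset
    echo "feature enabled"
else
    echo "feature disabled"        # empty input now falls through cleanly
fi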
subnqn=nqn.2024-02.io.spdk:cnode0 00:36:00.194 11:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:36:00.194 11:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:00.194 11:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:36:00.194 11:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:36:00.194 11:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:36:00.194 11:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:36:00.194 11:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:36:00.194 11:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:00.194 11:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # prepare_net_devs 00:36:00.194 11:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@436 -- # local -g is_hw=no 00:36:00.194 11:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # remove_spdk_ns 00:36:00.194 11:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:00.194 11:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:00.194 11:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:00.194 11:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:36:00.194 11:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:36:00.194 11:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:36:00.194 11:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.332 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:08.332 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:36:08.332 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:08.332 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:08.332 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:08.333 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:08.333 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:08.333 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:36:08.333 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:08.333 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:36:08.333 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:36:08.333 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:36:08.333 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:36:08.333 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:36:08.333 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:36:08.333 11:16:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:08.333 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:08.333 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:08.333 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:08.333 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:08.333 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:08.333 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:08.333 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:08.333 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:08.333 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:08.333 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:08.333 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:08.333 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:08.333 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:08.333 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:08.333 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:08.333 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:08.333 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:08.333 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:08.333 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:36:08.333 Found 0000:31:00.0 (0x8086 - 0x159b) 00:36:08.333 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:08.333 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:08.333 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:08.333 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:08.333 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:08.333 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:08.333 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:36:08.333 Found 0000:31:00.1 (0x8086 - 0x159b) 00:36:08.333 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:08.333 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:08.333 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:08.333 
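[editor's note] A sketch of the vendor:device scan that gather_supported_nvmf_pci_devs performs above: walk the PCI functions and match Intel E810 (0x8086:0x159b), the ID the log reports for 0000:31:00.0 and .1. The sysfs walk is an assumed equivalent of the script's pci_bus_cache lookup:

intel=0x8086; e810_dev=0x159b
for pci in /sys/bus/pci/devices/*; do
    vendor=$(cat "$pci/vendor") device=$(cat "$pci/device")
    if [[ $vendor == "$intel" && $device == "$e810_dev" ]]; then
        echo "Found ${pci##*/} ($vendor - $device)"
    fi
done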
11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:08.333 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:08.333 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:08.333 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:08.333 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:08.333 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:36:08.333 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:08.333 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:36:08.333 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:08.333 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:36:08.333 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:36:08.333 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:08.333 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:36:08.333 Found net devices under 0000:31:00.0: cvl_0_0 00:36:08.333 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:36:08.333 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:36:08.333 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:08.333 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:36:08.333 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:08.333 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:36:08.333 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:36:08.333 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:08.333 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:36:08.333 Found net devices under 0000:31:00.1: cvl_0_1 00:36:08.333 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:36:08.333 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:36:08.333 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # is_hw=yes 00:36:08.333 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:36:08.333 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:36:08.333 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:36:08.333 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:08.333 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:08.333 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:08.333 11:16:27 
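[editor's note] How "Found net devices under 0000:31:00.0: cvl_0_0" falls out of the trace above: each PCI function exposes its network interfaces under .../net/, and the basename of each entry is the interface name (device path taken from the log):

pci=0000:31:00.0
for dev in /sys/bus/pci/devices/$pci/net/*; do
    [[ -e $dev ]] || continue
    echo "Found net devices under $pci: ${dev##*/}"
done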
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:08.333 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:08.333 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:08.333 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:08.333 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:08.333 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:08.333 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:08.333 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:08.333 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:08.333 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:08.333 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:08.333 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:08.333 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:08.333 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:08.333 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:08.333 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:08.333 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:08.333 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:08.333 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:08.333 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:08.333 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:08.333 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.701 ms 00:36:08.333 00:36:08.333 --- 10.0.0.2 ping statistics --- 00:36:08.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:08.333 rtt min/avg/max/mdev = 0.701/0.701/0.701/0.000 ms 00:36:08.333 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:08.333 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
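[editor's note] The topology nvmf_tcp_init builds above, condensed into one block (all commands are the ones traced): the target NIC moves into a private namespace with 10.0.0.2, the initiator NIC stays in the root namespace with 10.0.0.1, and TCP port 4420 is opened through the firewall:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2    # initiator -> target, as the log's ping shows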
00:36:08.333 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms 00:36:08.333 00:36:08.333 --- 10.0.0.1 ping statistics --- 00:36:08.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:08.333 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:36:08.333 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:08.333 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # return 0 00:36:08.333 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:36:08.333 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:08.333 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:36:08.333 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:36:08.333 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:08.333 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:36:08.333 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:36:08.333 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:36:08.333 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:36:08.333 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:08.334 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.334 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # nvmfpid=2096252 00:36:08.334 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # waitforlisten 2096252 00:36:08.334 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:36:08.334 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 2096252 ']' 00:36:08.334 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:08.334 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:08.334 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
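[editor's note] nvmfappstart, spelled out: run nvmf_tgt inside the target namespace and wait for the RPC socket before issuing rpc.py calls. The polling loop is a sketch of what waitforlisten accomplishes, not its exact code, and the binary path is shown relative to the SPDK tree:

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &
nvmfpid=$!
for ((i = 0; i < 100; i++)); do
    [[ -S /var/tmp/spdk.sock ]] && break   # RPC socket is up
    sleep 0.1
done
echo "nvmf_tgt up as pid $nvmfpid"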
00:36:08.334 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:08.334 11:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.594 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:08.594 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:36:08.594 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:36:08.594 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:08.594 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.594 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:08.594 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:36:08.594 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:36:08.594 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:36:08.594 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:08.594 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:36:08.594 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:36:08.594 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:36:08.594 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:36:08.594 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=7cf2ab68a2d6483b81d78b0df3dea473 00:36:08.594 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:36:08.594 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.Db7 00:36:08.594 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 7cf2ab68a2d6483b81d78b0df3dea473 0 00:36:08.594 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 7cf2ab68a2d6483b81d78b0df3dea473 0 00:36:08.594 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:36:08.594 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:36:08.594 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=7cf2ab68a2d6483b81d78b0df3dea473 00:36:08.594 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:36:08.594 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:36:08.594 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.Db7 00:36:08.594 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.Db7 00:36:08.594 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.Db7 00:36:08.594 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:36:08.594 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:36:08.595 11:16:28 
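[editor's note] A sketch of what gen_dhchap_key plus format_dhchap_key do above: pull N random bytes as hex, then wrap them in the DHHC-1 secret format. Judging by the keys visible later in this log, the base64 payload is the ASCII key followed by its CRC32 as 4 little-endian bytes, and the two-hex-digit field is the digest ID (null=0, sha256=1, sha384=2, sha512=3); treat that layout as inferred from the output, not quoted from the script:

key=$(xxd -p -c0 -l 16 /dev/urandom)     # 32 hex chars, as for "null 32"
digest=0                                  # 0 = no hash transformation
file=$(mktemp -t spdk.key-null.XXX)
python3 - "$key" "$digest" > "$file" <<'EOF'
import base64, sys, zlib
key = sys.argv[1].encode()
# Inferred payload layout: key bytes || crc32(key) as 4 LE bytes.
payload = key + zlib.crc32(key).to_bytes(4, "little")
print(f"DHHC-1:{int(sys.argv[2]):02x}:{base64.b64encode(payload).decode()}:")
EOF
chmod 0600 "$file"
cat "$file"                               # DHHC-1:00:...==: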
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:08.595 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:36:08.595 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha512 00:36:08.595 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=64 00:36:08.595 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:36:08.595 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=6c62537760bb1c9d7dae06c1f02b38b99c3dacb2c45b95cbd03438873c282dfb 00:36:08.595 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:36:08.595 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.zKP 00:36:08.595 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 6c62537760bb1c9d7dae06c1f02b38b99c3dacb2c45b95cbd03438873c282dfb 3 00:36:08.595 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 6c62537760bb1c9d7dae06c1f02b38b99c3dacb2c45b95cbd03438873c282dfb 3 00:36:08.595 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:36:08.595 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:36:08.595 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=6c62537760bb1c9d7dae06c1f02b38b99c3dacb2c45b95cbd03438873c282dfb 00:36:08.595 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=3 00:36:08.595 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:36:08.595 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.zKP 00:36:08.595 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.zKP 00:36:08.595 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.zKP 00:36:08.595 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:36:08.595 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:36:08.595 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:08.595 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:36:08.595 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:36:08.595 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:36:08.595 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:36:08.595 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=c1cb3bb9cfde904609a87889afbc9471910956eb8ccab5fc 00:36:08.595 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:36:08.595 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.NDm 00:36:08.595 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key c1cb3bb9cfde904609a87889afbc9471910956eb8ccab5fc 0 00:36:08.595 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 c1cb3bb9cfde904609a87889afbc9471910956eb8ccab5fc 0 
00:36:08.595 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:36:08.595 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:36:08.595 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=c1cb3bb9cfde904609a87889afbc9471910956eb8ccab5fc 00:36:08.595 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:36:08.595 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:36:08.595 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.NDm 00:36:08.595 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.NDm 00:36:08.595 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.NDm 00:36:08.595 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:36:08.595 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:36:08.595 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:08.595 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:36:08.595 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha384 00:36:08.595 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:36:08.595 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:36:08.595 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=3fbb6289d9d125ac79f422e4813678c158cdda4f74d108b0 00:36:08.595 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:36:08.856 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.zsR 00:36:08.856 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 3fbb6289d9d125ac79f422e4813678c158cdda4f74d108b0 2 00:36:08.856 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 3fbb6289d9d125ac79f422e4813678c158cdda4f74d108b0 2 00:36:08.856 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:36:08.856 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:36:08.856 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=3fbb6289d9d125ac79f422e4813678c158cdda4f74d108b0 00:36:08.856 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=2 00:36:08.856 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:36:08.856 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.zsR 00:36:08.856 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.zsR 00:36:08.856 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.zsR 00:36:08.856 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:36:08.856 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:36:08.856 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:08.856 11:16:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:36:08.856 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha256 00:36:08.856 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:36:08.856 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:36:08.856 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=2f5a79c00d59d8be00b25cdcc79948ef 00:36:08.856 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:36:08.856 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.7KU 00:36:08.856 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 2f5a79c00d59d8be00b25cdcc79948ef 1 00:36:08.856 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 2f5a79c00d59d8be00b25cdcc79948ef 1 00:36:08.856 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:36:08.856 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:36:08.856 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=2f5a79c00d59d8be00b25cdcc79948ef 00:36:08.856 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=1 00:36:08.856 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:36:08.856 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.7KU 00:36:08.856 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.7KU 00:36:08.856 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.7KU 00:36:08.856 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:36:08.856 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:36:08.856 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:08.856 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:36:08.856 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha256 00:36:08.856 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:36:08.856 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:36:08.856 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=7f7c2b0bd5ffee78ae766c1098357085 00:36:08.856 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:36:08.856 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.vo7 00:36:08.856 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 7f7c2b0bd5ffee78ae766c1098357085 1 00:36:08.856 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 7f7c2b0bd5ffee78ae766c1098357085 1 00:36:08.856 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:36:08.856 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:36:08.856 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # 
key=7f7c2b0bd5ffee78ae766c1098357085 00:36:08.856 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=1 00:36:08.856 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:36:08.856 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.vo7 00:36:08.856 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.vo7 00:36:08.856 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.vo7 00:36:08.856 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:36:08.856 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:36:08.856 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:08.856 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:36:08.856 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha384 00:36:08.856 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:36:08.856 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:36:08.856 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=28921f3155960f0c305136193a02cd81aefc529fb8e23a50 00:36:08.857 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:36:08.857 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.xtv 00:36:08.857 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 28921f3155960f0c305136193a02cd81aefc529fb8e23a50 2 00:36:08.857 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 28921f3155960f0c305136193a02cd81aefc529fb8e23a50 2 00:36:08.857 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:36:08.857 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:36:08.857 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=28921f3155960f0c305136193a02cd81aefc529fb8e23a50 00:36:08.857 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=2 00:36:08.857 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:36:08.857 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.xtv 00:36:08.857 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.xtv 00:36:08.857 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.xtv 00:36:08.857 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:36:08.857 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:36:08.857 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:08.857 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:36:08.857 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:36:08.857 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:36:08.857 11:16:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:36:08.857 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=d5a49680f7f05762cdb5ae7494f3f98b 00:36:08.857 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:36:08.857 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.zr1 00:36:08.857 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key d5a49680f7f05762cdb5ae7494f3f98b 0 00:36:08.857 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 d5a49680f7f05762cdb5ae7494f3f98b 0 00:36:08.857 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:36:08.857 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:36:08.857 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=d5a49680f7f05762cdb5ae7494f3f98b 00:36:08.857 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:36:09.117 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:36:09.117 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.zr1 00:36:09.117 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.zr1 00:36:09.117 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.zr1 00:36:09.117 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:36:09.117 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:36:09.117 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:09.117 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:36:09.117 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha512 00:36:09.117 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=64 00:36:09.117 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:36:09.117 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=85f3d6d82198bea3d0a3fc5dc601af78e321fc91f59b54bb3b1070bbf9a62bb9 00:36:09.117 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:36:09.117 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.gcE 00:36:09.117 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 85f3d6d82198bea3d0a3fc5dc601af78e321fc91f59b54bb3b1070bbf9a62bb9 3 00:36:09.117 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 85f3d6d82198bea3d0a3fc5dc601af78e321fc91f59b54bb3b1070bbf9a62bb9 3 00:36:09.117 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:36:09.117 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:36:09.117 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=85f3d6d82198bea3d0a3fc5dc601af78e321fc91f59b54bb3b1070bbf9a62bb9 00:36:09.117 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=3 00:36:09.117 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@731 -- # python - 00:36:09.117 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.gcE 00:36:09.117 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.gcE 00:36:09.117 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.gcE 00:36:09.117 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:36:09.117 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2096252 00:36:09.117 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 2096252 ']' 00:36:09.117 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:09.117 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:09.117 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:09.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:09.117 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:09.117 11:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.378 11:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:09.378 11:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:36:09.378 11:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:36:09.378 11:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Db7 00:36:09.378 11:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:09.378 11:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.378 11:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:09.378 11:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.zKP ]] 00:36:09.378 11:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.zKP 00:36:09.378 11:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:09.378 11:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.378 11:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:09.378 11:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:36:09.378 11:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.NDm 00:36:09.378 11:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:09.378 11:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.378 11:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:09.378 11:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.zsR ]] 00:36:09.378 11:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.zsR 00:36:09.378 11:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:09.378 11:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.378 11:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:09.378 11:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:36:09.378 11:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.7KU 00:36:09.378 11:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:09.378 11:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.378 11:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:09.378 11:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.vo7 ]] 00:36:09.378 11:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.vo7 00:36:09.378 11:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:09.378 11:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.378 11:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:09.378 11:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:36:09.378 11:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.xtv 00:36:09.378 11:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:09.378 11:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.378 11:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:09.378 11:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.zr1 ]] 00:36:09.378 11:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.zr1 00:36:09.378 11:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:09.378 11:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.378 11:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:09.378 11:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:36:09.378 11:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.gcE 00:36:09.378 11:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:09.378 11:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.378 11:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:09.378 11:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:36:09.378 11:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:36:09.378 11:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:36:09.378 11:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:09.378 11:16:29 
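[editor's note] The key-registration loop traced above, condensed: every generated secret file is handed to the running target through the keyring_file_add_key RPC, and ckeyN is registered only for slots that got a controller key (slot 4 has none, so it is skipped). rpc.py is invoked here relative to the SPDK tree, as rpc_cmd does against /var/tmp/spdk.sock:

for i in "${!keys[@]}"; do
    scripts/rpc.py keyring_file_add_key "key$i" "${keys[i]}"
    [[ -n ${ckeys[i]} ]] &&
        scripts/rpc.py keyring_file_add_key "ckey$i" "${ckeys[i]}"
done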
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:09.378 11:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:09.378 11:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:09.378 11:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:09.378 11:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:09.378 11:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:09.378 11:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:09.378 11:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:09.378 11:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:09.378 11:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:36:09.379 11:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:36:09.379 11:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:36:09.379 11:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:09.379 11:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:36:09.379 11:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:36:09.379 11:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # local block nvme 00:36:09.379 11:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # [[ ! 
-e /sys/module/nvmet ]] 00:36:09.379 11:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # modprobe nvmet 00:36:09.379 11:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:36:09.379 11:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:12.675 Waiting for block devices as requested 00:36:12.675 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:36:12.675 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:36:12.936 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:36:12.936 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:36:12.936 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:36:12.936 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:36:13.196 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:36:13.196 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:36:13.196 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:36:13.457 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:36:13.457 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:36:13.457 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:36:13.716 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:36:13.716 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:36:13.716 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:36:13.977 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:36:13.977 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:36:14.918 11:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:36:14.918 11:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:36:14.918 11:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:36:14.919 11:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:36:14.919 11:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:36:14.919 11:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:36:14.919 11:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:36:14.919 11:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:36:14.919 11:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:36:14.919 No valid GPT data, bailing 00:36:14.919 11:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:36:14.919 11:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:36:14.919 11:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:36:14.919 11:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:36:14.919 11:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]] 00:36:14.919 11:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:14.919 11:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:36:14.919 11:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:36:14.919 11:16:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:36:14.919 11:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo 1 00:36:14.919 11:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@694 -- # echo /dev/nvme0n1 00:36:14.919 11:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:36:14.919 11:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 10.0.0.1 00:36:14.919 11:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # echo tcp 00:36:14.919 11:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 4420 00:36:14.919 11:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo ipv4 00:36:14.919 11:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:36:14.919 11:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:36:14.919 00:36:14.919 Discovery Log Number of Records 2, Generation counter 2 00:36:14.919 =====Discovery Log Entry 0====== 00:36:14.919 trtype: tcp 00:36:14.919 adrfam: ipv4 00:36:14.919 subtype: current discovery subsystem 00:36:14.919 treq: not specified, sq flow control disable supported 00:36:14.919 portid: 1 00:36:14.919 trsvcid: 4420 00:36:14.919 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:36:14.919 traddr: 10.0.0.1 00:36:14.919 eflags: none 00:36:14.919 sectype: none 00:36:14.919 =====Discovery Log Entry 1====== 00:36:14.919 trtype: tcp 00:36:14.919 adrfam: ipv4 00:36:14.919 subtype: nvme subsystem 00:36:14.919 treq: not specified, sq flow control disable supported 00:36:14.919 portid: 1 00:36:14.919 trsvcid: 4420 00:36:14.919 subnqn: nqn.2024-02.io.spdk:cnode0 00:36:14.919 traddr: 10.0.0.1 00:36:14.919 eflags: none 00:36:14.919 sectype: none 00:36:14.919 11:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:36:14.919 11:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:36:14.919 11:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:36:14.919 11:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:36:14.919 11:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:14.919 11:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:14.919 11:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:14.919 11:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:14.919 11:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFjYjNiYjljZmRlOTA0NjA5YTg3ODg5YWZiYzk0NzE5MTA5NTZlYjhjY2FiNWZjBjz4Rg==: 00:36:14.919 11:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2ZiYjYyODlkOWQxMjVhYzc5ZjQyMmU0ODEzNjc4YzE1OGNkZGE0Zjc0ZDEwOGIwD6TZVg==: 00:36:14.919 11:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:14.919 11:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host 
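[editor's note] The configure_kernel_target sequence above as one block, with values taken from the log: build the subsystem, back namespace 1 with the local NVMe disk, open a TCP port, and link the two; afterwards host/auth.sh flips allow_any_host off and links host0 into allowed_hosts. The attribute file names are the standard kernel nvmet configfs ones; the trace itself shows only the values echoed:

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_serial"
echo 1 > "$subsys/attr_allow_any_host"         # opened first...
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1 > "$subsys/namespaces/1/enable"
echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
echo tcp      > "$nvmet/ports/1/addr_trtype"
echo 4420     > "$nvmet/ports/1/addr_trsvcid"
echo ipv4     > "$nvmet/ports/1/addr_adrfam"
ln -s "$subsys" "$nvmet/ports/1/subsystems/"
mkdir "$nvmet/hosts/nqn.2024-02.io.spdk:host0"
echo 0 > "$subsys/attr_allow_any_host"         # ...then restricted to host0
ln -s "$nvmet/hosts/nqn.2024-02.io.spdk:host0" "$subsys/allowed_hosts/"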
-- host/auth.sh@49 -- # echo ffdhe2048 00:36:14.919 11:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFjYjNiYjljZmRlOTA0NjA5YTg3ODg5YWZiYzk0NzE5MTA5NTZlYjhjY2FiNWZjBjz4Rg==: 00:36:14.919 11:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2ZiYjYyODlkOWQxMjVhYzc5ZjQyMmU0ODEzNjc4YzE1OGNkZGE0Zjc0ZDEwOGIwD6TZVg==: ]] 00:36:14.919 11:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2ZiYjYyODlkOWQxMjVhYzc5ZjQyMmU0ODEzNjc4YzE1OGNkZGE0Zjc0ZDEwOGIwD6TZVg==: 00:36:14.919 11:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:36:14.919 11:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:36:14.919 11:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:36:14.919 11:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:36:14.919 11:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:36:14.919 11:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:14.919 11:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:36:14.919 11:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:36:14.919 11:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:14.919 11:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:14.919 11:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:36:14.919 11:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:14.919 11:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.919 11:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:14.919 11:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:14.919 11:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:14.919 11:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:14.919 11:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:14.919 11:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:14.919 11:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:14.919 11:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:14.919 11:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:14.919 11:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:14.919 11:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:14.919 11:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:14.919 11:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:14.919 11:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:14.919 11:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.180 nvme0n1 00:36:15.180 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:15.180 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:15.180 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:15.180 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:15.180 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.180 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:15.180 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:15.180 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:15.180 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:15.180 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.180 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:15.180 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:36:15.180 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:15.180 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:15.180 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:36:15.180 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:15.180 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:15.180 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:15.180 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:15.180 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2NmMmFiNjhhMmQ2NDgzYjgxZDc4YjBkZjNkZWE0NzOQT+eg: 00:36:15.180 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmM2MjUzNzc2MGJiMWM5ZDdkYWUwNmMxZjAyYjM4Yjk5YzNkYWNiMmM0NWI5NWNiZDAzNDM4ODczYzI4MmRmYqlnfp0=: 00:36:15.180 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:15.180 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:15.180 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2NmMmFiNjhhMmQ2NDgzYjgxZDc4YjBkZjNkZWE0NzOQT+eg: 00:36:15.180 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmM2MjUzNzc2MGJiMWM5ZDdkYWUwNmMxZjAyYjM4Yjk5YzNkYWNiMmM0NWI5NWNiZDAzNDM4ODczYzI4MmRmYqlnfp0=: ]] 00:36:15.180 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmM2MjUzNzc2MGJiMWM5ZDdkYWUwNmMxZjAyYjM4Yjk5YzNkYWNiMmM0NWI5NWNiZDAzNDM4ODczYzI4MmRmYqlnfp0=: 00:36:15.180 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
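
The trace up to this point is the one-time target-side setup: it provisions a kernel nvmet subsystem and TCP port over configfs, restricts the subsystem to the test host NQN, installs that host's DHCHAP secret, and performs a first authenticated attach with every digest and DH group enabled (host/auth.sh@88-93) before the per-combination sweep begins. The xtrace lines show only the echoed values, not the attribute files they are redirected into, so the sketch below reconstructs the likely targets from the standard Linux nvmet configfs layout; treat the attribute file names as assumptions, not a transcript of the script.

#!/usr/bin/env bash
# Target-side sketch of nvmf/common.sh@691-703 and host/auth.sh@36-51.
# Attribute file names are assumed from the Linux nvmet configfs layout.
cfs=/sys/kernel/config/nvmet
subnqn=nqn.2024-02.io.spdk:cnode0
hostnqn=nqn.2024-02.io.spdk:host0

mkdir -p "$cfs/subsystems/$subnqn/namespaces/1" "$cfs/ports/1" "$cfs/hosts/$hostnqn"
echo "SPDK-$subnqn" > "$cfs/subsystems/$subnqn/attr_model"          # @691 (assumed target file)
echo 1              > "$cfs/subsystems/$subnqn/attr_allow_any_host" # @693 (assumed target file)
echo /dev/nvme0n1   > "$cfs/subsystems/$subnqn/namespaces/1/device_path" # @694
echo 1              > "$cfs/subsystems/$subnqn/namespaces/1/enable"      # @695

echo 10.0.0.1 > "$cfs/ports/1/addr_traddr"   # @697
echo tcp      > "$cfs/ports/1/addr_trtype"   # @698
echo 4420     > "$cfs/ports/1/addr_trsvcid"  # @699
echo ipv4     > "$cfs/ports/1/addr_adrfam"   # @700
ln -s "$cfs/subsystems/$subnqn" "$cfs/ports/1/subsystems/"  # @703: expose cnode0 on the port

# auth.sh@36-38: switch to an explicit allow-list containing only host0.
echo 0 > "$cfs/subsystems/$subnqn/attr_allow_any_host"
ln -s "$cfs/hosts/$hostnqn" "$cfs/subsystems/$subnqn/allowed_hosts/"

# auth.sh@42-51 (nvmet_auth_set_key sha256 ffdhe2048 1): install the host's
# DHCHAP parameters. Secrets elided here; the full values appear in the trace.
echo 'hmac(sha256)'  > "$cfs/hosts/$hostnqn/dhchap_hash"
echo ffdhe2048       > "$cfs/hosts/$hostnqn/dhchap_dhgroup"
echo 'DHHC-1:00:...' > "$cfs/hosts/$hostnqn/dhchap_key"      # key1
echo 'DHHC-1:02:...' > "$cfs/hosts/$hostnqn/dhchap_ctrl_key" # ckey1 (bidirectional auth)

With the subsystem linked to the port, the nvme discover call above duly reports two records: the discovery subsystem itself and nqn.2024-02.io.spdk:cnode0.
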
00:36:15.180 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:15.180 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:15.180 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:15.180 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:15.180 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:15.180 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:15.180 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:15.180 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.180 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:15.180 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:15.180 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:15.180 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:15.180 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:15.180 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:15.180 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:15.180 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:15.180 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:15.180 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:15.180 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:15.180 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:15.180 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:15.180 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:15.180 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.441 nvme0n1 00:36:15.441 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:15.441 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:15.441 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:15.441 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:15.441 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.441 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:15.441 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:15.441 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:15.441 11:16:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:15.441 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.441 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:15.441 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:15.441 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:36:15.441 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:15.441 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:15.441 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:15.441 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:15.441 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFjYjNiYjljZmRlOTA0NjA5YTg3ODg5YWZiYzk0NzE5MTA5NTZlYjhjY2FiNWZjBjz4Rg==: 00:36:15.441 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2ZiYjYyODlkOWQxMjVhYzc5ZjQyMmU0ODEzNjc4YzE1OGNkZGE0Zjc0ZDEwOGIwD6TZVg==: 00:36:15.441 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:15.441 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:15.441 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFjYjNiYjljZmRlOTA0NjA5YTg3ODg5YWZiYzk0NzE5MTA5NTZlYjhjY2FiNWZjBjz4Rg==: 00:36:15.441 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2ZiYjYyODlkOWQxMjVhYzc5ZjQyMmU0ODEzNjc4YzE1OGNkZGE0Zjc0ZDEwOGIwD6TZVg==: ]] 00:36:15.441 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2ZiYjYyODlkOWQxMjVhYzc5ZjQyMmU0ODEzNjc4YzE1OGNkZGE0Zjc0ZDEwOGIwD6TZVg==: 00:36:15.441 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:36:15.441 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:15.441 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:15.441 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:15.441 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:15.441 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:15.441 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:15.441 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:15.441 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.441 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:15.441 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:15.441 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:15.441 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:15.441 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:15.441 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:15.441 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:15.441 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:15.441 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:15.441 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:15.441 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:15.441 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:15.441 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:15.441 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:15.441 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.703 nvme0n1 00:36:15.703 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:15.703 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:15.703 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:15.703 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:15.703 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.703 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:15.703 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:15.703 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:15.703 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:15.703 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.703 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:15.703 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:15.703 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:36:15.703 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:15.703 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:15.703 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:15.703 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:15.703 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmY1YTc5YzAwZDU5ZDhiZTAwYjI1Y2RjYzc5OTQ4ZWYgFHKp: 00:36:15.703 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2Y3YzJiMGJkNWZmZWU3OGFlNzY2YzEwOTgzNTcwODXAcEq8: 00:36:15.703 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:15.703 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:15.703 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:MmY1YTc5YzAwZDU5ZDhiZTAwYjI1Y2RjYzc5OTQ4ZWYgFHKp: 00:36:15.703 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2Y3YzJiMGJkNWZmZWU3OGFlNzY2YzEwOTgzNTcwODXAcEq8: ]] 00:36:15.703 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2Y3YzJiMGJkNWZmZWU3OGFlNzY2YzEwOTgzNTcwODXAcEq8: 00:36:15.703 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:36:15.703 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:15.703 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:15.703 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:15.703 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:15.703 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:15.703 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:15.703 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:15.703 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.703 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:15.703 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:15.703 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:15.703 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:15.703 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:15.703 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:15.703 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:15.703 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:15.703 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:15.703 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:15.703 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:15.703 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:15.703 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:15.703 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:15.703 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.971 nvme0n1 00:36:15.971 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:15.971 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:15.971 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:15.972 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:36:15.972 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.972 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:15.972 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:15.972 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:15.972 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:15.972 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.972 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:15.972 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:15.972 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:36:15.972 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:15.972 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:15.972 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:15.972 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:15.972 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Mjg5MjFmMzE1NTk2MGYwYzMwNTEzNjE5M2EwMmNkODFhZWZjNTI5ZmI4ZTIzYTUwOGPg0A==: 00:36:15.972 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDVhNDk2ODBmN2YwNTc2MmNkYjVhZTc0OTRmM2Y5OGLvIL+q: 00:36:15.972 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:15.972 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:15.972 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Mjg5MjFmMzE1NTk2MGYwYzMwNTEzNjE5M2EwMmNkODFhZWZjNTI5ZmI4ZTIzYTUwOGPg0A==: 00:36:15.972 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDVhNDk2ODBmN2YwNTc2MmNkYjVhZTc0OTRmM2Y5OGLvIL+q: ]] 00:36:15.972 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDVhNDk2ODBmN2YwNTc2MmNkYjVhZTc0OTRmM2Y5OGLvIL+q: 00:36:15.972 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:36:15.972 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:15.972 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:15.972 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:15.972 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:15.972 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:15.972 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:15.972 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:15.972 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.972 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:15.972 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:36:15.972 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:15.973 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:15.973 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:15.973 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:15.973 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:15.973 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:15.973 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:15.973 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:15.973 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:15.973 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:15.973 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:15.973 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:15.973 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.236 nvme0n1 00:36:16.236 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:16.236 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:16.236 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:16.236 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:16.236 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.236 11:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:16.236 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:16.236 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:16.236 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:16.236 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.236 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:16.236 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:16.236 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:36:16.236 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:16.236 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:16.236 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:16.236 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:16.236 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ODVmM2Q2ZDgyMTk4YmVhM2QwYTNmYzVkYzYwMWFmNzhlMzIxZmM5MWY1OWI1NGJiM2IxMDcwYmJmOWE2MmJiOdfdwAE=: 00:36:16.236 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:16.236 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:16.236 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:16.236 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODVmM2Q2ZDgyMTk4YmVhM2QwYTNmYzVkYzYwMWFmNzhlMzIxZmM5MWY1OWI1NGJiM2IxMDcwYmJmOWE2MmJiOdfdwAE=: 00:36:16.236 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:16.236 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:36:16.236 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:16.236 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:16.236 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:16.236 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:16.236 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:16.236 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:16.236 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:16.236 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.236 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:16.236 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:16.236 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:16.236 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:16.236 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:16.236 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:16.236 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:16.236 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:16.236 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:16.236 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:16.236 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:16.236 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:16.236 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:16.236 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:16.236 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.236 nvme0n1 00:36:16.236 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:16.236 11:16:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:16.236 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:16.236 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:16.236 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.236 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:16.497 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:16.497 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:16.497 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:16.497 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.497 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:16.497 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:16.497 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:16.497 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:36:16.497 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:16.497 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:16.497 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:16.497 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:16.497 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2NmMmFiNjhhMmQ2NDgzYjgxZDc4YjBkZjNkZWE0NzOQT+eg: 00:36:16.497 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmM2MjUzNzc2MGJiMWM5ZDdkYWUwNmMxZjAyYjM4Yjk5YzNkYWNiMmM0NWI5NWNiZDAzNDM4ODczYzI4MmRmYqlnfp0=: 00:36:16.497 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:16.497 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:16.497 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2NmMmFiNjhhMmQ2NDgzYjgxZDc4YjBkZjNkZWE0NzOQT+eg: 00:36:16.497 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmM2MjUzNzc2MGJiMWM5ZDdkYWUwNmMxZjAyYjM4Yjk5YzNkYWNiMmM0NWI5NWNiZDAzNDM4ODczYzI4MmRmYqlnfp0=: ]] 00:36:16.497 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmM2MjUzNzc2MGJiMWM5ZDdkYWUwNmMxZjAyYjM4Yjk5YzNkYWNiMmM0NWI5NWNiZDAzNDM4ODczYzI4MmRmYqlnfp0=: 00:36:16.497 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:36:16.497 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:16.497 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:16.497 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:16.497 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:16.497 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:16.497 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:36:16.497 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:16.497 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.497 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:16.497 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:16.497 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:16.497 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:16.497 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:16.497 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:16.497 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:16.497 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:16.497 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:16.497 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:16.497 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:16.497 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:16.497 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:16.497 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:16.497 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.497 nvme0n1 00:36:16.497 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:16.497 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:16.497 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:16.497 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:16.497 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.497 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:16.783 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:16.783 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:16.783 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:16.783 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.783 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:16.783 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:16.783 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:36:16.783 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:36:16.783 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:16.783 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:16.783 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:16.783 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFjYjNiYjljZmRlOTA0NjA5YTg3ODg5YWZiYzk0NzE5MTA5NTZlYjhjY2FiNWZjBjz4Rg==: 00:36:16.783 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2ZiYjYyODlkOWQxMjVhYzc5ZjQyMmU0ODEzNjc4YzE1OGNkZGE0Zjc0ZDEwOGIwD6TZVg==: 00:36:16.783 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:16.783 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:16.783 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFjYjNiYjljZmRlOTA0NjA5YTg3ODg5YWZiYzk0NzE5MTA5NTZlYjhjY2FiNWZjBjz4Rg==: 00:36:16.783 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2ZiYjYyODlkOWQxMjVhYzc5ZjQyMmU0ODEzNjc4YzE1OGNkZGE0Zjc0ZDEwOGIwD6TZVg==: ]] 00:36:16.783 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2ZiYjYyODlkOWQxMjVhYzc5ZjQyMmU0ODEzNjc4YzE1OGNkZGE0Zjc0ZDEwOGIwD6TZVg==: 00:36:16.783 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:36:16.783 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:16.783 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:16.783 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:16.783 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:16.783 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:16.783 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:36:16.783 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:16.783 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.783 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:16.783 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:16.783 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:16.783 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:16.783 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:16.783 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:16.783 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:16.783 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:16.783 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:16.783 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:16.783 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:16.783 
11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:16.783 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:16.783 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:16.783 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.783 nvme0n1 00:36:16.783 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:16.783 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:16.783 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:16.783 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:16.783 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.783 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:17.138 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:17.138 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:17.138 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:17.138 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.138 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:17.138 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:17.138 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:36:17.138 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:17.138 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:17.138 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:17.138 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:17.138 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmY1YTc5YzAwZDU5ZDhiZTAwYjI1Y2RjYzc5OTQ4ZWYgFHKp: 00:36:17.138 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2Y3YzJiMGJkNWZmZWU3OGFlNzY2YzEwOTgzNTcwODXAcEq8: 00:36:17.138 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:17.138 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:17.138 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmY1YTc5YzAwZDU5ZDhiZTAwYjI1Y2RjYzc5OTQ4ZWYgFHKp: 00:36:17.138 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2Y3YzJiMGJkNWZmZWU3OGFlNzY2YzEwOTgzNTcwODXAcEq8: ]] 00:36:17.138 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2Y3YzJiMGJkNWZmZWU3OGFlNzY2YzEwOTgzNTcwODXAcEq8: 00:36:17.138 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:36:17.138 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:17.138 11:16:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:17.138 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:17.138 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:17.138 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:17.138 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:36:17.138 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:17.138 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.138 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:17.138 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:17.138 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:17.138 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:17.138 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:17.138 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:17.138 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:17.138 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:17.138 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:17.138 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:17.138 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:17.138 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:17.138 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:17.138 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:17.138 11:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.138 nvme0n1 00:36:17.138 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:17.138 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:17.138 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:17.138 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:17.138 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.138 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:17.138 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:17.138 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:17.138 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:17.138 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:36:17.138 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:17.138 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:17.138 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:36:17.138 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:17.138 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:17.138 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:17.138 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:17.138 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Mjg5MjFmMzE1NTk2MGYwYzMwNTEzNjE5M2EwMmNkODFhZWZjNTI5ZmI4ZTIzYTUwOGPg0A==: 00:36:17.138 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDVhNDk2ODBmN2YwNTc2MmNkYjVhZTc0OTRmM2Y5OGLvIL+q: 00:36:17.138 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:17.138 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:17.138 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Mjg5MjFmMzE1NTk2MGYwYzMwNTEzNjE5M2EwMmNkODFhZWZjNTI5ZmI4ZTIzYTUwOGPg0A==: 00:36:17.138 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDVhNDk2ODBmN2YwNTc2MmNkYjVhZTc0OTRmM2Y5OGLvIL+q: ]] 00:36:17.138 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDVhNDk2ODBmN2YwNTc2MmNkYjVhZTc0OTRmM2Y5OGLvIL+q: 00:36:17.138 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:36:17.138 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:17.138 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:17.138 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:17.139 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:17.139 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:17.139 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:36:17.139 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:17.139 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.471 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:17.471 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:17.471 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:17.471 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:17.471 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:17.471 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:17.471 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:17.471 11:16:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:17.471 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:17.471 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:17.471 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:17.471 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:17.471 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:17.471 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:17.471 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.471 nvme0n1 00:36:17.471 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:17.471 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:17.471 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:17.471 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:17.471 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.471 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:17.471 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:17.471 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:17.471 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:17.471 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.471 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:17.471 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:17.471 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:36:17.471 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:17.471 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:17.471 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:17.471 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:17.471 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODVmM2Q2ZDgyMTk4YmVhM2QwYTNmYzVkYzYwMWFmNzhlMzIxZmM5MWY1OWI1NGJiM2IxMDcwYmJmOWE2MmJiOdfdwAE=: 00:36:17.471 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:17.471 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:17.471 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:17.471 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODVmM2Q2ZDgyMTk4YmVhM2QwYTNmYzVkYzYwMWFmNzhlMzIxZmM5MWY1OWI1NGJiM2IxMDcwYmJmOWE2MmJiOdfdwAE=: 00:36:17.471 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:17.471 11:16:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:36:17.471 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:17.471 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:17.471 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:17.471 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:17.471 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:17.471 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:36:17.471 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:17.471 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.471 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:17.471 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:17.471 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:17.472 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:17.472 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:17.472 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:17.472 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:17.472 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:17.472 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:17.472 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:17.472 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:17.472 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:17.472 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:17.472 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:17.472 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.733 nvme0n1 00:36:17.733 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:17.733 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:17.733 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:17.733 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:17.733 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.733 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:17.733 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:17.733 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:36:17.733 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:17.733 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.733 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:17.733 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:17.733 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:17.733 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:36:17.733 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:17.733 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:17.733 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:17.733 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:17.733 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2NmMmFiNjhhMmQ2NDgzYjgxZDc4YjBkZjNkZWE0NzOQT+eg: 00:36:17.733 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmM2MjUzNzc2MGJiMWM5ZDdkYWUwNmMxZjAyYjM4Yjk5YzNkYWNiMmM0NWI5NWNiZDAzNDM4ODczYzI4MmRmYqlnfp0=: 00:36:17.733 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:17.733 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:17.733 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2NmMmFiNjhhMmQ2NDgzYjgxZDc4YjBkZjNkZWE0NzOQT+eg: 00:36:17.733 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmM2MjUzNzc2MGJiMWM5ZDdkYWUwNmMxZjAyYjM4Yjk5YzNkYWNiMmM0NWI5NWNiZDAzNDM4ODczYzI4MmRmYqlnfp0=: ]] 00:36:17.733 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmM2MjUzNzc2MGJiMWM5ZDdkYWUwNmMxZjAyYjM4Yjk5YzNkYWNiMmM0NWI5NWNiZDAzNDM4ODczYzI4MmRmYqlnfp0=: 00:36:17.733 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:36:17.733 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:17.733 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:17.733 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:17.733 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:17.733 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:17.733 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:36:17.733 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:17.733 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.733 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:17.733 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:17.733 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:17.733 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates=() 00:36:17.733 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:17.733 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:17.733 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:17.733 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:17.733 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:17.733 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:17.733 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:17.733 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:17.733 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:17.733 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:17.733 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.994 nvme0n1 00:36:17.994 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:17.994 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:17.994 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:17.994 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:17.994 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.994 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:17.994 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:17.994 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:17.994 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:17.994 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.994 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:17.994 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:17.994 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:36:17.994 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:17.994 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:17.994 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:17.994 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:17.994 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFjYjNiYjljZmRlOTA0NjA5YTg3ODg5YWZiYzk0NzE5MTA5NTZlYjhjY2FiNWZjBjz4Rg==: 00:36:17.994 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2ZiYjYyODlkOWQxMjVhYzc5ZjQyMmU0ODEzNjc4YzE1OGNkZGE0Zjc0ZDEwOGIwD6TZVg==: 00:36:17.994 11:16:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:17.994 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:17.994 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFjYjNiYjljZmRlOTA0NjA5YTg3ODg5YWZiYzk0NzE5MTA5NTZlYjhjY2FiNWZjBjz4Rg==: 00:36:17.994 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2ZiYjYyODlkOWQxMjVhYzc5ZjQyMmU0ODEzNjc4YzE1OGNkZGE0Zjc0ZDEwOGIwD6TZVg==: ]] 00:36:17.994 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2ZiYjYyODlkOWQxMjVhYzc5ZjQyMmU0ODEzNjc4YzE1OGNkZGE0Zjc0ZDEwOGIwD6TZVg==: 00:36:17.994 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:36:17.994 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:17.994 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:17.994 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:17.994 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:17.994 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:17.994 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:36:17.994 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:17.994 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.254 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:18.254 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:18.254 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:18.254 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:18.254 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:18.254 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:18.254 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:18.254 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:18.254 11:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:18.254 11:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:18.254 11:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:18.254 11:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:18.254 11:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:18.254 11:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:18.254 11:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.514 nvme0n1 00:36:18.514 11:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:36:18.514 11:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:18.514 11:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:18.514 11:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:18.514 11:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.514 11:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:18.514 11:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:18.514 11:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:18.514 11:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:18.514 11:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.514 11:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:18.514 11:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:18.514 11:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:36:18.514 11:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:18.514 11:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:18.514 11:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:18.514 11:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:18.514 11:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmY1YTc5YzAwZDU5ZDhiZTAwYjI1Y2RjYzc5OTQ4ZWYgFHKp: 00:36:18.514 11:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2Y3YzJiMGJkNWZmZWU3OGFlNzY2YzEwOTgzNTcwODXAcEq8: 00:36:18.514 11:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:18.514 11:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:18.514 11:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmY1YTc5YzAwZDU5ZDhiZTAwYjI1Y2RjYzc5OTQ4ZWYgFHKp: 00:36:18.514 11:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2Y3YzJiMGJkNWZmZWU3OGFlNzY2YzEwOTgzNTcwODXAcEq8: ]] 00:36:18.514 11:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2Y3YzJiMGJkNWZmZWU3OGFlNzY2YzEwOTgzNTcwODXAcEq8: 00:36:18.514 11:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:36:18.514 11:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:18.514 11:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:18.514 11:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:18.514 11:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:18.514 11:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:18.514 11:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:36:18.514 11:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
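The run above is one pass of the host-side connect/verify/detach cycle that connect_authenticate performs: bdev_nvme_set_options pins the initiator to a single digest/dhgroup pair, bdev_nvme_attach_controller runs the DH-HMAC-CHAP handshake against 10.0.0.1:4420, and a controller only shows up in bdev_nvme_get_controllers if authentication succeeded. A minimal standalone sketch of the same cycle for this iteration (sha256 + ffdhe4096, keyid 2), assuming a running SPDK initiator whose keyring already holds the key2/ckey2 secrets; rpc_cmd in the harness is a thin wrapper around scripts/rpc.py:

    rpc=scripts/rpc.py
    # restrict the host to the digest/dhgroup pair under test
    $rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
    # the attach performs the DH-HMAC-CHAP handshake; key2/ckey2 are assumed to be
    # registered with the keyring beforehand (e.g. via keyring_file_add_key)
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
    # a controller named nvme0 only exists if the handshake passed
    [[ $($rpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    $rpc bdev_nvme_detach_controller nvme0

The remainder of this trace repeats exactly this cycle for each remaining dhgroup/keyid combination.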
00:36:18.514 11:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.514 11:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:18.514 11:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:18.514 11:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:18.514 11:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:18.514 11:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:18.514 11:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:18.514 11:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:18.514 11:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:18.514 11:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:18.514 11:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:18.514 11:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:18.514 11:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:18.514 11:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:18.514 11:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:18.514 11:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.774 nvme0n1 00:36:18.774 11:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:18.774 11:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:18.774 11:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:18.774 11:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:18.774 11:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.774 11:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:18.774 11:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:18.774 11:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:18.774 11:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:18.774 11:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.774 11:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:18.774 11:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:18.774 11:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:36:18.774 11:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:18.774 11:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:18.774 11:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:36:18.774 11:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:18.774 11:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Mjg5MjFmMzE1NTk2MGYwYzMwNTEzNjE5M2EwMmNkODFhZWZjNTI5ZmI4ZTIzYTUwOGPg0A==: 00:36:18.774 11:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDVhNDk2ODBmN2YwNTc2MmNkYjVhZTc0OTRmM2Y5OGLvIL+q: 00:36:18.774 11:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:18.774 11:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:18.774 11:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Mjg5MjFmMzE1NTk2MGYwYzMwNTEzNjE5M2EwMmNkODFhZWZjNTI5ZmI4ZTIzYTUwOGPg0A==: 00:36:18.775 11:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDVhNDk2ODBmN2YwNTc2MmNkYjVhZTc0OTRmM2Y5OGLvIL+q: ]] 00:36:18.775 11:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDVhNDk2ODBmN2YwNTc2MmNkYjVhZTc0OTRmM2Y5OGLvIL+q: 00:36:18.775 11:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:36:18.775 11:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:18.775 11:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:18.775 11:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:18.775 11:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:18.775 11:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:18.775 11:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:36:18.775 11:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:18.775 11:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.775 11:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:18.775 11:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:18.775 11:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:18.775 11:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:18.775 11:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:18.775 11:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:18.775 11:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:18.775 11:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:18.775 11:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:18.775 11:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:18.775 11:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:18.775 11:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:18.775 11:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:18.775 11:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:18.775 11:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.035 nvme0n1 00:36:19.035 11:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:19.035 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:19.035 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:19.035 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:19.035 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.035 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:19.295 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:19.295 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:19.295 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:19.295 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.295 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:19.295 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:19.295 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:36:19.295 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:19.295 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:19.295 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:19.295 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:19.295 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODVmM2Q2ZDgyMTk4YmVhM2QwYTNmYzVkYzYwMWFmNzhlMzIxZmM5MWY1OWI1NGJiM2IxMDcwYmJmOWE2MmJiOdfdwAE=: 00:36:19.295 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:19.295 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:19.295 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:19.295 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODVmM2Q2ZDgyMTk4YmVhM2QwYTNmYzVkYzYwMWFmNzhlMzIxZmM5MWY1OWI1NGJiM2IxMDcwYmJmOWE2MmJiOdfdwAE=: 00:36:19.295 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:19.295 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:36:19.295 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:19.295 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:19.295 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:19.295 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:19.295 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:19.295 11:16:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:36:19.295 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:19.295 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.295 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:19.295 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:19.295 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:19.295 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:19.295 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:19.295 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:19.295 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:19.295 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:19.295 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:19.295 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:19.295 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:19.295 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:19.295 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:19.295 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:19.295 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.556 nvme0n1 00:36:19.556 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:19.556 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:19.556 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:19.556 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:19.556 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.556 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:19.556 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:19.556 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:19.556 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:19.556 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.556 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:19.556 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:19.556 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:19.556 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:36:19.556 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:19.556 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:19.556 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:19.556 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:19.556 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2NmMmFiNjhhMmQ2NDgzYjgxZDc4YjBkZjNkZWE0NzOQT+eg: 00:36:19.556 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmM2MjUzNzc2MGJiMWM5ZDdkYWUwNmMxZjAyYjM4Yjk5YzNkYWNiMmM0NWI5NWNiZDAzNDM4ODczYzI4MmRmYqlnfp0=: 00:36:19.556 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:19.556 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:19.556 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2NmMmFiNjhhMmQ2NDgzYjgxZDc4YjBkZjNkZWE0NzOQT+eg: 00:36:19.556 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmM2MjUzNzc2MGJiMWM5ZDdkYWUwNmMxZjAyYjM4Yjk5YzNkYWNiMmM0NWI5NWNiZDAzNDM4ODczYzI4MmRmYqlnfp0=: ]] 00:36:19.556 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmM2MjUzNzc2MGJiMWM5ZDdkYWUwNmMxZjAyYjM4Yjk5YzNkYWNiMmM0NWI5NWNiZDAzNDM4ODczYzI4MmRmYqlnfp0=: 00:36:19.556 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:36:19.556 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:19.556 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:19.556 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:19.556 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:19.556 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:19.556 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:36:19.556 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:19.556 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.556 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:19.556 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:19.556 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:19.556 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:19.556 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:19.556 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:19.556 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:19.556 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:19.556 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:19.556 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # 
ip=NVMF_INITIATOR_IP 00:36:19.556 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:19.556 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:19.556 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:19.556 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:19.556 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.128 nvme0n1 00:36:20.128 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:20.128 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:20.128 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:20.128 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:20.128 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.128 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:20.128 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:20.128 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:20.128 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:20.128 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.128 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:20.128 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:20.128 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:36:20.128 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:20.128 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:20.128 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:20.128 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:20.128 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFjYjNiYjljZmRlOTA0NjA5YTg3ODg5YWZiYzk0NzE5MTA5NTZlYjhjY2FiNWZjBjz4Rg==: 00:36:20.128 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2ZiYjYyODlkOWQxMjVhYzc5ZjQyMmU0ODEzNjc4YzE1OGNkZGE0Zjc0ZDEwOGIwD6TZVg==: 00:36:20.128 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:20.128 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:20.128 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFjYjNiYjljZmRlOTA0NjA5YTg3ODg5YWZiYzk0NzE5MTA5NTZlYjhjY2FiNWZjBjz4Rg==: 00:36:20.128 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2ZiYjYyODlkOWQxMjVhYzc5ZjQyMmU0ODEzNjc4YzE1OGNkZGE0Zjc0ZDEwOGIwD6TZVg==: ]] 00:36:20.128 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2ZiYjYyODlkOWQxMjVhYzc5ZjQyMmU0ODEzNjc4YzE1OGNkZGE0Zjc0ZDEwOGIwD6TZVg==: 
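On the target side, the echo lines in the trace come from nvmet_auth_set_key. Bash xtrace does not print redirections, so the destinations are invisible here; the following is a sketch of what the helper amounts to, assuming it programs the kernel nvmet host entry through configfs (path and attribute names as in recent Linux kernels, with the keys/ckeys arrays populated earlier in the test):

    nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # assumed path
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[keyid]} ckey=${ckeys[keyid]}       # DHHC-1:xx:...: secrets
        echo "hmac($digest)" > "$nvmet_host/dhchap_hash"
        echo "$dhgroup" > "$nvmet_host/dhchap_dhgroup"
        echo "$key" > "$nvmet_host/dhchap_key"
        # controller secret only for bidirectional auth; in this run keyid 4 has
        # an empty ckey, so the trace shows [[ -z '' ]] and no second echo there
        [[ -z $ckey ]] || echo "$ckey" > "$nvmet_host/dhchap_ctrl_key"
    }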
00:36:20.128 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:36:20.128 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:20.128 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:20.128 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:20.128 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:20.128 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:20.128 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:36:20.128 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:20.128 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.128 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:20.128 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:20.128 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:20.128 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:20.128 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:20.128 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:20.128 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:20.128 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:20.128 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:20.128 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:20.128 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:20.128 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:20.128 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:20.128 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:20.128 11:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.698 nvme0n1 00:36:20.698 11:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:20.698 11:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:20.698 11:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:20.698 11:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:20.698 11:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.698 11:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:20.698 11:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:20.698 11:16:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:20.698 11:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:20.698 11:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.698 11:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:20.698 11:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:20.698 11:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:36:20.698 11:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:20.698 11:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:20.698 11:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:20.698 11:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:20.698 11:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmY1YTc5YzAwZDU5ZDhiZTAwYjI1Y2RjYzc5OTQ4ZWYgFHKp: 00:36:20.698 11:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2Y3YzJiMGJkNWZmZWU3OGFlNzY2YzEwOTgzNTcwODXAcEq8: 00:36:20.698 11:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:20.698 11:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:20.698 11:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmY1YTc5YzAwZDU5ZDhiZTAwYjI1Y2RjYzc5OTQ4ZWYgFHKp: 00:36:20.698 11:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2Y3YzJiMGJkNWZmZWU3OGFlNzY2YzEwOTgzNTcwODXAcEq8: ]] 00:36:20.698 11:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2Y3YzJiMGJkNWZmZWU3OGFlNzY2YzEwOTgzNTcwODXAcEq8: 00:36:20.698 11:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:36:20.698 11:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:20.698 11:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:20.698 11:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:20.698 11:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:20.698 11:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:20.698 11:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:36:20.698 11:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:20.698 11:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.698 11:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:20.698 11:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:20.698 11:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:20.698 11:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:20.698 11:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:20.698 11:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:20.698 11:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:20.698 11:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:20.698 11:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:20.698 11:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:20.698 11:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:20.698 11:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:20.698 11:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:20.699 11:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:20.699 11:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.269 nvme0n1 00:36:21.269 11:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:21.269 11:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:21.269 11:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:21.269 11:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:21.269 11:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.269 11:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:21.269 11:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:21.269 11:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:21.269 11:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:21.269 11:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.269 11:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:21.269 11:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:21.269 11:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:36:21.269 11:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:21.269 11:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:21.269 11:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:21.269 11:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:21.269 11:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Mjg5MjFmMzE1NTk2MGYwYzMwNTEzNjE5M2EwMmNkODFhZWZjNTI5ZmI4ZTIzYTUwOGPg0A==: 00:36:21.269 11:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDVhNDk2ODBmN2YwNTc2MmNkYjVhZTc0OTRmM2Y5OGLvIL+q: 00:36:21.269 11:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:21.269 11:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:21.269 11:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:Mjg5MjFmMzE1NTk2MGYwYzMwNTEzNjE5M2EwMmNkODFhZWZjNTI5ZmI4ZTIzYTUwOGPg0A==: 00:36:21.269 11:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDVhNDk2ODBmN2YwNTc2MmNkYjVhZTc0OTRmM2Y5OGLvIL+q: ]] 00:36:21.269 11:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDVhNDk2ODBmN2YwNTc2MmNkYjVhZTc0OTRmM2Y5OGLvIL+q: 00:36:21.269 11:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:36:21.269 11:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:21.269 11:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:21.269 11:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:21.269 11:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:21.269 11:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:21.269 11:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:36:21.269 11:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:21.269 11:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.269 11:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:21.269 11:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:21.269 11:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:21.269 11:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:21.269 11:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:21.269 11:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:21.269 11:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:21.269 11:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:21.269 11:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:21.269 11:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:21.269 11:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:21.269 11:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:21.269 11:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:21.269 11:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:21.269 11:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.842 nvme0n1 00:36:21.842 11:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:21.842 11:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:21.842 11:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:21.842 11:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:36:21.842 11:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.842 11:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:21.842 11:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:21.842 11:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:21.842 11:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:21.842 11:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.842 11:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:21.842 11:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:21.842 11:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:36:21.842 11:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:21.842 11:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:21.842 11:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:21.842 11:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:21.842 11:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODVmM2Q2ZDgyMTk4YmVhM2QwYTNmYzVkYzYwMWFmNzhlMzIxZmM5MWY1OWI1NGJiM2IxMDcwYmJmOWE2MmJiOdfdwAE=: 00:36:21.842 11:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:21.842 11:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:21.842 11:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:21.842 11:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODVmM2Q2ZDgyMTk4YmVhM2QwYTNmYzVkYzYwMWFmNzhlMzIxZmM5MWY1OWI1NGJiM2IxMDcwYmJmOWE2MmJiOdfdwAE=: 00:36:21.842 11:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:21.842 11:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:36:21.842 11:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:21.842 11:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:21.842 11:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:21.842 11:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:21.842 11:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:21.842 11:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:36:21.842 11:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:21.842 11:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.842 11:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:21.842 11:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:21.842 11:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:21.842 11:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@768 -- # ip_candidates=() 00:36:21.842 11:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:21.842 11:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:21.842 11:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:21.842 11:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:21.842 11:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:21.842 11:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:21.842 11:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:21.842 11:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:21.842 11:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:21.842 11:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:21.842 11:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.102 nvme0n1 00:36:22.102 11:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:22.364 11:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:22.364 11:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:22.364 11:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:22.364 11:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.364 11:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:22.364 11:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:22.364 11:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:22.364 11:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:22.364 11:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.364 11:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:22.364 11:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:22.364 11:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:22.364 11:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:36:22.364 11:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:22.364 11:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:22.364 11:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:22.364 11:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:22.364 11:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2NmMmFiNjhhMmQ2NDgzYjgxZDc4YjBkZjNkZWE0NzOQT+eg: 00:36:22.364 11:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NmM2MjUzNzc2MGJiMWM5ZDdkYWUwNmMxZjAyYjM4Yjk5YzNkYWNiMmM0NWI5NWNiZDAzNDM4ODczYzI4MmRmYqlnfp0=: 00:36:22.364 11:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:22.364 11:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:22.364 11:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2NmMmFiNjhhMmQ2NDgzYjgxZDc4YjBkZjNkZWE0NzOQT+eg: 00:36:22.364 11:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmM2MjUzNzc2MGJiMWM5ZDdkYWUwNmMxZjAyYjM4Yjk5YzNkYWNiMmM0NWI5NWNiZDAzNDM4ODczYzI4MmRmYqlnfp0=: ]] 00:36:22.364 11:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmM2MjUzNzc2MGJiMWM5ZDdkYWUwNmMxZjAyYjM4Yjk5YzNkYWNiMmM0NWI5NWNiZDAzNDM4ODczYzI4MmRmYqlnfp0=: 00:36:22.364 11:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:36:22.364 11:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:22.364 11:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:22.364 11:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:22.364 11:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:22.364 11:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:22.364 11:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:36:22.364 11:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:22.364 11:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.364 11:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:22.364 11:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:22.364 11:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:22.364 11:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:22.364 11:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:22.364 11:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:22.364 11:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:22.364 11:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:22.364 11:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:22.364 11:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:22.364 11:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:22.364 11:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:22.364 11:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:22.364 11:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:22.364 11:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:36:22.937 nvme0n1 00:36:22.937 11:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:23.197 11:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:23.197 11:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:23.197 11:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:23.197 11:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.197 11:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:23.197 11:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:23.197 11:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:23.197 11:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:23.197 11:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.197 11:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:23.197 11:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:23.197 11:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:36:23.197 11:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:23.197 11:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:23.198 11:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:23.198 11:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:23.198 11:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFjYjNiYjljZmRlOTA0NjA5YTg3ODg5YWZiYzk0NzE5MTA5NTZlYjhjY2FiNWZjBjz4Rg==: 00:36:23.198 11:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2ZiYjYyODlkOWQxMjVhYzc5ZjQyMmU0ODEzNjc4YzE1OGNkZGE0Zjc0ZDEwOGIwD6TZVg==: 00:36:23.198 11:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:23.198 11:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:23.198 11:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFjYjNiYjljZmRlOTA0NjA5YTg3ODg5YWZiYzk0NzE5MTA5NTZlYjhjY2FiNWZjBjz4Rg==: 00:36:23.198 11:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2ZiYjYyODlkOWQxMjVhYzc5ZjQyMmU0ODEzNjc4YzE1OGNkZGE0Zjc0ZDEwOGIwD6TZVg==: ]] 00:36:23.198 11:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2ZiYjYyODlkOWQxMjVhYzc5ZjQyMmU0ODEzNjc4YzE1OGNkZGE0Zjc0ZDEwOGIwD6TZVg==: 00:36:23.198 11:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:36:23.198 11:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:23.198 11:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:23.198 11:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:23.198 11:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:23.198 11:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:36:23.198 11:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:36:23.198 11:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:23.198 11:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.198 11:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:23.198 11:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:23.198 11:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:23.198 11:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:23.198 11:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:23.198 11:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:23.198 11:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:23.198 11:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:23.198 11:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:23.198 11:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:23.198 11:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:23.198 11:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:23.198 11:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:23.198 11:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:23.198 11:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.770 nvme0n1 00:36:23.770 11:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:23.770 11:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:23.770 11:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:23.770 11:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:23.770 11:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.770 11:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:23.770 11:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:23.770 11:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:23.770 11:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:23.770 11:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.770 11:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:23.770 11:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:23.770 11:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:36:23.770 
11:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:23.770 11:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:23.771 11:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:23.771 11:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:23.771 11:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmY1YTc5YzAwZDU5ZDhiZTAwYjI1Y2RjYzc5OTQ4ZWYgFHKp: 00:36:23.771 11:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2Y3YzJiMGJkNWZmZWU3OGFlNzY2YzEwOTgzNTcwODXAcEq8: 00:36:23.771 11:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:23.771 11:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:23.771 11:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmY1YTc5YzAwZDU5ZDhiZTAwYjI1Y2RjYzc5OTQ4ZWYgFHKp: 00:36:23.771 11:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2Y3YzJiMGJkNWZmZWU3OGFlNzY2YzEwOTgzNTcwODXAcEq8: ]] 00:36:23.771 11:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2Y3YzJiMGJkNWZmZWU3OGFlNzY2YzEwOTgzNTcwODXAcEq8: 00:36:23.771 11:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:36:23.771 11:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:23.771 11:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:23.771 11:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:23.771 11:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:23.771 11:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:23.771 11:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:36:23.771 11:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:23.771 11:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.771 11:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:23.771 11:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:24.032 11:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:24.032 11:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:24.032 11:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:24.032 11:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:24.032 11:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:24.032 11:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:24.032 11:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:24.032 11:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:24.032 11:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:24.032 11:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:24.032 11:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:24.032 11:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:24.032 11:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.602 nvme0n1 00:36:24.602 11:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:24.602 11:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:24.602 11:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:24.602 11:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:24.602 11:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.602 11:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:24.602 11:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:24.602 11:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:24.602 11:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:24.602 11:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.602 11:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:24.602 11:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:24.602 11:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:36:24.602 11:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:24.602 11:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:24.602 11:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:24.602 11:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:24.602 11:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Mjg5MjFmMzE1NTk2MGYwYzMwNTEzNjE5M2EwMmNkODFhZWZjNTI5ZmI4ZTIzYTUwOGPg0A==: 00:36:24.602 11:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDVhNDk2ODBmN2YwNTc2MmNkYjVhZTc0OTRmM2Y5OGLvIL+q: 00:36:24.602 11:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:24.602 11:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:24.602 11:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Mjg5MjFmMzE1NTk2MGYwYzMwNTEzNjE5M2EwMmNkODFhZWZjNTI5ZmI4ZTIzYTUwOGPg0A==: 00:36:24.602 11:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDVhNDk2ODBmN2YwNTc2MmNkYjVhZTc0OTRmM2Y5OGLvIL+q: ]] 00:36:24.602 11:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDVhNDk2ODBmN2YwNTc2MmNkYjVhZTc0OTRmM2Y5OGLvIL+q: 00:36:24.602 11:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:36:24.602 11:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:24.602 
11:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:24.602 11:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:24.602 11:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:24.602 11:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:24.603 11:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:36:24.603 11:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:24.603 11:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.603 11:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:24.864 11:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:24.864 11:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:24.864 11:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:24.864 11:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:24.864 11:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:24.864 11:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:24.864 11:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:24.864 11:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:24.864 11:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:24.864 11:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:24.864 11:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:24.864 11:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:24.864 11:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:24.864 11:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.435 nvme0n1 00:36:25.435 11:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:25.435 11:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:25.435 11:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:25.435 11:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:25.435 11:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.435 11:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:25.435 11:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:25.435 11:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:25.435 11:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:25.435 11:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:36:25.435 11:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:25.435 11:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:25.435 11:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:36:25.435 11:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:25.435 11:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:25.435 11:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:25.435 11:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:25.435 11:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODVmM2Q2ZDgyMTk4YmVhM2QwYTNmYzVkYzYwMWFmNzhlMzIxZmM5MWY1OWI1NGJiM2IxMDcwYmJmOWE2MmJiOdfdwAE=: 00:36:25.435 11:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:25.435 11:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:25.435 11:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:25.435 11:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODVmM2Q2ZDgyMTk4YmVhM2QwYTNmYzVkYzYwMWFmNzhlMzIxZmM5MWY1OWI1NGJiM2IxMDcwYmJmOWE2MmJiOdfdwAE=: 00:36:25.435 11:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:25.435 11:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:36:25.435 11:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:25.435 11:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:25.435 11:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:25.435 11:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:25.435 11:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:25.435 11:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:36:25.435 11:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:25.435 11:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.696 11:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:25.696 11:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:25.696 11:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:25.696 11:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:25.696 11:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:25.696 11:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:25.696 11:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:25.696 11:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:25.696 11:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:25.696 11:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:25.696 11:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:25.696 11:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:25.696 11:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:25.696 11:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:25.696 11:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.266 nvme0n1 00:36:26.266 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:26.266 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:26.266 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:26.266 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:26.266 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.266 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:26.266 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:26.266 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:26.266 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:26.266 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.266 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:26.266 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:36:26.266 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:26.266 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:26.266 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:36:26.266 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:26.266 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:26.266 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:26.266 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:26.266 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2NmMmFiNjhhMmQ2NDgzYjgxZDc4YjBkZjNkZWE0NzOQT+eg: 00:36:26.266 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmM2MjUzNzc2MGJiMWM5ZDdkYWUwNmMxZjAyYjM4Yjk5YzNkYWNiMmM0NWI5NWNiZDAzNDM4ODczYzI4MmRmYqlnfp0=: 00:36:26.266 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:26.266 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:26.266 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2NmMmFiNjhhMmQ2NDgzYjgxZDc4YjBkZjNkZWE0NzOQT+eg: 00:36:26.267 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:NmM2MjUzNzc2MGJiMWM5ZDdkYWUwNmMxZjAyYjM4Yjk5YzNkYWNiMmM0NWI5NWNiZDAzNDM4ODczYzI4MmRmYqlnfp0=: ]] 00:36:26.267 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmM2MjUzNzc2MGJiMWM5ZDdkYWUwNmMxZjAyYjM4Yjk5YzNkYWNiMmM0NWI5NWNiZDAzNDM4ODczYzI4MmRmYqlnfp0=: 00:36:26.267 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:36:26.267 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:26.267 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:26.267 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:26.267 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:26.267 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:26.267 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:36:26.267 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:26.267 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.527 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:26.527 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:26.527 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:26.527 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:26.528 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:26.528 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:26.528 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:26.528 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:26.528 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:26.528 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:26.528 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:26.528 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:26.528 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:26.528 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:26.528 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.528 nvme0n1 00:36:26.528 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:26.528 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:26.528 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:26.528 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:26.528 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:36:26.528 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:26.528 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:26.528 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:26.528 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:26.528 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.528 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:26.528 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:26.528 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:36:26.528 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:26.528 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:26.528 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:26.528 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:26.528 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFjYjNiYjljZmRlOTA0NjA5YTg3ODg5YWZiYzk0NzE5MTA5NTZlYjhjY2FiNWZjBjz4Rg==: 00:36:26.528 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2ZiYjYyODlkOWQxMjVhYzc5ZjQyMmU0ODEzNjc4YzE1OGNkZGE0Zjc0ZDEwOGIwD6TZVg==: 00:36:26.528 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:26.528 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:26.528 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFjYjNiYjljZmRlOTA0NjA5YTg3ODg5YWZiYzk0NzE5MTA5NTZlYjhjY2FiNWZjBjz4Rg==: 00:36:26.528 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2ZiYjYyODlkOWQxMjVhYzc5ZjQyMmU0ODEzNjc4YzE1OGNkZGE0Zjc0ZDEwOGIwD6TZVg==: ]] 00:36:26.528 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2ZiYjYyODlkOWQxMjVhYzc5ZjQyMmU0ODEzNjc4YzE1OGNkZGE0Zjc0ZDEwOGIwD6TZVg==: 00:36:26.528 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:36:26.528 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:26.528 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:26.528 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:26.528 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:26.528 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:26.528 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:36:26.528 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:26.528 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.528 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:26.528 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:36:26.528 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:26.528 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:26.528 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:26.528 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:26.528 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:26.528 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:26.528 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:26.528 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:26.528 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:26.528 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:26.528 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:26.528 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:26.528 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.789 nvme0n1 00:36:26.789 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:26.789 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:26.789 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:26.789 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:26.789 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.789 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:26.789 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:26.789 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:26.789 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:26.789 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.789 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:26.789 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:26.789 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:36:26.789 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:26.789 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:26.789 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:26.789 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:26.789 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmY1YTc5YzAwZDU5ZDhiZTAwYjI1Y2RjYzc5OTQ4ZWYgFHKp: 00:36:26.789 11:16:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2Y3YzJiMGJkNWZmZWU3OGFlNzY2YzEwOTgzNTcwODXAcEq8: 00:36:26.789 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:26.789 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:26.789 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmY1YTc5YzAwZDU5ZDhiZTAwYjI1Y2RjYzc5OTQ4ZWYgFHKp: 00:36:26.789 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2Y3YzJiMGJkNWZmZWU3OGFlNzY2YzEwOTgzNTcwODXAcEq8: ]] 00:36:26.789 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2Y3YzJiMGJkNWZmZWU3OGFlNzY2YzEwOTgzNTcwODXAcEq8: 00:36:26.789 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:36:26.789 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:26.789 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:26.789 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:26.789 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:26.789 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:26.789 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:36:26.789 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:26.789 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.789 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:26.789 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:26.789 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:26.789 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:26.789 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:26.789 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:26.789 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:26.789 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:26.789 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:26.789 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:26.789 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:26.789 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:26.789 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:26.789 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:26.789 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.051 nvme0n1 00:36:27.051 11:16:46 
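The host/auth.sh@100–@104 markers scattered through this trace reveal the driver: three nested loops sweeping digest × DH group × key ID, installing the target-side key and then connecting. A minimal reconstruction of that loop from the markers alone (the array contents are assumptions; this excerpt only exercises sha256/sha384 with ffdhe8192, ffdhe2048 and ffdhe3072):

    for digest in "${digests[@]}"; do          # @100
        for dhgroup in "${dhgroups[@]}"; do    # @101
            for keyid in "${!keys[@]}"; do     # @102
                # Install key $keyid on the kernel nvmet target for this combo...
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"      # @103
                # ...then attach, verify, and detach from the SPDK host side.
                connect_authenticate "$digest" "$dhgroup" "$keyid"    # @104
            done
        done
    done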
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:27.051 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:27.051 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:27.051 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:27.051 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.051 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:27.051 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:27.051 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:27.051 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:27.051 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.051 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:27.051 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:27.051 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:36:27.051 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:27.051 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:27.051 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:27.051 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:27.051 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Mjg5MjFmMzE1NTk2MGYwYzMwNTEzNjE5M2EwMmNkODFhZWZjNTI5ZmI4ZTIzYTUwOGPg0A==: 00:36:27.051 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDVhNDk2ODBmN2YwNTc2MmNkYjVhZTc0OTRmM2Y5OGLvIL+q: 00:36:27.051 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:27.051 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:27.051 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Mjg5MjFmMzE1NTk2MGYwYzMwNTEzNjE5M2EwMmNkODFhZWZjNTI5ZmI4ZTIzYTUwOGPg0A==: 00:36:27.051 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDVhNDk2ODBmN2YwNTc2MmNkYjVhZTc0OTRmM2Y5OGLvIL+q: ]] 00:36:27.051 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDVhNDk2ODBmN2YwNTc2MmNkYjVhZTc0OTRmM2Y5OGLvIL+q: 00:36:27.051 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:36:27.051 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:27.051 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:27.051 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:27.051 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:27.051 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:27.051 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:36:27.051 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:27.051 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.051 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:27.051 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:27.051 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:27.051 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:27.051 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:27.051 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:27.051 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:27.051 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:27.051 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:27.051 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:27.051 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:27.051 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:27.051 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:27.051 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:27.051 11:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.312 nvme0n1 00:36:27.312 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:27.312 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:27.312 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:27.312 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:27.312 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.312 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:27.312 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:27.312 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:27.312 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:27.312 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.312 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:27.312 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:27.312 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:36:27.312 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:27.312 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:36:27.312 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:27.312 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:27.312 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODVmM2Q2ZDgyMTk4YmVhM2QwYTNmYzVkYzYwMWFmNzhlMzIxZmM5MWY1OWI1NGJiM2IxMDcwYmJmOWE2MmJiOdfdwAE=: 00:36:27.312 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:27.312 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:27.312 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:27.312 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODVmM2Q2ZDgyMTk4YmVhM2QwYTNmYzVkYzYwMWFmNzhlMzIxZmM5MWY1OWI1NGJiM2IxMDcwYmJmOWE2MmJiOdfdwAE=: 00:36:27.312 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:27.312 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:36:27.312 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:27.312 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:27.312 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:27.312 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:27.312 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:27.312 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:36:27.312 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:27.312 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.312 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:27.312 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:27.312 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:27.312 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:27.312 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:27.312 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:27.312 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:27.312 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:27.312 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:27.312 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:27.312 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:27.312 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:27.312 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:27.312 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:36:27.312 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.573 nvme0n1 00:36:27.573 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:27.573 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:27.573 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:27.573 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:27.573 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.573 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:27.573 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:27.573 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:27.573 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:27.573 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.573 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:27.573 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:27.573 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:27.573 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:36:27.573 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:27.573 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:27.573 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:27.573 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:27.573 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2NmMmFiNjhhMmQ2NDgzYjgxZDc4YjBkZjNkZWE0NzOQT+eg: 00:36:27.573 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmM2MjUzNzc2MGJiMWM5ZDdkYWUwNmMxZjAyYjM4Yjk5YzNkYWNiMmM0NWI5NWNiZDAzNDM4ODczYzI4MmRmYqlnfp0=: 00:36:27.573 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:27.573 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:27.573 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2NmMmFiNjhhMmQ2NDgzYjgxZDc4YjBkZjNkZWE0NzOQT+eg: 00:36:27.573 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmM2MjUzNzc2MGJiMWM5ZDdkYWUwNmMxZjAyYjM4Yjk5YzNkYWNiMmM0NWI5NWNiZDAzNDM4ODczYzI4MmRmYqlnfp0=: ]] 00:36:27.573 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmM2MjUzNzc2MGJiMWM5ZDdkYWUwNmMxZjAyYjM4Yjk5YzNkYWNiMmM0NWI5NWNiZDAzNDM4ODczYzI4MmRmYqlnfp0=: 00:36:27.573 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:36:27.573 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:27.573 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:27.573 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:36:27.573 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:27.573 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:27.573 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:36:27.573 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:27.573 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.573 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:27.573 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:27.573 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:27.573 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:27.573 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:27.573 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:27.573 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:27.573 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:27.573 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:27.573 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:27.573 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:27.573 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:27.573 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:27.573 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:27.573 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.833 nvme0n1 00:36:27.833 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:27.833 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:27.833 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:27.833 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:27.833 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.833 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:27.833 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:27.833 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:27.833 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:27.833 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.833 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:27.833 
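The block repeated at nvmf/common.sh@767–@781 throughout this trace is the get_main_ns_ip helper resolving which address to dial: it maps the transport to the *name* of an environment variable and dereferences it. A plausible reconstruction from the trace, assuming bash indirect expansion (the echo of 10.0.0.1 right after ip=NVMF_INITIATOR_IP implies ${!ip}); the exact guard details are inferred:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # variable names, not values
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        [[ -z $TEST_TRANSPORT ]] && return 1                         # @773: tcp here
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1       # @773
        ip=${ip_candidates[$TEST_TRANSPORT]}         # @774: ip=NVMF_INITIATOR_IP
        [[ -z ${!ip} ]] && return 1                  # @776: dereferenced value set?
        echo "${!ip}"                                # @781: -> 10.0.0.1 in this run
    }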
11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:27.833 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:36:27.833 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:27.833 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:27.833 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:27.833 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:27.833 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFjYjNiYjljZmRlOTA0NjA5YTg3ODg5YWZiYzk0NzE5MTA5NTZlYjhjY2FiNWZjBjz4Rg==: 00:36:27.833 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2ZiYjYyODlkOWQxMjVhYzc5ZjQyMmU0ODEzNjc4YzE1OGNkZGE0Zjc0ZDEwOGIwD6TZVg==: 00:36:27.833 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:27.833 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:27.833 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFjYjNiYjljZmRlOTA0NjA5YTg3ODg5YWZiYzk0NzE5MTA5NTZlYjhjY2FiNWZjBjz4Rg==: 00:36:27.833 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2ZiYjYyODlkOWQxMjVhYzc5ZjQyMmU0ODEzNjc4YzE1OGNkZGE0Zjc0ZDEwOGIwD6TZVg==: ]] 00:36:27.833 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2ZiYjYyODlkOWQxMjVhYzc5ZjQyMmU0ODEzNjc4YzE1OGNkZGE0Zjc0ZDEwOGIwD6TZVg==: 00:36:27.833 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:36:27.833 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:27.833 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:27.833 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:27.833 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:27.833 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:27.833 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:36:27.833 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:27.833 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.833 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:27.833 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:27.833 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:27.833 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:27.833 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:27.833 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:27.833 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:27.833 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:27.833 11:16:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:27.833 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:27.833 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:27.833 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:27.833 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:27.833 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:27.833 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.094 nvme0n1 00:36:28.094 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:28.094 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:28.094 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:28.094 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:28.094 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.094 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:28.094 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:28.094 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:28.094 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:28.094 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.094 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:28.094 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:28.094 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:36:28.094 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:28.094 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:28.094 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:28.094 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:28.094 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmY1YTc5YzAwZDU5ZDhiZTAwYjI1Y2RjYzc5OTQ4ZWYgFHKp: 00:36:28.094 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2Y3YzJiMGJkNWZmZWU3OGFlNzY2YzEwOTgzNTcwODXAcEq8: 00:36:28.094 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:28.094 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:28.094 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmY1YTc5YzAwZDU5ZDhiZTAwYjI1Y2RjYzc5OTQ4ZWYgFHKp: 00:36:28.094 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2Y3YzJiMGJkNWZmZWU3OGFlNzY2YzEwOTgzNTcwODXAcEq8: ]] 00:36:28.094 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:N2Y3YzJiMGJkNWZmZWU3OGFlNzY2YzEwOTgzNTcwODXAcEq8: 00:36:28.094 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:36:28.094 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:28.094 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:28.094 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:28.094 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:28.094 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:28.094 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:36:28.094 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:28.094 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.094 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:28.094 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:28.094 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:28.094 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:28.094 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:28.094 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:28.094 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:28.094 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:28.094 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:28.094 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:28.094 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:28.094 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:28.094 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:28.094 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:28.094 11:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.354 nvme0n1 00:36:28.354 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:28.354 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:28.354 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:28.354 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:28.354 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.354 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:28.354 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:36:28.354 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:28.354 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:28.354 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.354 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:28.354 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:28.354 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:36:28.354 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:28.354 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:28.354 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:28.354 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:28.354 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Mjg5MjFmMzE1NTk2MGYwYzMwNTEzNjE5M2EwMmNkODFhZWZjNTI5ZmI4ZTIzYTUwOGPg0A==: 00:36:28.354 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDVhNDk2ODBmN2YwNTc2MmNkYjVhZTc0OTRmM2Y5OGLvIL+q: 00:36:28.354 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:28.354 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:28.354 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Mjg5MjFmMzE1NTk2MGYwYzMwNTEzNjE5M2EwMmNkODFhZWZjNTI5ZmI4ZTIzYTUwOGPg0A==: 00:36:28.354 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDVhNDk2ODBmN2YwNTc2MmNkYjVhZTc0OTRmM2Y5OGLvIL+q: ]] 00:36:28.354 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDVhNDk2ODBmN2YwNTc2MmNkYjVhZTc0OTRmM2Y5OGLvIL+q: 00:36:28.354 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:36:28.354 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:28.354 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:28.354 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:28.355 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:28.355 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:28.355 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:36:28.355 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:28.355 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.355 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:28.355 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:28.355 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:28.355 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:28.355 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local 
-A ip_candidates 00:36:28.355 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:28.355 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:28.355 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:28.355 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:28.355 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:28.355 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:28.355 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:28.355 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:28.355 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:28.355 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.615 nvme0n1 00:36:28.615 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:28.615 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:28.615 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:28.615 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:28.615 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.615 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:28.615 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:28.615 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:28.615 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:28.615 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.615 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:28.615 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:28.615 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:36:28.615 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:28.615 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:28.615 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:28.615 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:28.615 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODVmM2Q2ZDgyMTk4YmVhM2QwYTNmYzVkYzYwMWFmNzhlMzIxZmM5MWY1OWI1NGJiM2IxMDcwYmJmOWE2MmJiOdfdwAE=: 00:36:28.615 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:28.615 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:28.615 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:28.615 
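[Note] Keyid 4 above is set up with an empty controller key (the bare "ckey=" and the later "[[ -z '' ]]"), so the ${ckeys[keyid]:+...} expansion at auth.sh line 58 contributes no --dhchap-ctrlr-key argument at all and that session exercises unidirectional authentication (only the host proves its identity to the controller). A small self-contained bash illustration of this array idiom, with hypothetical secret values:

    #!/usr/bin/env bash
    ckeys=("ctrl-secret-0" "")    # index 1 deliberately left empty
    for keyid in "${!ckeys[@]}"; do
        # Expands to the two-word option only when ckeys[keyid] is non-empty;
        # otherwise the array stays empty and adds no argument to the command line.
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid argv: --dhchap-key key${keyid}" "${ckey[@]}"
    done

Running it prints the controller-key option for index 0 and omits it for index 1, exactly the behavior visible in the keyid 4 iterations of this log.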
11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODVmM2Q2ZDgyMTk4YmVhM2QwYTNmYzVkYzYwMWFmNzhlMzIxZmM5MWY1OWI1NGJiM2IxMDcwYmJmOWE2MmJiOdfdwAE=: 00:36:28.615 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:28.615 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:36:28.615 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:28.615 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:28.615 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:28.615 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:28.615 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:28.615 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:36:28.615 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:28.615 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.615 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:28.615 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:28.615 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:28.615 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:28.615 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:28.615 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:28.615 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:28.615 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:28.615 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:28.615 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:28.615 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:28.615 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:28.615 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:28.615 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:28.615 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.876 nvme0n1 00:36:28.877 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:28.877 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:28.877 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:28.877 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:28.877 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.877 
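[Note] get_main_ns_ip, traced repeatedly at nvmf/common.sh lines 767-781 above, simply maps the transport to the name of the right environment variable and prints the resolved address, which is always 10.0.0.1 for TCP in this run. A condensed sketch of that selection logic, assuming TEST_TRANSPORT and NVMF_INITIATOR_IP/NVMF_FIRST_TARGET_IP are exported by the surrounding harness:

    get_main_ns_ip() {
        local ip var
        local -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
        var=${ip_candidates[$TEST_TRANSPORT]:-}
        [[ -z $TEST_TRANSPORT || -z $var ]] && return 1
        ip=${!var}      # indirect expansion: read the variable named by $var
        [[ -z $ip ]] && return 1
        echo "$ip"      # 10.0.0.1 in this run
    }

The trace's "ip=NVMF_INITIATOR_IP" followed by "[[ -z 10.0.0.1 ]]" is this indirection in action: the variable first holds the name, then its dereferenced value is tested and echoed.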
11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:28.877 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:28.877 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:28.877 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:28.877 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.877 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:28.877 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:28.877 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:28.877 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:36:28.877 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:28.877 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:28.877 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:28.877 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:28.877 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2NmMmFiNjhhMmQ2NDgzYjgxZDc4YjBkZjNkZWE0NzOQT+eg: 00:36:28.877 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmM2MjUzNzc2MGJiMWM5ZDdkYWUwNmMxZjAyYjM4Yjk5YzNkYWNiMmM0NWI5NWNiZDAzNDM4ODczYzI4MmRmYqlnfp0=: 00:36:28.877 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:28.877 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:28.877 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2NmMmFiNjhhMmQ2NDgzYjgxZDc4YjBkZjNkZWE0NzOQT+eg: 00:36:28.877 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmM2MjUzNzc2MGJiMWM5ZDdkYWUwNmMxZjAyYjM4Yjk5YzNkYWNiMmM0NWI5NWNiZDAzNDM4ODczYzI4MmRmYqlnfp0=: ]] 00:36:28.877 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmM2MjUzNzc2MGJiMWM5ZDdkYWUwNmMxZjAyYjM4Yjk5YzNkYWNiMmM0NWI5NWNiZDAzNDM4ODczYzI4MmRmYqlnfp0=: 00:36:28.877 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:36:28.877 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:28.877 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:28.877 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:28.877 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:28.877 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:28.877 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:36:28.877 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:28.877 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.877 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:36:28.877 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:28.877 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:28.877 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:28.877 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:28.877 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:28.877 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:28.877 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:28.877 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:28.877 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:28.877 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:28.877 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:28.877 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:28.877 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:28.877 11:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.137 nvme0n1 00:36:29.137 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:29.137 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:29.137 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:29.137 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:29.137 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.137 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:29.137 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:29.137 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:29.137 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:29.137 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.397 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:29.397 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:29.397 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:36:29.397 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:29.397 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:29.397 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:29.397 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:29.397 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YzFjYjNiYjljZmRlOTA0NjA5YTg3ODg5YWZiYzk0NzE5MTA5NTZlYjhjY2FiNWZjBjz4Rg==: 00:36:29.397 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2ZiYjYyODlkOWQxMjVhYzc5ZjQyMmU0ODEzNjc4YzE1OGNkZGE0Zjc0ZDEwOGIwD6TZVg==: 00:36:29.397 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:29.397 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:29.397 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFjYjNiYjljZmRlOTA0NjA5YTg3ODg5YWZiYzk0NzE5MTA5NTZlYjhjY2FiNWZjBjz4Rg==: 00:36:29.397 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2ZiYjYyODlkOWQxMjVhYzc5ZjQyMmU0ODEzNjc4YzE1OGNkZGE0Zjc0ZDEwOGIwD6TZVg==: ]] 00:36:29.397 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2ZiYjYyODlkOWQxMjVhYzc5ZjQyMmU0ODEzNjc4YzE1OGNkZGE0Zjc0ZDEwOGIwD6TZVg==: 00:36:29.397 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:36:29.397 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:29.397 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:29.397 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:29.397 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:29.397 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:29.397 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:36:29.397 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:29.397 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.397 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:29.397 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:29.397 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:29.397 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:29.397 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:29.398 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:29.398 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:29.398 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:29.398 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:29.398 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:29.398 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:29.398 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:29.398 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:29.398 11:16:49 
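[Note] The auth.sh line numbers in the prefixes (@101-@104) expose the driving structure: an outer loop over DH groups and an inner loop over key indices, each iteration first programming the target side and then exercising the host path. A reconstructed skeleton under that assumption, with variable names inferred from the trace:

    # keys/ckeys are indexed 0..4; ckeys[4] is empty in this run.
    for dhgroup in "${dhgroups[@]}"; do          # auth.sh@101: ffdhe3072, ffdhe4096, ffdhe6144, ...
        for keyid in "${!keys[@]}"; do           # auth.sh@102: 0 1 2 3 4
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"     # auth.sh@103: target side
            connect_authenticate "$digest" "$dhgroup" "$keyid"   # auth.sh@104: host side
        done
    done

This section of the log covers the tail of the ffdhe3072 pass, all five keys of ffdhe4096, and the start of ffdhe6144, all under the sha384 digest.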
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:29.398 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.658 nvme0n1 00:36:29.658 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:29.658 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:29.658 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:29.658 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:29.658 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.658 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:29.658 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:29.658 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:29.658 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:29.658 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.658 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:29.658 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:29.658 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:36:29.658 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:29.658 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:29.658 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:29.658 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:29.658 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmY1YTc5YzAwZDU5ZDhiZTAwYjI1Y2RjYzc5OTQ4ZWYgFHKp: 00:36:29.658 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2Y3YzJiMGJkNWZmZWU3OGFlNzY2YzEwOTgzNTcwODXAcEq8: 00:36:29.658 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:29.658 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:29.658 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmY1YTc5YzAwZDU5ZDhiZTAwYjI1Y2RjYzc5OTQ4ZWYgFHKp: 00:36:29.658 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2Y3YzJiMGJkNWZmZWU3OGFlNzY2YzEwOTgzNTcwODXAcEq8: ]] 00:36:29.658 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2Y3YzJiMGJkNWZmZWU3OGFlNzY2YzEwOTgzNTcwODXAcEq8: 00:36:29.658 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:36:29.658 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:29.658 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:29.658 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:29.658 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:29.658 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:29.658 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:36:29.658 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:29.658 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.658 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:29.658 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:29.658 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:29.658 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:29.658 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:29.658 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:29.658 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:29.658 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:29.658 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:29.658 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:29.658 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:29.658 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:29.658 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:29.658 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:29.658 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.918 nvme0n1 00:36:29.918 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:29.918 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:29.918 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:29.918 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:29.918 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.918 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:29.918 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:29.918 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:29.918 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:29.918 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.918 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:29.918 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:29.918 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:36:29.918 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:29.918 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:29.918 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:29.918 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:29.918 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Mjg5MjFmMzE1NTk2MGYwYzMwNTEzNjE5M2EwMmNkODFhZWZjNTI5ZmI4ZTIzYTUwOGPg0A==: 00:36:29.918 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDVhNDk2ODBmN2YwNTc2MmNkYjVhZTc0OTRmM2Y5OGLvIL+q: 00:36:29.918 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:29.918 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:29.918 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Mjg5MjFmMzE1NTk2MGYwYzMwNTEzNjE5M2EwMmNkODFhZWZjNTI5ZmI4ZTIzYTUwOGPg0A==: 00:36:29.918 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDVhNDk2ODBmN2YwNTc2MmNkYjVhZTc0OTRmM2Y5OGLvIL+q: ]] 00:36:29.918 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDVhNDk2ODBmN2YwNTc2MmNkYjVhZTc0OTRmM2Y5OGLvIL+q: 00:36:29.918 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:36:29.918 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:29.918 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:29.918 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:29.918 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:29.918 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:29.918 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:36:29.918 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:29.918 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.918 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:29.918 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:29.918 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:29.918 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:29.918 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:29.918 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:29.918 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:29.918 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:29.918 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:29.918 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:29.918 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:29.918 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:29.918 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:29.918 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:29.918 11:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.179 nvme0n1 00:36:30.179 11:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:30.179 11:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:30.179 11:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:30.179 11:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:30.179 11:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.179 11:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:30.439 11:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:30.439 11:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:30.439 11:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:30.439 11:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.439 11:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:30.439 11:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:30.439 11:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:36:30.439 11:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:30.439 11:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:30.439 11:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:30.439 11:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:30.439 11:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODVmM2Q2ZDgyMTk4YmVhM2QwYTNmYzVkYzYwMWFmNzhlMzIxZmM5MWY1OWI1NGJiM2IxMDcwYmJmOWE2MmJiOdfdwAE=: 00:36:30.439 11:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:30.439 11:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:30.439 11:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:30.439 11:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODVmM2Q2ZDgyMTk4YmVhM2QwYTNmYzVkYzYwMWFmNzhlMzIxZmM5MWY1OWI1NGJiM2IxMDcwYmJmOWE2MmJiOdfdwAE=: 00:36:30.439 11:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:30.439 11:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:36:30.439 11:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:30.439 11:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:30.439 11:16:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:30.439 11:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:30.440 11:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:30.440 11:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:36:30.440 11:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:30.440 11:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.440 11:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:30.440 11:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:30.440 11:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:30.440 11:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:30.440 11:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:30.440 11:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:30.440 11:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:30.440 11:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:30.440 11:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:30.440 11:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:30.440 11:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:30.440 11:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:30.440 11:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:30.440 11:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:30.440 11:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.700 nvme0n1 00:36:30.700 11:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:30.700 11:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:30.700 11:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:30.700 11:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:30.700 11:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.700 11:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:30.700 11:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:30.700 11:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:30.700 11:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:30.700 11:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.700 11:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:30.700 11:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:30.700 11:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:30.700 11:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:36:30.700 11:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:30.700 11:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:30.700 11:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:30.701 11:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:30.701 11:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2NmMmFiNjhhMmQ2NDgzYjgxZDc4YjBkZjNkZWE0NzOQT+eg: 00:36:30.701 11:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmM2MjUzNzc2MGJiMWM5ZDdkYWUwNmMxZjAyYjM4Yjk5YzNkYWNiMmM0NWI5NWNiZDAzNDM4ODczYzI4MmRmYqlnfp0=: 00:36:30.701 11:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:30.701 11:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:30.701 11:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2NmMmFiNjhhMmQ2NDgzYjgxZDc4YjBkZjNkZWE0NzOQT+eg: 00:36:30.701 11:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmM2MjUzNzc2MGJiMWM5ZDdkYWUwNmMxZjAyYjM4Yjk5YzNkYWNiMmM0NWI5NWNiZDAzNDM4ODczYzI4MmRmYqlnfp0=: ]] 00:36:30.701 11:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmM2MjUzNzc2MGJiMWM5ZDdkYWUwNmMxZjAyYjM4Yjk5YzNkYWNiMmM0NWI5NWNiZDAzNDM4ODczYzI4MmRmYqlnfp0=: 00:36:30.701 11:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:36:30.701 11:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:30.701 11:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:30.701 11:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:30.701 11:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:30.701 11:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:30.701 11:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:36:30.701 11:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:30.701 11:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.701 11:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:30.701 11:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:30.701 11:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:30.701 11:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:30.701 11:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:30.701 11:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:30.701 11:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:30.701 11:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:30.701 11:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:30.701 11:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:30.701 11:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:30.701 11:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:30.701 11:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:30.701 11:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:30.701 11:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.270 nvme0n1 00:36:31.270 11:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:31.270 11:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:31.270 11:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:31.270 11:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:31.270 11:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.270 11:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:31.270 11:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:31.270 11:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:31.270 11:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:31.270 11:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.270 11:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:31.271 11:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:31.271 11:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:36:31.271 11:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:31.271 11:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:31.271 11:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:31.271 11:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:31.271 11:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFjYjNiYjljZmRlOTA0NjA5YTg3ODg5YWZiYzk0NzE5MTA5NTZlYjhjY2FiNWZjBjz4Rg==: 00:36:31.271 11:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2ZiYjYyODlkOWQxMjVhYzc5ZjQyMmU0ODEzNjc4YzE1OGNkZGE0Zjc0ZDEwOGIwD6TZVg==: 00:36:31.271 11:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:31.271 11:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:31.271 11:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YzFjYjNiYjljZmRlOTA0NjA5YTg3ODg5YWZiYzk0NzE5MTA5NTZlYjhjY2FiNWZjBjz4Rg==: 00:36:31.271 11:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2ZiYjYyODlkOWQxMjVhYzc5ZjQyMmU0ODEzNjc4YzE1OGNkZGE0Zjc0ZDEwOGIwD6TZVg==: ]] 00:36:31.271 11:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2ZiYjYyODlkOWQxMjVhYzc5ZjQyMmU0ODEzNjc4YzE1OGNkZGE0Zjc0ZDEwOGIwD6TZVg==: 00:36:31.271 11:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:36:31.271 11:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:31.271 11:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:31.271 11:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:31.271 11:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:31.271 11:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:31.271 11:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:36:31.271 11:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:31.271 11:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.271 11:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:31.271 11:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:31.271 11:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:31.271 11:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:31.271 11:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:31.271 11:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:31.271 11:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:31.271 11:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:31.271 11:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:31.271 11:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:31.271 11:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:31.271 11:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:31.271 11:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:31.271 11:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:31.271 11:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.840 nvme0n1 00:36:31.840 11:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:31.840 11:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:31.840 11:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:31.840 11:16:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:31.840 11:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.840 11:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:31.840 11:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:31.840 11:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:31.840 11:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:31.840 11:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.840 11:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:31.840 11:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:31.840 11:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:36:31.840 11:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:31.840 11:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:31.840 11:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:31.840 11:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:31.840 11:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmY1YTc5YzAwZDU5ZDhiZTAwYjI1Y2RjYzc5OTQ4ZWYgFHKp: 00:36:31.840 11:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2Y3YzJiMGJkNWZmZWU3OGFlNzY2YzEwOTgzNTcwODXAcEq8: 00:36:31.840 11:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:31.840 11:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:31.840 11:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmY1YTc5YzAwZDU5ZDhiZTAwYjI1Y2RjYzc5OTQ4ZWYgFHKp: 00:36:31.840 11:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2Y3YzJiMGJkNWZmZWU3OGFlNzY2YzEwOTgzNTcwODXAcEq8: ]] 00:36:31.840 11:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2Y3YzJiMGJkNWZmZWU3OGFlNzY2YzEwOTgzNTcwODXAcEq8: 00:36:31.840 11:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:36:31.840 11:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:31.840 11:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:31.840 11:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:31.840 11:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:31.840 11:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:31.840 11:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:36:31.840 11:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:31.840 11:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.840 11:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:31.840 11:16:51 
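[Note] All secrets in this log use the NVMe DH-HMAC-CHAP secret representation DHHC-1:NN:<base64>:, where NN encodes the hash already applied to the stored secret (00 = none, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512); the test deliberately mixes all four classes across key0..key4. The names passed to --dhchap-key/--dhchap-ctrlr-key are keyring entries rather than literal secrets. A plausible registration step, assuming SPDK's keyring_file module and a hypothetical path (the actual setup happens before this excerpt and is not shown):

    # Hypothetical file location; mode 0600 is required for key files.
    echo "DHHC-1:00:YzFjYjNiYjljZmRlOTA0NjA5YTg3ODg5YWZiYzk0NzE5MTA5NTZlYjhjY2FiNWZjBjz4Rg==:" > /tmp/key1
    chmod 0600 /tmp/key1
    scripts/rpc.py keyring_file_add_key key1 /tmp/key1
    # key1 can now be referenced by name, e.g. bdev_nvme_attach_controller ... --dhchap-key key1

The secret reused here is the keyid 1 host key visible verbatim in the trace above.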
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:31.840 11:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:31.840 11:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:31.840 11:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:31.840 11:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:31.840 11:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:31.840 11:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:31.840 11:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:31.840 11:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:31.840 11:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:31.840 11:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:31.840 11:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:31.840 11:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:31.840 11:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.410 nvme0n1 00:36:32.410 11:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:32.410 11:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:32.410 11:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:32.410 11:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:32.410 11:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.410 11:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:32.410 11:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:32.410 11:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:32.410 11:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:32.410 11:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.410 11:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:32.410 11:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:32.410 11:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:36:32.410 11:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:32.410 11:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:32.410 11:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:32.410 11:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:32.410 11:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:Mjg5MjFmMzE1NTk2MGYwYzMwNTEzNjE5M2EwMmNkODFhZWZjNTI5ZmI4ZTIzYTUwOGPg0A==: 00:36:32.410 11:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDVhNDk2ODBmN2YwNTc2MmNkYjVhZTc0OTRmM2Y5OGLvIL+q: 00:36:32.410 11:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:32.410 11:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:32.410 11:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Mjg5MjFmMzE1NTk2MGYwYzMwNTEzNjE5M2EwMmNkODFhZWZjNTI5ZmI4ZTIzYTUwOGPg0A==: 00:36:32.410 11:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDVhNDk2ODBmN2YwNTc2MmNkYjVhZTc0OTRmM2Y5OGLvIL+q: ]] 00:36:32.410 11:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDVhNDk2ODBmN2YwNTc2MmNkYjVhZTc0OTRmM2Y5OGLvIL+q: 00:36:32.410 11:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:36:32.410 11:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:32.410 11:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:32.410 11:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:32.410 11:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:32.410 11:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:32.410 11:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:36:32.410 11:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:32.410 11:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.410 11:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:32.410 11:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:32.410 11:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:32.410 11:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:32.410 11:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:32.410 11:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:32.410 11:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:32.410 11:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:32.410 11:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:32.410 11:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:32.410 11:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:32.410 11:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:32.410 11:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:32.410 11:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:32.410 
11:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.671 nvme0n1 00:36:32.671 11:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:32.671 11:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:32.671 11:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:32.671 11:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:32.671 11:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.671 11:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:32.932 11:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:32.932 11:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:32.932 11:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:32.932 11:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.932 11:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:32.932 11:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:32.932 11:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:36:32.932 11:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:32.932 11:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:32.932 11:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:32.932 11:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:32.932 11:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODVmM2Q2ZDgyMTk4YmVhM2QwYTNmYzVkYzYwMWFmNzhlMzIxZmM5MWY1OWI1NGJiM2IxMDcwYmJmOWE2MmJiOdfdwAE=: 00:36:32.932 11:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:32.932 11:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:32.932 11:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:32.932 11:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODVmM2Q2ZDgyMTk4YmVhM2QwYTNmYzVkYzYwMWFmNzhlMzIxZmM5MWY1OWI1NGJiM2IxMDcwYmJmOWE2MmJiOdfdwAE=: 00:36:32.932 11:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:32.932 11:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:36:32.932 11:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:32.932 11:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:32.932 11:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:32.932 11:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:32.932 11:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:32.932 11:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:36:32.932 11:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:36:32.932 11:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.932 11:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:32.932 11:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:32.932 11:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:32.932 11:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:32.932 11:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:32.932 11:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:32.932 11:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:32.932 11:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:32.932 11:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:32.932 11:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:32.932 11:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:32.932 11:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:32.932 11:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:32.932 11:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:32.932 11:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.193 nvme0n1 00:36:33.193 11:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:33.193 11:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:33.193 11:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:33.193 11:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:33.193 11:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.453 11:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:33.453 11:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:33.453 11:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:33.453 11:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:33.453 11:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.453 11:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:33.453 11:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:33.453 11:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:33.453 11:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:36:33.453 11:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:33.453 11:16:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:33.453 11:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:33.453 11:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:33.453 11:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2NmMmFiNjhhMmQ2NDgzYjgxZDc4YjBkZjNkZWE0NzOQT+eg: 00:36:33.453 11:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmM2MjUzNzc2MGJiMWM5ZDdkYWUwNmMxZjAyYjM4Yjk5YzNkYWNiMmM0NWI5NWNiZDAzNDM4ODczYzI4MmRmYqlnfp0=: 00:36:33.453 11:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:33.453 11:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:33.453 11:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2NmMmFiNjhhMmQ2NDgzYjgxZDc4YjBkZjNkZWE0NzOQT+eg: 00:36:33.453 11:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmM2MjUzNzc2MGJiMWM5ZDdkYWUwNmMxZjAyYjM4Yjk5YzNkYWNiMmM0NWI5NWNiZDAzNDM4ODczYzI4MmRmYqlnfp0=: ]] 00:36:33.453 11:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmM2MjUzNzc2MGJiMWM5ZDdkYWUwNmMxZjAyYjM4Yjk5YzNkYWNiMmM0NWI5NWNiZDAzNDM4ODczYzI4MmRmYqlnfp0=: 00:36:33.453 11:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:36:33.453 11:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:33.453 11:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:33.453 11:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:33.453 11:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:33.453 11:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:33.453 11:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:33.453 11:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:33.453 11:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.453 11:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:33.453 11:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:33.453 11:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:33.453 11:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:33.453 11:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:33.453 11:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:33.453 11:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:33.453 11:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:33.453 11:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:33.453 11:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:33.453 11:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:33.453 11:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:33.453 11:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:33.453 11:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:33.453 11:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.024 nvme0n1 00:36:34.024 11:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:34.285 11:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:34.285 11:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:34.285 11:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:34.285 11:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.285 11:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:34.285 11:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:34.285 11:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:34.285 11:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:34.285 11:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.285 11:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:34.285 11:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:34.285 11:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:36:34.285 11:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:34.285 11:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:34.285 11:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:34.285 11:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:34.285 11:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFjYjNiYjljZmRlOTA0NjA5YTg3ODg5YWZiYzk0NzE5MTA5NTZlYjhjY2FiNWZjBjz4Rg==: 00:36:34.285 11:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2ZiYjYyODlkOWQxMjVhYzc5ZjQyMmU0ODEzNjc4YzE1OGNkZGE0Zjc0ZDEwOGIwD6TZVg==: 00:36:34.285 11:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:34.285 11:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:34.285 11:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFjYjNiYjljZmRlOTA0NjA5YTg3ODg5YWZiYzk0NzE5MTA5NTZlYjhjY2FiNWZjBjz4Rg==: 00:36:34.285 11:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2ZiYjYyODlkOWQxMjVhYzc5ZjQyMmU0ODEzNjc4YzE1OGNkZGE0Zjc0ZDEwOGIwD6TZVg==: ]] 00:36:34.285 11:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2ZiYjYyODlkOWQxMjVhYzc5ZjQyMmU0ODEzNjc4YzE1OGNkZGE0Zjc0ZDEwOGIwD6TZVg==: 00:36:34.285 11:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:36:34.285 11:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:34.285 11:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:34.285 11:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:34.285 11:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:34.285 11:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:34.285 11:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:34.285 11:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:34.285 11:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.285 11:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:34.285 11:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:34.285 11:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:34.285 11:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:34.286 11:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:34.286 11:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:34.286 11:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:34.286 11:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:34.286 11:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:34.286 11:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:34.286 11:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:34.286 11:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:34.286 11:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:34.286 11:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:34.286 11:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.856 nvme0n1 00:36:34.857 11:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:35.117 11:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:35.117 11:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:35.117 11:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:35.117 11:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.117 11:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:35.117 11:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:35.117 11:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:35.117 11:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:36:35.117 11:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.117 11:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:35.117 11:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:35.117 11:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:36:35.117 11:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:35.117 11:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:35.117 11:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:35.117 11:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:35.117 11:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmY1YTc5YzAwZDU5ZDhiZTAwYjI1Y2RjYzc5OTQ4ZWYgFHKp: 00:36:35.117 11:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2Y3YzJiMGJkNWZmZWU3OGFlNzY2YzEwOTgzNTcwODXAcEq8: 00:36:35.117 11:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:35.117 11:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:35.117 11:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmY1YTc5YzAwZDU5ZDhiZTAwYjI1Y2RjYzc5OTQ4ZWYgFHKp: 00:36:35.117 11:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2Y3YzJiMGJkNWZmZWU3OGFlNzY2YzEwOTgzNTcwODXAcEq8: ]] 00:36:35.117 11:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2Y3YzJiMGJkNWZmZWU3OGFlNzY2YzEwOTgzNTcwODXAcEq8: 00:36:35.117 11:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:36:35.117 11:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:35.117 11:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:35.117 11:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:35.117 11:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:35.117 11:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:35.117 11:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:35.117 11:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:35.117 11:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.117 11:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:35.117 11:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:35.117 11:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:35.117 11:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:35.117 11:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:35.117 11:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:35.117 11:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:35.117 
11:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:35.117 11:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:35.117 11:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:35.117 11:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:35.117 11:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:35.117 11:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:35.117 11:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:35.117 11:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.061 nvme0n1 00:36:36.061 11:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:36.061 11:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:36.061 11:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:36.061 11:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:36.061 11:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.061 11:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:36.061 11:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:36.061 11:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:36.061 11:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:36.061 11:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.061 11:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:36.061 11:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:36.061 11:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:36:36.061 11:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:36.061 11:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:36.061 11:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:36.061 11:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:36.061 11:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Mjg5MjFmMzE1NTk2MGYwYzMwNTEzNjE5M2EwMmNkODFhZWZjNTI5ZmI4ZTIzYTUwOGPg0A==: 00:36:36.061 11:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDVhNDk2ODBmN2YwNTc2MmNkYjVhZTc0OTRmM2Y5OGLvIL+q: 00:36:36.061 11:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:36.061 11:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:36.061 11:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Mjg5MjFmMzE1NTk2MGYwYzMwNTEzNjE5M2EwMmNkODFhZWZjNTI5ZmI4ZTIzYTUwOGPg0A==: 00:36:36.061 11:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:ZDVhNDk2ODBmN2YwNTc2MmNkYjVhZTc0OTRmM2Y5OGLvIL+q: ]] 00:36:36.061 11:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDVhNDk2ODBmN2YwNTc2MmNkYjVhZTc0OTRmM2Y5OGLvIL+q: 00:36:36.061 11:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:36:36.061 11:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:36.061 11:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:36.061 11:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:36.061 11:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:36.061 11:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:36.061 11:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:36.061 11:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:36.061 11:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.061 11:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:36.061 11:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:36.061 11:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:36.061 11:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:36.061 11:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:36.061 11:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:36.061 11:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:36.061 11:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:36.061 11:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:36.061 11:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:36.061 11:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:36.061 11:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:36.061 11:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:36.061 11:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:36.061 11:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.634 nvme0n1 00:36:36.634 11:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:36.634 11:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:36.634 11:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:36.634 11:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:36.634 11:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.634 11:16:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:36.634 11:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:36.634 11:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:36.634 11:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:36.634 11:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.634 11:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:36.634 11:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:36.634 11:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:36:36.634 11:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:36.634 11:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:36.634 11:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:36.634 11:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:36.634 11:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODVmM2Q2ZDgyMTk4YmVhM2QwYTNmYzVkYzYwMWFmNzhlMzIxZmM5MWY1OWI1NGJiM2IxMDcwYmJmOWE2MmJiOdfdwAE=: 00:36:36.634 11:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:36.634 11:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:36.634 11:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:36.634 11:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODVmM2Q2ZDgyMTk4YmVhM2QwYTNmYzVkYzYwMWFmNzhlMzIxZmM5MWY1OWI1NGJiM2IxMDcwYmJmOWE2MmJiOdfdwAE=: 00:36:36.634 11:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:36.634 11:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:36:36.634 11:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:36.634 11:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:36.634 11:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:36.634 11:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:36.634 11:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:36.634 11:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:36.634 11:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:36.634 11:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.634 11:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:36.634 11:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:36.634 11:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:36.634 11:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:36.634 11:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:36.634 11:16:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:36.634 11:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:36.634 11:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:36.634 11:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:36.634 11:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:36.634 11:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:36.634 11:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:36.634 11:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:36.634 11:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:36.634 11:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.575 nvme0n1 00:36:37.575 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:37.575 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:37.575 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:37.575 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:37.575 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.575 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:37.575 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:37.575 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:37.575 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:37.575 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.575 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:37.575 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:36:37.575 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:37.575 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:37.575 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:36:37.575 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:37.575 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:37.575 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:37.575 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:37.575 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2NmMmFiNjhhMmQ2NDgzYjgxZDc4YjBkZjNkZWE0NzOQT+eg: 00:36:37.575 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NmM2MjUzNzc2MGJiMWM5ZDdkYWUwNmMxZjAyYjM4Yjk5YzNkYWNiMmM0NWI5NWNiZDAzNDM4ODczYzI4MmRmYqlnfp0=: 00:36:37.575 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:37.575 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:37.575 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2NmMmFiNjhhMmQ2NDgzYjgxZDc4YjBkZjNkZWE0NzOQT+eg: 00:36:37.575 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmM2MjUzNzc2MGJiMWM5ZDdkYWUwNmMxZjAyYjM4Yjk5YzNkYWNiMmM0NWI5NWNiZDAzNDM4ODczYzI4MmRmYqlnfp0=: ]] 00:36:37.575 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmM2MjUzNzc2MGJiMWM5ZDdkYWUwNmMxZjAyYjM4Yjk5YzNkYWNiMmM0NWI5NWNiZDAzNDM4ODczYzI4MmRmYqlnfp0=: 00:36:37.575 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:36:37.575 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:37.575 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:37.575 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:37.575 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:37.575 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:37.575 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:37.575 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:37.575 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.575 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:37.575 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:37.575 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:37.575 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:37.575 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:37.575 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:37.575 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:37.575 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:37.575 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:37.575 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:37.575 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:37.575 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:37.575 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:37.575 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:37.575 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:36:37.836 nvme0n1 00:36:37.836 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:37.836 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:37.836 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:37.836 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:37.836 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.836 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:37.836 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:37.836 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:37.836 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:37.836 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.836 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:37.836 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:37.836 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:36:37.836 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:37.836 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:37.836 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:37.836 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:37.836 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFjYjNiYjljZmRlOTA0NjA5YTg3ODg5YWZiYzk0NzE5MTA5NTZlYjhjY2FiNWZjBjz4Rg==: 00:36:37.836 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2ZiYjYyODlkOWQxMjVhYzc5ZjQyMmU0ODEzNjc4YzE1OGNkZGE0Zjc0ZDEwOGIwD6TZVg==: 00:36:37.836 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:37.836 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:37.837 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFjYjNiYjljZmRlOTA0NjA5YTg3ODg5YWZiYzk0NzE5MTA5NTZlYjhjY2FiNWZjBjz4Rg==: 00:36:37.837 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2ZiYjYyODlkOWQxMjVhYzc5ZjQyMmU0ODEzNjc4YzE1OGNkZGE0Zjc0ZDEwOGIwD6TZVg==: ]] 00:36:37.837 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2ZiYjYyODlkOWQxMjVhYzc5ZjQyMmU0ODEzNjc4YzE1OGNkZGE0Zjc0ZDEwOGIwD6TZVg==: 00:36:37.837 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:36:37.837 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:37.837 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:37.837 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:37.837 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:37.837 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:36:37.837 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:37.837 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:37.837 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.837 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:37.837 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:37.837 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:37.837 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:37.837 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:37.837 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:37.837 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:37.837 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:37.837 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:37.837 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:37.837 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:37.837 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:37.837 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:37.837 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:37.837 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.098 nvme0n1 00:36:38.098 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:38.098 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:38.098 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:38.098 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:38.098 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.098 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:38.098 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:38.098 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:38.098 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:38.098 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.098 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:38.098 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:38.098 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:36:38.098 
11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:38.098 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:38.098 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:38.098 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:38.098 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmY1YTc5YzAwZDU5ZDhiZTAwYjI1Y2RjYzc5OTQ4ZWYgFHKp: 00:36:38.098 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2Y3YzJiMGJkNWZmZWU3OGFlNzY2YzEwOTgzNTcwODXAcEq8: 00:36:38.098 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:38.098 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:38.098 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmY1YTc5YzAwZDU5ZDhiZTAwYjI1Y2RjYzc5OTQ4ZWYgFHKp: 00:36:38.098 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2Y3YzJiMGJkNWZmZWU3OGFlNzY2YzEwOTgzNTcwODXAcEq8: ]] 00:36:38.099 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2Y3YzJiMGJkNWZmZWU3OGFlNzY2YzEwOTgzNTcwODXAcEq8: 00:36:38.099 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:36:38.099 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:38.099 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:38.099 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:38.099 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:38.099 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:38.099 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:38.099 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:38.099 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.099 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:38.099 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:38.099 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:38.099 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:38.099 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:38.099 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:38.099 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:38.099 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:38.099 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:38.099 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:38.099 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:38.099 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:38.099 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:38.099 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:38.099 11:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.099 nvme0n1 00:36:38.099 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:38.099 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:38.099 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:38.099 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:38.099 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.359 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:38.359 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:38.359 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:38.359 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:38.359 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.359 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:38.359 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:38.359 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:36:38.359 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:38.359 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:38.359 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:38.359 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:38.359 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Mjg5MjFmMzE1NTk2MGYwYzMwNTEzNjE5M2EwMmNkODFhZWZjNTI5ZmI4ZTIzYTUwOGPg0A==: 00:36:38.359 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDVhNDk2ODBmN2YwNTc2MmNkYjVhZTc0OTRmM2Y5OGLvIL+q: 00:36:38.359 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:38.359 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:38.359 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Mjg5MjFmMzE1NTk2MGYwYzMwNTEzNjE5M2EwMmNkODFhZWZjNTI5ZmI4ZTIzYTUwOGPg0A==: 00:36:38.359 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDVhNDk2ODBmN2YwNTc2MmNkYjVhZTc0OTRmM2Y5OGLvIL+q: ]] 00:36:38.359 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDVhNDk2ODBmN2YwNTc2MmNkYjVhZTc0OTRmM2Y5OGLvIL+q: 00:36:38.359 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:36:38.359 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:38.359 
11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:38.359 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:38.359 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:38.359 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:38.359 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:38.359 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:38.359 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.359 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:38.359 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:38.359 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:38.359 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:38.359 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:38.359 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:38.359 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:38.359 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:38.359 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:38.359 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:38.359 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:38.359 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:38.359 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:38.359 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:38.359 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.359 nvme0n1 00:36:38.359 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:38.359 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:38.359 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:38.360 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:38.360 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.360 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:38.620 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:38.620 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:38.620 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:38.620 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:36:38.620 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:38.620 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:38.620 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:36:38.620 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:38.620 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:38.620 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:38.620 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:38.620 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODVmM2Q2ZDgyMTk4YmVhM2QwYTNmYzVkYzYwMWFmNzhlMzIxZmM5MWY1OWI1NGJiM2IxMDcwYmJmOWE2MmJiOdfdwAE=: 00:36:38.620 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:38.620 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:38.620 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:38.620 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODVmM2Q2ZDgyMTk4YmVhM2QwYTNmYzVkYzYwMWFmNzhlMzIxZmM5MWY1OWI1NGJiM2IxMDcwYmJmOWE2MmJiOdfdwAE=: 00:36:38.620 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:38.620 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:36:38.620 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:38.620 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:38.620 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:38.620 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:38.620 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:38.620 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:38.620 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:38.620 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.620 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:38.620 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:38.620 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:38.620 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:38.620 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:38.620 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:38.620 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:38.620 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:38.620 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:38.620 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:38.620 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:38.621 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:38.621 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:38.621 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:38.621 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.621 nvme0n1 00:36:38.621 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:38.621 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:38.621 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:38.621 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:38.621 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.621 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:38.621 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:38.621 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:38.621 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:38.621 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.881 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:38.881 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:38.881 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:38.881 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:36:38.881 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:38.882 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:38.882 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:38.882 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:38.882 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2NmMmFiNjhhMmQ2NDgzYjgxZDc4YjBkZjNkZWE0NzOQT+eg: 00:36:38.882 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmM2MjUzNzc2MGJiMWM5ZDdkYWUwNmMxZjAyYjM4Yjk5YzNkYWNiMmM0NWI5NWNiZDAzNDM4ODczYzI4MmRmYqlnfp0=: 00:36:38.882 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:38.882 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:38.882 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2NmMmFiNjhhMmQ2NDgzYjgxZDc4YjBkZjNkZWE0NzOQT+eg: 00:36:38.882 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmM2MjUzNzc2MGJiMWM5ZDdkYWUwNmMxZjAyYjM4Yjk5YzNkYWNiMmM0NWI5NWNiZDAzNDM4ODczYzI4MmRmYqlnfp0=: ]] 00:36:38.882 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NmM2MjUzNzc2MGJiMWM5ZDdkYWUwNmMxZjAyYjM4Yjk5YzNkYWNiMmM0NWI5NWNiZDAzNDM4ODczYzI4MmRmYqlnfp0=: 00:36:38.882 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:36:38.882 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:38.882 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:38.882 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:38.882 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:38.882 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:38.882 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:38.882 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:38.882 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.882 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:38.882 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:38.882 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:38.882 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:38.882 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:38.882 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:38.882 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:38.882 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:38.882 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:38.882 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:38.882 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:38.882 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:38.882 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:38.882 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:38.882 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.882 nvme0n1 00:36:38.882 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:38.882 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:38.882 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:38.882 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:38.882 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.882 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:39.142 
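
The auth.sh@101-104 frames repeat one test cycle per (dhgroup, keyid) pair; at this point the trace has just moved from ffdhe2048 to ffdhe3072, with sha512 as the digest under test throughout this stretch. A hedged reconstruction of the driver loop behind those frames (the keys/ckeys contents below are stand-ins for the DHHC-1 secrets shown above, and the two helpers are the functions visible in the trace):

keys=(key0 key1 key2 key3 key4)     # placeholders for the DHHC-1 secrets above
ckeys=(ckey0 ckey1 ckey2 ckey3 "")  # keyid 4 deliberately has no controller key
for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144; do     # auth.sh@101
  for keyid in "${!keys[@]}"; do                               # auth.sh@102
    nvmet_auth_set_key sha512 "$dhgroup" "$keyid"    # target side, auth.sh@103
    connect_authenticate sha512 "$dhgroup" "$keyid"  # host side, auth.sh@104
  done
done
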
11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:39.142 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:39.142 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:39.142 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.142 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:39.142 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:39.142 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:36:39.142 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:39.142 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:39.142 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:39.142 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:39.142 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFjYjNiYjljZmRlOTA0NjA5YTg3ODg5YWZiYzk0NzE5MTA5NTZlYjhjY2FiNWZjBjz4Rg==: 00:36:39.142 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2ZiYjYyODlkOWQxMjVhYzc5ZjQyMmU0ODEzNjc4YzE1OGNkZGE0Zjc0ZDEwOGIwD6TZVg==: 00:36:39.142 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:39.142 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:39.142 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFjYjNiYjljZmRlOTA0NjA5YTg3ODg5YWZiYzk0NzE5MTA5NTZlYjhjY2FiNWZjBjz4Rg==: 00:36:39.142 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2ZiYjYyODlkOWQxMjVhYzc5ZjQyMmU0ODEzNjc4YzE1OGNkZGE0Zjc0ZDEwOGIwD6TZVg==: ]] 00:36:39.142 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2ZiYjYyODlkOWQxMjVhYzc5ZjQyMmU0ODEzNjc4YzE1OGNkZGE0Zjc0ZDEwOGIwD6TZVg==: 00:36:39.142 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:36:39.142 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:39.142 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:39.142 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:39.142 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:39.142 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:39.142 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:39.142 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:39.142 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.142 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:39.142 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:39.142 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:39.142 11:16:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:39.142 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:39.142 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:39.142 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:39.142 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:39.142 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:39.142 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:39.142 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:39.142 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:39.142 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:39.142 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:39.142 11:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.142 nvme0n1 00:36:39.142 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:39.142 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:39.142 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:39.142 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:39.143 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.143 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:39.405 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:39.405 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:39.405 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:39.405 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.405 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:39.405 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:39.405 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:36:39.405 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:39.405 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:39.405 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:39.405 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:39.405 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmY1YTc5YzAwZDU5ZDhiZTAwYjI1Y2RjYzc5OTQ4ZWYgFHKp: 00:36:39.405 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2Y3YzJiMGJkNWZmZWU3OGFlNzY2YzEwOTgzNTcwODXAcEq8: 00:36:39.405 11:16:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:39.405 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:39.405 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmY1YTc5YzAwZDU5ZDhiZTAwYjI1Y2RjYzc5OTQ4ZWYgFHKp: 00:36:39.405 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2Y3YzJiMGJkNWZmZWU3OGFlNzY2YzEwOTgzNTcwODXAcEq8: ]] 00:36:39.405 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2Y3YzJiMGJkNWZmZWU3OGFlNzY2YzEwOTgzNTcwODXAcEq8: 00:36:39.405 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:36:39.405 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:39.405 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:39.405 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:39.405 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:39.405 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:39.405 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:39.405 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:39.405 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.405 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:39.405 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:39.405 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:39.405 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:39.405 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:39.405 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:39.405 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:39.405 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:39.405 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:39.405 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:39.405 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:39.405 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:39.405 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:39.405 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:39.405 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.405 nvme0n1 00:36:39.405 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:39.405 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:39.405 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:39.405 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:39.405 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.405 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:39.666 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:39.666 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:39.666 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:39.666 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.666 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:39.666 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:39.666 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:36:39.666 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:39.666 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:39.666 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:39.666 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:39.666 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Mjg5MjFmMzE1NTk2MGYwYzMwNTEzNjE5M2EwMmNkODFhZWZjNTI5ZmI4ZTIzYTUwOGPg0A==: 00:36:39.666 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDVhNDk2ODBmN2YwNTc2MmNkYjVhZTc0OTRmM2Y5OGLvIL+q: 00:36:39.666 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:39.666 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:39.666 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Mjg5MjFmMzE1NTk2MGYwYzMwNTEzNjE5M2EwMmNkODFhZWZjNTI5ZmI4ZTIzYTUwOGPg0A==: 00:36:39.666 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDVhNDk2ODBmN2YwNTc2MmNkYjVhZTc0OTRmM2Y5OGLvIL+q: ]] 00:36:39.666 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDVhNDk2ODBmN2YwNTc2MmNkYjVhZTc0OTRmM2Y5OGLvIL+q: 00:36:39.666 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:36:39.666 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:39.666 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:39.666 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:39.666 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:39.666 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:39.666 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:39.666 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:39.666 11:16:59 
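
The nvmf/common.sh@767-781 frames that precede every attach resolve which address the host should dial: an associative map from transport to the environment variable holding the address, then an indirect expansion of that variable (10.0.0.1 on this rig). A minimal sketch, assuming TEST_TRANSPORT is the variable that held "tcp" here and omitting the helper's other branches:

get_main_ns_ip() {
  local ip
  local -A ip_candidates
  ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # common.sh@770
  ip_candidates["tcp"]=NVMF_INITIATOR_IP       # common.sh@771
  [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
  ip=${ip_candidates[$TEST_TRANSPORT]}  # the *name* of the env var (common.sh@774)
  [[ -z ${!ip} ]] && return 1           # indirect expansion: its value (common.sh@776)
  echo "${!ip}"                         # common.sh@781
}
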
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.666 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:39.666 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:39.666 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:39.666 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:39.666 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:39.666 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:39.666 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:39.666 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:39.666 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:39.666 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:39.666 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:39.666 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:39.666 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:39.666 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:39.666 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.666 nvme0n1 00:36:39.666 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:39.666 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:39.666 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:39.666 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:39.666 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.928 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:39.928 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:39.928 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:39.928 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:39.928 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.928 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:39.928 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:39.928 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:36:39.928 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:39.928 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:39.928 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:39.928 
11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:39.928 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODVmM2Q2ZDgyMTk4YmVhM2QwYTNmYzVkYzYwMWFmNzhlMzIxZmM5MWY1OWI1NGJiM2IxMDcwYmJmOWE2MmJiOdfdwAE=: 00:36:39.928 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:39.928 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:39.928 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:39.928 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODVmM2Q2ZDgyMTk4YmVhM2QwYTNmYzVkYzYwMWFmNzhlMzIxZmM5MWY1OWI1NGJiM2IxMDcwYmJmOWE2MmJiOdfdwAE=: 00:36:39.928 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:39.928 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:36:39.928 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:39.928 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:39.928 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:39.928 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:39.928 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:39.928 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:39.928 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:39.928 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.928 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:39.928 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:39.928 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:39.928 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:39.928 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:39.928 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:39.928 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:39.928 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:39.928 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:39.928 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:39.928 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:39.928 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:39.928 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:39.928 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:39.928 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
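
Note the shape of the keyid=4 rounds just traced: ckey is empty (auth.sh@46, and the [[ -z '' ]] at @51), so the ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"} expansion at auth.sh@58 vanishes and the attach carries --dhchap-key only, i.e. the host proves itself but does not demand controller authentication. The two attach shapes, as they appear verbatim in this trace (key3/ckey3/key4 name keys registered earlier in the test, outside this excerpt):

# bidirectional: host key plus controller key
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key3 --dhchap-ctrlr-key ckey3
# unidirectional: host key only (keyid 4)
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key4
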
00:36:39.928 nvme0n1 00:36:39.928 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:40.190 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:40.190 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:40.190 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:40.190 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:40.190 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:40.190 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:40.190 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:40.190 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:40.190 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:40.190 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:40.190 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:40.190 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:40.190 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:36:40.190 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:40.190 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:40.190 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:40.190 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:40.190 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2NmMmFiNjhhMmQ2NDgzYjgxZDc4YjBkZjNkZWE0NzOQT+eg: 00:36:40.190 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmM2MjUzNzc2MGJiMWM5ZDdkYWUwNmMxZjAyYjM4Yjk5YzNkYWNiMmM0NWI5NWNiZDAzNDM4ODczYzI4MmRmYqlnfp0=: 00:36:40.190 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:40.190 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:40.190 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2NmMmFiNjhhMmQ2NDgzYjgxZDc4YjBkZjNkZWE0NzOQT+eg: 00:36:40.190 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmM2MjUzNzc2MGJiMWM5ZDdkYWUwNmMxZjAyYjM4Yjk5YzNkYWNiMmM0NWI5NWNiZDAzNDM4ODczYzI4MmRmYqlnfp0=: ]] 00:36:40.190 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmM2MjUzNzc2MGJiMWM5ZDdkYWUwNmMxZjAyYjM4Yjk5YzNkYWNiMmM0NWI5NWNiZDAzNDM4ODczYzI4MmRmYqlnfp0=: 00:36:40.190 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:36:40.190 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:40.190 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:40.190 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:40.190 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:40.190 11:16:59 
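
Each cycle also pins the host side before connecting: the auth.sh@60 frame restricts the initiator to exactly the digest/dhgroup pair under test, so the DH-HMAC-CHAP negotiation for the round starting here (the ffdhe4096 sweep) can only settle on that pair. rpc_cmd is SPDK's test wrapper around scripts/rpc.py:

rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
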
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:40.190 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:40.190 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:40.190 11:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:40.190 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:40.190 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:40.190 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:40.190 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:40.190 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:40.190 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:40.190 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:40.190 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:40.190 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:40.190 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:40.190 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:40.190 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:40.190 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:40.190 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:40.190 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:40.451 nvme0n1 00:36:40.451 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:40.451 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:40.451 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:40.451 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:40.451 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:40.451 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:40.451 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:40.451 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:40.451 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:40.451 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:40.451 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:40.451 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:40.451 11:17:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:36:40.451 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:40.451 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:40.451 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:40.451 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:40.451 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFjYjNiYjljZmRlOTA0NjA5YTg3ODg5YWZiYzk0NzE5MTA5NTZlYjhjY2FiNWZjBjz4Rg==: 00:36:40.451 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2ZiYjYyODlkOWQxMjVhYzc5ZjQyMmU0ODEzNjc4YzE1OGNkZGE0Zjc0ZDEwOGIwD6TZVg==: 00:36:40.451 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:40.451 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:40.451 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFjYjNiYjljZmRlOTA0NjA5YTg3ODg5YWZiYzk0NzE5MTA5NTZlYjhjY2FiNWZjBjz4Rg==: 00:36:40.451 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2ZiYjYyODlkOWQxMjVhYzc5ZjQyMmU0ODEzNjc4YzE1OGNkZGE0Zjc0ZDEwOGIwD6TZVg==: ]] 00:36:40.451 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2ZiYjYyODlkOWQxMjVhYzc5ZjQyMmU0ODEzNjc4YzE1OGNkZGE0Zjc0ZDEwOGIwD6TZVg==: 00:36:40.451 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:36:40.451 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:40.451 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:40.451 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:40.451 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:40.451 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:40.451 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:40.451 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:40.451 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:40.451 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:40.451 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:40.451 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:40.451 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:40.451 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:40.451 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:40.451 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:40.451 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:40.451 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:40.451 11:17:00 
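
The bare nvme0n1 tokens between frames are, by all appearances, the stdout of the attach RPC, which prints the bdev created on a successful (authenticated) connect. Each round then verifies and tears down at auth.sh@64-65; the \n\v\m\e\0 on the right-hand side of the test is just xtrace escaping a literal string match. A sketch of that check:

name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')  # auth.sh@64
[[ $name == "nvme0" ]]                       # controller came up authenticated
rpc_cmd bdev_nvme_detach_controller nvme0    # auth.sh@65: clean slate for the next keyid
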
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:40.451 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:40.451 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:40.452 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:40.452 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:40.452 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:40.713 nvme0n1 00:36:40.713 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:40.713 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:40.713 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:40.713 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:40.713 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:40.713 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:40.713 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:40.713 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:40.713 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:40.713 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:40.713 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:40.713 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:40.713 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:36:40.713 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:40.713 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:40.713 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:40.713 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:40.713 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmY1YTc5YzAwZDU5ZDhiZTAwYjI1Y2RjYzc5OTQ4ZWYgFHKp: 00:36:40.713 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2Y3YzJiMGJkNWZmZWU3OGFlNzY2YzEwOTgzNTcwODXAcEq8: 00:36:40.713 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:40.713 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:40.713 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmY1YTc5YzAwZDU5ZDhiZTAwYjI1Y2RjYzc5OTQ4ZWYgFHKp: 00:36:40.713 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2Y3YzJiMGJkNWZmZWU3OGFlNzY2YzEwOTgzNTcwODXAcEq8: ]] 00:36:40.713 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2Y3YzJiMGJkNWZmZWU3OGFlNzY2YzEwOTgzNTcwODXAcEq8: 00:36:40.713 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:36:40.714 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:40.714 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:40.714 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:40.714 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:40.714 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:40.714 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:40.714 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:40.714 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:40.714 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:40.714 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:40.714 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:40.714 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:40.714 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:40.714 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:40.714 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:40.714 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:40.714 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:40.714 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:40.714 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:40.714 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:40.714 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:40.714 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:40.714 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:40.974 nvme0n1 00:36:41.235 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:41.235 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:41.235 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:41.235 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:41.235 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.235 11:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:41.235 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:41.235 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:36:41.235 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:41.235 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.235 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:41.235 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:41.235 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:36:41.235 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:41.235 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:41.235 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:41.235 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:41.235 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Mjg5MjFmMzE1NTk2MGYwYzMwNTEzNjE5M2EwMmNkODFhZWZjNTI5ZmI4ZTIzYTUwOGPg0A==: 00:36:41.235 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDVhNDk2ODBmN2YwNTc2MmNkYjVhZTc0OTRmM2Y5OGLvIL+q: 00:36:41.235 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:41.236 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:41.236 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Mjg5MjFmMzE1NTk2MGYwYzMwNTEzNjE5M2EwMmNkODFhZWZjNTI5ZmI4ZTIzYTUwOGPg0A==: 00:36:41.236 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDVhNDk2ODBmN2YwNTc2MmNkYjVhZTc0OTRmM2Y5OGLvIL+q: ]] 00:36:41.236 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDVhNDk2ODBmN2YwNTc2MmNkYjVhZTc0OTRmM2Y5OGLvIL+q: 00:36:41.236 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:36:41.236 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:41.236 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:41.236 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:41.236 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:41.236 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:41.236 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:41.236 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:41.236 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.236 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:41.236 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:41.236 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:41.236 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:41.236 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:41.236 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:41.236 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:41.236 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:41.236 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:41.236 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:41.236 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:41.236 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:41.236 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:41.236 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:41.236 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.497 nvme0n1 00:36:41.497 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:41.497 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:41.497 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:41.497 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:41.497 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.497 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:41.497 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:41.497 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:41.497 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:41.497 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.497 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:41.497 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:41.497 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:36:41.497 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:41.497 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:41.497 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:41.497 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:41.497 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODVmM2Q2ZDgyMTk4YmVhM2QwYTNmYzVkYzYwMWFmNzhlMzIxZmM5MWY1OWI1NGJiM2IxMDcwYmJmOWE2MmJiOdfdwAE=: 00:36:41.497 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:41.497 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:41.497 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:41.497 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ODVmM2Q2ZDgyMTk4YmVhM2QwYTNmYzVkYzYwMWFmNzhlMzIxZmM5MWY1OWI1NGJiM2IxMDcwYmJmOWE2MmJiOdfdwAE=: 00:36:41.497 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:41.497 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:36:41.497 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:41.497 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:41.497 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:41.497 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:41.497 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:41.497 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:41.497 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:41.497 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.497 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:41.497 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:41.497 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:41.497 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:41.497 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:41.497 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:41.497 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:41.498 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:41.498 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:41.498 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:41.498 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:41.498 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:41.498 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:41.498 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:41.498 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.759 nvme0n1 00:36:41.759 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:41.759 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:41.759 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:41.759 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:41.759 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.759 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:41.759 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:41.759 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:41.759 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:41.759 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.759 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:41.759 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:41.759 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:41.759 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:36:41.759 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:41.759 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:41.759 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:41.759 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:41.759 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2NmMmFiNjhhMmQ2NDgzYjgxZDc4YjBkZjNkZWE0NzOQT+eg: 00:36:41.759 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmM2MjUzNzc2MGJiMWM5ZDdkYWUwNmMxZjAyYjM4Yjk5YzNkYWNiMmM0NWI5NWNiZDAzNDM4ODczYzI4MmRmYqlnfp0=: 00:36:41.759 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:41.759 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:41.759 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2NmMmFiNjhhMmQ2NDgzYjgxZDc4YjBkZjNkZWE0NzOQT+eg: 00:36:41.759 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmM2MjUzNzc2MGJiMWM5ZDdkYWUwNmMxZjAyYjM4Yjk5YzNkYWNiMmM0NWI5NWNiZDAzNDM4ODczYzI4MmRmYqlnfp0=: ]] 00:36:41.759 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmM2MjUzNzc2MGJiMWM5ZDdkYWUwNmMxZjAyYjM4Yjk5YzNkYWNiMmM0NWI5NWNiZDAzNDM4ODczYzI4MmRmYqlnfp0=: 00:36:41.759 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:36:41.759 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:41.759 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:41.759 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:41.759 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:41.759 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:41.759 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:41.759 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:42.020 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:42.020 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:42.020 11:17:01 
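
The host/auth.sh@101-104 markers in the trace above expose the structure driving this stretch of the log: an outer loop over DH groups and an inner loop over key ids, where each iteration first programs the kernel target via nvmet_auth_set_key and then authenticates from the SPDK host via connect_authenticate. A minimal sketch of that skeleton, reconstructed from the trace markers (the digest is fixed at sha512 in this part of the run; the dhgroups array contents are an assumption based on the groups seen in the log, not the literal test values):

    # Loop skeleton implied by the host/auth.sh@101-104 trace markers.
    # dhgroups contents are illustrative; the trace shows ffdhe4096,
    # ffdhe6144 and ffdhe8192 being exercised in this section.
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
    for dhgroup in "${dhgroups[@]}"; do                      # host/auth.sh@101
        for keyid in "${!keys[@]}"; do                       # host/auth.sh@102
            nvmet_auth_set_key sha512 "$dhgroup" "$keyid"    # @103: target side
            connect_authenticate sha512 "$dhgroup" "$keyid"  # @104: host side
        done
    done
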
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:42.020 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:42.020 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:42.020 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:42.020 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:42.020 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:42.020 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:42.020 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:42.020 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:42.020 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:42.020 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:42.020 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:42.020 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:42.020 11:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:42.280 nvme0n1 00:36:42.280 11:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:42.280 11:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:42.280 11:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:42.280 11:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:42.280 11:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:42.280 11:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:42.541 11:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:42.541 11:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:42.541 11:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:42.541 11:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:42.541 11:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:42.541 11:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:42.541 11:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:36:42.541 11:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:42.541 11:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:42.541 11:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:42.541 11:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:42.541 11:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YzFjYjNiYjljZmRlOTA0NjA5YTg3ODg5YWZiYzk0NzE5MTA5NTZlYjhjY2FiNWZjBjz4Rg==: 00:36:42.541 11:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2ZiYjYyODlkOWQxMjVhYzc5ZjQyMmU0ODEzNjc4YzE1OGNkZGE0Zjc0ZDEwOGIwD6TZVg==: 00:36:42.541 11:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:42.541 11:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:42.541 11:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFjYjNiYjljZmRlOTA0NjA5YTg3ODg5YWZiYzk0NzE5MTA5NTZlYjhjY2FiNWZjBjz4Rg==: 00:36:42.541 11:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2ZiYjYyODlkOWQxMjVhYzc5ZjQyMmU0ODEzNjc4YzE1OGNkZGE0Zjc0ZDEwOGIwD6TZVg==: ]] 00:36:42.541 11:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2ZiYjYyODlkOWQxMjVhYzc5ZjQyMmU0ODEzNjc4YzE1OGNkZGE0Zjc0ZDEwOGIwD6TZVg==: 00:36:42.541 11:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:36:42.541 11:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:42.541 11:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:42.542 11:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:42.542 11:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:42.542 11:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:42.542 11:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:42.542 11:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:42.542 11:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:42.542 11:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:42.542 11:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:42.542 11:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:42.542 11:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:42.542 11:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:42.542 11:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:42.542 11:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:42.542 11:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:42.542 11:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:42.542 11:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:42.542 11:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:42.542 11:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:42.542 11:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:42.542 11:17:02 
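
Each connect_authenticate call expands to the same RPC sequence visible in the surrounding entries: restrict the allowed digest and DH group, attach with the per-keyid secrets, verify the controller came up, then detach. Collected into one readable cycle, with every argument taken verbatim from the log around this point (keyid 1, sha512, ffdhe6144):

    # One authenticated connect/verify/disconnect cycle as traced above.
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # host/auth.sh@64-65: the new controller must be named nvme0, then detach.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
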
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:42.542 11:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:42.801 nvme0n1 00:36:42.802 11:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:42.802 11:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:42.802 11:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:42.802 11:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:42.802 11:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:43.062 11:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:43.062 11:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:43.062 11:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:43.062 11:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:43.062 11:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:43.062 11:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:43.062 11:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:43.062 11:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:36:43.062 11:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:43.062 11:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:43.062 11:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:43.062 11:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:43.062 11:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmY1YTc5YzAwZDU5ZDhiZTAwYjI1Y2RjYzc5OTQ4ZWYgFHKp: 00:36:43.062 11:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2Y3YzJiMGJkNWZmZWU3OGFlNzY2YzEwOTgzNTcwODXAcEq8: 00:36:43.062 11:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:43.062 11:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:43.062 11:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmY1YTc5YzAwZDU5ZDhiZTAwYjI1Y2RjYzc5OTQ4ZWYgFHKp: 00:36:43.062 11:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2Y3YzJiMGJkNWZmZWU3OGFlNzY2YzEwOTgzNTcwODXAcEq8: ]] 00:36:43.062 11:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2Y3YzJiMGJkNWZmZWU3OGFlNzY2YzEwOTgzNTcwODXAcEq8: 00:36:43.062 11:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:36:43.062 11:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:43.062 11:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:43.062 11:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:43.062 11:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:43.062 11:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:43.062 11:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:43.062 11:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:43.062 11:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:43.062 11:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:43.062 11:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:43.062 11:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:43.062 11:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:43.062 11:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:43.062 11:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:43.062 11:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:43.062 11:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:43.062 11:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:43.062 11:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:43.062 11:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:43.062 11:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:43.062 11:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:43.062 11:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:43.062 11:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:43.634 nvme0n1 00:36:43.634 11:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:43.634 11:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:43.634 11:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:43.634 11:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:43.634 11:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:43.634 11:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:43.634 11:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:43.634 11:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:43.634 11:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:43.634 11:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:43.634 11:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:43.634 11:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:43.634 11:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:36:43.634 11:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:43.634 11:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:43.634 11:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:43.634 11:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:43.634 11:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Mjg5MjFmMzE1NTk2MGYwYzMwNTEzNjE5M2EwMmNkODFhZWZjNTI5ZmI4ZTIzYTUwOGPg0A==: 00:36:43.634 11:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDVhNDk2ODBmN2YwNTc2MmNkYjVhZTc0OTRmM2Y5OGLvIL+q: 00:36:43.634 11:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:43.634 11:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:43.634 11:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Mjg5MjFmMzE1NTk2MGYwYzMwNTEzNjE5M2EwMmNkODFhZWZjNTI5ZmI4ZTIzYTUwOGPg0A==: 00:36:43.634 11:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDVhNDk2ODBmN2YwNTc2MmNkYjVhZTc0OTRmM2Y5OGLvIL+q: ]] 00:36:43.634 11:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDVhNDk2ODBmN2YwNTc2MmNkYjVhZTc0OTRmM2Y5OGLvIL+q: 00:36:43.634 11:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:36:43.634 11:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:43.634 11:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:43.634 11:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:43.634 11:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:43.634 11:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:43.634 11:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:43.634 11:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:43.634 11:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:43.634 11:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:43.634 11:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:43.634 11:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:43.634 11:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:43.634 11:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:43.634 11:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:43.634 11:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:43.634 11:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:43.634 11:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:43.634 11:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:43.634 11:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:43.634 11:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:43.634 11:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:43.634 11:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:43.634 11:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:43.894 nvme0n1 00:36:43.894 11:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:43.894 11:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:43.894 11:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:43.894 11:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:43.894 11:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:43.894 11:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:44.154 11:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:44.154 11:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:44.154 11:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:44.154 11:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.154 11:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:44.154 11:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:44.154 11:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:36:44.154 11:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:44.154 11:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:44.154 11:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:44.154 11:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:44.154 11:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODVmM2Q2ZDgyMTk4YmVhM2QwYTNmYzVkYzYwMWFmNzhlMzIxZmM5MWY1OWI1NGJiM2IxMDcwYmJmOWE2MmJiOdfdwAE=: 00:36:44.154 11:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:44.154 11:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:44.154 11:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:44.154 11:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODVmM2Q2ZDgyMTk4YmVhM2QwYTNmYzVkYzYwMWFmNzhlMzIxZmM5MWY1OWI1NGJiM2IxMDcwYmJmOWE2MmJiOdfdwAE=: 00:36:44.154 11:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:44.154 11:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:36:44.154 11:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:44.154 11:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:44.154 11:17:03 
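
The repeated nvmf/common.sh@767-781 block is the get_main_ns_ip helper choosing which environment variable holds the address to dial. The candidate map stores variable names, and indirect expansion resolves the chosen one. A sketch of that logic as it reads in the trace; TEST_TRANSPORT is an assumed name for whatever expands to "tcp" in the [[ -z tcp ]] tests above:

    # Sketch of get_main_ns_ip per the nvmf/common.sh@767-781 trace markers.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP            # @770
        ip_candidates["tcp"]=NVMF_INITIATOR_IP                # @771
        [[ -z $TEST_TRANSPORT ]] && return 1                  # @773: "tcp" here
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}                  # @774: a var *name*
        [[ -z ${!ip} ]] && return 1                           # @776: dereferenced
        echo "${!ip}"                                         # @781: 10.0.0.1 here
    }
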
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:44.154 11:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:44.154 11:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:44.154 11:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:44.154 11:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:44.154 11:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.154 11:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:44.154 11:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:44.154 11:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:44.154 11:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:44.154 11:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:44.154 11:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:44.154 11:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:44.154 11:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:44.154 11:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:44.154 11:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:44.154 11:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:44.154 11:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:44.154 11:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:44.154 11:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:44.154 11:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.414 nvme0n1 00:36:44.414 11:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:44.675 11:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:44.675 11:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:44.675 11:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:44.675 11:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.675 11:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:44.675 11:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:44.675 11:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:44.675 11:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:44.675 11:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.675 11:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:44.675 11:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:44.675 11:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:44.675 11:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:36:44.675 11:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:44.675 11:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:44.675 11:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:44.675 11:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:44.675 11:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2NmMmFiNjhhMmQ2NDgzYjgxZDc4YjBkZjNkZWE0NzOQT+eg: 00:36:44.675 11:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmM2MjUzNzc2MGJiMWM5ZDdkYWUwNmMxZjAyYjM4Yjk5YzNkYWNiMmM0NWI5NWNiZDAzNDM4ODczYzI4MmRmYqlnfp0=: 00:36:44.675 11:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:44.675 11:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:44.675 11:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2NmMmFiNjhhMmQ2NDgzYjgxZDc4YjBkZjNkZWE0NzOQT+eg: 00:36:44.675 11:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmM2MjUzNzc2MGJiMWM5ZDdkYWUwNmMxZjAyYjM4Yjk5YzNkYWNiMmM0NWI5NWNiZDAzNDM4ODczYzI4MmRmYqlnfp0=: ]] 00:36:44.675 11:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmM2MjUzNzc2MGJiMWM5ZDdkYWUwNmMxZjAyYjM4Yjk5YzNkYWNiMmM0NWI5NWNiZDAzNDM4ODczYzI4MmRmYqlnfp0=: 00:36:44.675 11:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:36:44.675 11:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:44.675 11:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:44.675 11:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:44.675 11:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:44.675 11:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:44.675 11:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:44.675 11:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:44.675 11:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.675 11:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:44.675 11:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:44.675 11:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:44.675 11:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:44.675 11:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:44.675 11:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:44.675 11:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:44.675 11:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:44.675 11:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:44.675 11:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:44.675 11:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:44.675 11:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:44.675 11:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:44.675 11:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:44.675 11:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:45.616 nvme0n1 00:36:45.616 11:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:45.616 11:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:45.616 11:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:45.616 11:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:45.616 11:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:45.616 11:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:45.616 11:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:45.616 11:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:45.616 11:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:45.616 11:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:45.616 11:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:45.616 11:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:45.616 11:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:36:45.616 11:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:45.616 11:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:45.616 11:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:45.616 11:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:45.616 11:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFjYjNiYjljZmRlOTA0NjA5YTg3ODg5YWZiYzk0NzE5MTA5NTZlYjhjY2FiNWZjBjz4Rg==: 00:36:45.616 11:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2ZiYjYyODlkOWQxMjVhYzc5ZjQyMmU0ODEzNjc4YzE1OGNkZGE0Zjc0ZDEwOGIwD6TZVg==: 00:36:45.616 11:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:45.616 11:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:45.616 11:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YzFjYjNiYjljZmRlOTA0NjA5YTg3ODg5YWZiYzk0NzE5MTA5NTZlYjhjY2FiNWZjBjz4Rg==: 00:36:45.616 11:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2ZiYjYyODlkOWQxMjVhYzc5ZjQyMmU0ODEzNjc4YzE1OGNkZGE0Zjc0ZDEwOGIwD6TZVg==: ]] 00:36:45.616 11:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2ZiYjYyODlkOWQxMjVhYzc5ZjQyMmU0ODEzNjc4YzE1OGNkZGE0Zjc0ZDEwOGIwD6TZVg==: 00:36:45.616 11:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:36:45.616 11:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:45.616 11:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:45.616 11:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:45.616 11:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:45.616 11:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:45.616 11:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:45.616 11:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:45.616 11:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:45.616 11:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:45.616 11:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:45.616 11:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:45.616 11:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:45.616 11:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:45.616 11:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:45.616 11:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:45.616 11:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:45.616 11:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:45.616 11:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:45.616 11:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:45.616 11:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:45.616 11:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:45.616 11:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:45.616 11:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:46.185 nvme0n1 00:36:46.185 11:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:46.185 11:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:46.185 11:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:46.185 11:17:06 
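
All of the secrets exercised here use the DHHC-1 wire format from NVMe DH-HMAC-CHAP: DHHC-1:<t>:<base64 payload>:, where <t> selects the key transformation (00 = none, 01/02/03 = SHA-256/384/512, matching 32/48/64-byte secrets) and the payload carries the secret plus a CRC32 check. Keys of this shape can be produced with nvme-cli, assuming a build that ships gen-dhchap-key; the exact flag names may differ between nvme-cli versions:

    # Hypothetical invocation: generate a 64-byte secret transformed with
    # SHA-512, i.e. a "DHHC-1:03:..." key like the keyid-3/4 entries above.
    nvme gen-dhchap-key --hmac=3 --key-length=64
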
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:46.185 11:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:46.185 11:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:46.185 11:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:46.185 11:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:46.185 11:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:46.185 11:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:46.185 11:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:46.185 11:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:46.185 11:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:36:46.185 11:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:46.185 11:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:46.185 11:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:46.185 11:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:46.185 11:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmY1YTc5YzAwZDU5ZDhiZTAwYjI1Y2RjYzc5OTQ4ZWYgFHKp: 00:36:46.185 11:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2Y3YzJiMGJkNWZmZWU3OGFlNzY2YzEwOTgzNTcwODXAcEq8: 00:36:46.185 11:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:46.185 11:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:46.185 11:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmY1YTc5YzAwZDU5ZDhiZTAwYjI1Y2RjYzc5OTQ4ZWYgFHKp: 00:36:46.185 11:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2Y3YzJiMGJkNWZmZWU3OGFlNzY2YzEwOTgzNTcwODXAcEq8: ]] 00:36:46.185 11:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2Y3YzJiMGJkNWZmZWU3OGFlNzY2YzEwOTgzNTcwODXAcEq8: 00:36:46.185 11:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:36:46.185 11:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:46.185 11:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:46.185 11:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:46.185 11:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:46.185 11:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:46.185 11:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:46.185 11:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:46.185 11:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:46.185 11:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:46.185 11:17:06 
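
The host/auth.sh@58 line that keeps reappearing, ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}), is how the script decides whether to pass a controller key at all: the :+ expansion yields the two option words only when ckeys[keyid] is set and non-empty, which is why the keyid-4 attaches in this log carry no --dhchap-ctrlr-key. A standalone illustration of the idiom:

    # The :+ expansion produces the option pair only for non-empty entries;
    # keyid 4's ckey is empty, so the array stays empty and no flag is passed.
    ckeys=("secret0" "secret1" "secret2" "secret3" "")
    keyid=4
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "${#ckey[@]}"   # prints 0 for keyid=4, 2 for keyid=0..3
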
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:46.185 11:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:46.185 11:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:46.185 11:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:46.185 11:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:46.185 11:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:46.185 11:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:46.185 11:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:46.185 11:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:46.185 11:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:46.185 11:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:46.185 11:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:46.185 11:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:46.185 11:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:47.124 nvme0n1 00:36:47.125 11:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:47.125 11:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:47.125 11:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:47.125 11:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:47.125 11:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:47.125 11:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:47.125 11:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:47.125 11:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:47.125 11:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:47.125 11:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:47.125 11:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:47.125 11:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:47.125 11:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:36:47.125 11:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:47.125 11:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:47.125 11:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:47.125 11:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:47.125 11:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:Mjg5MjFmMzE1NTk2MGYwYzMwNTEzNjE5M2EwMmNkODFhZWZjNTI5ZmI4ZTIzYTUwOGPg0A==: 00:36:47.125 11:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDVhNDk2ODBmN2YwNTc2MmNkYjVhZTc0OTRmM2Y5OGLvIL+q: 00:36:47.125 11:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:47.125 11:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:47.125 11:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Mjg5MjFmMzE1NTk2MGYwYzMwNTEzNjE5M2EwMmNkODFhZWZjNTI5ZmI4ZTIzYTUwOGPg0A==: 00:36:47.125 11:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDVhNDk2ODBmN2YwNTc2MmNkYjVhZTc0OTRmM2Y5OGLvIL+q: ]] 00:36:47.125 11:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDVhNDk2ODBmN2YwNTc2MmNkYjVhZTc0OTRmM2Y5OGLvIL+q: 00:36:47.125 11:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:36:47.125 11:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:47.125 11:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:47.125 11:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:47.125 11:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:47.125 11:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:47.125 11:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:47.125 11:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:47.125 11:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:47.125 11:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:47.125 11:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:47.125 11:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:47.125 11:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:47.125 11:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:47.125 11:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:47.125 11:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:47.125 11:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:47.125 11:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:47.125 11:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:47.125 11:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:47.125 11:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:47.125 11:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:47.125 11:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:47.125 
11:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:47.695 nvme0n1 00:36:47.695 11:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:47.695 11:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:47.695 11:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:47.695 11:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:47.695 11:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:47.956 11:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:47.956 11:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:47.956 11:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:47.956 11:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:47.956 11:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:47.956 11:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:47.956 11:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:47.956 11:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:36:47.956 11:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:47.956 11:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:47.956 11:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:47.956 11:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:47.956 11:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODVmM2Q2ZDgyMTk4YmVhM2QwYTNmYzVkYzYwMWFmNzhlMzIxZmM5MWY1OWI1NGJiM2IxMDcwYmJmOWE2MmJiOdfdwAE=: 00:36:47.956 11:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:47.956 11:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:47.956 11:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:47.956 11:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODVmM2Q2ZDgyMTk4YmVhM2QwYTNmYzVkYzYwMWFmNzhlMzIxZmM5MWY1OWI1NGJiM2IxMDcwYmJmOWE2MmJiOdfdwAE=: 00:36:47.956 11:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:47.956 11:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:36:47.956 11:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:47.956 11:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:47.956 11:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:47.956 11:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:47.956 11:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:47.956 11:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:47.956 11:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:36:47.956 11:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:47.956 11:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:47.956 11:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:47.956 11:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:47.956 11:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:47.956 11:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:47.956 11:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:47.956 11:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:47.956 11:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:47.956 11:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:47.956 11:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:47.956 11:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:47.956 11:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:47.956 11:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:47.956 11:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:47.956 11:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:48.526 nvme0n1 00:36:48.526 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:48.526 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:48.526 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:48.526 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:48.526 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:48.786 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:48.786 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:48.786 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:48.786 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:48.786 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:48.786 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:48.786 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:36:48.786 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:48.786 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:48.786 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:48.786 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:36:48.787 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFjYjNiYjljZmRlOTA0NjA5YTg3ODg5YWZiYzk0NzE5MTA5NTZlYjhjY2FiNWZjBjz4Rg==: 00:36:48.787 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2ZiYjYyODlkOWQxMjVhYzc5ZjQyMmU0ODEzNjc4YzE1OGNkZGE0Zjc0ZDEwOGIwD6TZVg==: 00:36:48.787 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:48.787 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:48.787 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFjYjNiYjljZmRlOTA0NjA5YTg3ODg5YWZiYzk0NzE5MTA5NTZlYjhjY2FiNWZjBjz4Rg==: 00:36:48.787 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2ZiYjYyODlkOWQxMjVhYzc5ZjQyMmU0ODEzNjc4YzE1OGNkZGE0Zjc0ZDEwOGIwD6TZVg==: ]] 00:36:48.787 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2ZiYjYyODlkOWQxMjVhYzc5ZjQyMmU0ODEzNjc4YzE1OGNkZGE0Zjc0ZDEwOGIwD6TZVg==: 00:36:48.787 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:48.787 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:48.787 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:48.787 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:48.787 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:36:48.787 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:48.787 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:48.787 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:48.787 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:48.787 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:48.787 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:48.787 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:48.787 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:48.787 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:48.787 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:48.787 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:36:48.787 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:36:48.787 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:36:48.787 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:36:48.787 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:48.787 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:36:48.787 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:48.787 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:36:48.787 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:48.787 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:48.787 request: 00:36:48.787 { 00:36:48.787 "name": "nvme0", 00:36:48.787 "trtype": "tcp", 00:36:48.787 "traddr": "10.0.0.1", 00:36:48.787 "adrfam": "ipv4", 00:36:48.787 "trsvcid": "4420", 00:36:48.787 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:36:48.787 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:36:48.787 "prchk_reftag": false, 00:36:48.787 "prchk_guard": false, 00:36:48.787 "hdgst": false, 00:36:48.787 "ddgst": false, 00:36:48.787 "allow_unrecognized_csi": false, 00:36:48.787 "method": "bdev_nvme_attach_controller", 00:36:48.787 "req_id": 1 00:36:48.787 } 00:36:48.787 Got JSON-RPC error response 00:36:48.787 response: 00:36:48.787 { 00:36:48.787 "code": -5, 00:36:48.787 "message": "Input/output error" 00:36:48.787 } 00:36:48.787 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:36:48.787 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:36:48.787 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:48.787 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:48.787 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:48.787 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:36:48.787 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:36:48.787 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:48.787 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:48.787 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:48.787 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:36:48.787 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:36:48.787 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:48.787 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:48.787 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:48.787 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:48.787 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:48.787 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:48.787 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:48.787 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:48.787 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 
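The failed attach above is the suite's first negative check: the target subsystem was configured with DH-HMAC-CHAP keys (via nvmet_auth_set_key earlier), so a connect attempt that supplies no host key is expected to be rejected, surfacing as JSON-RPC code -5 (Input/output error). rpc_cmd here is the test framework's wrapper around SPDK's scripts/rpc.py; a rough standalone equivalent of the logged call — flags copied verbatim from the trace, default rpc.py socket assumed — would be:

    # Expected to fail: the target requires DH-HMAC-CHAP and no
    # --dhchap-key is supplied (arguments taken from the logged rpc_cmd).
    ./scripts/rpc.py bdev_nvme_attach_controller \
        -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0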
00:36:48.787 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:48.787 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:36:48.787 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:36:48.787 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:36:48.787 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:36:48.787 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:48.787 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:36:48.787 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:48.787 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:36:48.787 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:48.787 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:48.787 request: 00:36:48.787 { 00:36:48.787 "name": "nvme0", 00:36:48.787 "trtype": "tcp", 00:36:48.787 "traddr": "10.0.0.1", 00:36:48.787 "adrfam": "ipv4", 00:36:48.787 "trsvcid": "4420", 00:36:48.787 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:36:48.787 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:36:48.787 "prchk_reftag": false, 00:36:48.787 "prchk_guard": false, 00:36:48.787 "hdgst": false, 00:36:48.787 "ddgst": false, 00:36:48.787 "dhchap_key": "key2", 00:36:48.787 "allow_unrecognized_csi": false, 00:36:48.787 "method": "bdev_nvme_attach_controller", 00:36:48.787 "req_id": 1 00:36:48.787 } 00:36:48.787 Got JSON-RPC error response 00:36:48.787 response: 00:36:48.787 { 00:36:48.787 "code": -5, 00:36:48.787 "message": "Input/output error" 00:36:48.787 } 00:36:49.048 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:36:49.048 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:36:49.048 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:49.048 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:49.048 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:49.048 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:36:49.048 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:36:49.048 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:49.048 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:49.048 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:49.048 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
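The NOT wrapper driving these negative attaches inverts the wrapped command's exit status, so the test step passes only when the RPC fails (the traced es=1 bookkeeping implements this). A minimal sketch of that pattern — a simplified stand-in for the helper defined in common/autotest_common.sh, not its exact implementation:

    # Succeed only when the wrapped command fails.
    NOT() {
        if "$@"; then
            return 1    # wrapped command unexpectedly succeeded
        fi
        return 0        # wrapped command failed, as the test expects
    }

Run against any command expected to error out (for example, an attach with a key the target does not hold), it turns the expected failure into a passing assertion.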
00:36:49.048 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:36:49.048 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:49.048 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:49.048 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:49.048 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:49.048 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:49.048 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:49.048 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:49.048 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:49.048 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:49.048 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:49.048 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:49.048 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:36:49.048 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:49.048 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:36:49.048 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:49.048 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:36:49.048 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:49.048 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:49.048 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:49.048 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:49.048 request: 00:36:49.048 { 00:36:49.048 "name": "nvme0", 00:36:49.048 "trtype": "tcp", 00:36:49.048 "traddr": "10.0.0.1", 00:36:49.048 "adrfam": "ipv4", 00:36:49.048 "trsvcid": "4420", 00:36:49.048 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:36:49.048 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:36:49.048 "prchk_reftag": false, 00:36:49.048 "prchk_guard": false, 00:36:49.048 "hdgst": false, 00:36:49.048 "ddgst": false, 00:36:49.048 "dhchap_key": "key1", 00:36:49.048 "dhchap_ctrlr_key": "ckey2", 00:36:49.048 "allow_unrecognized_csi": false, 00:36:49.048 "method": "bdev_nvme_attach_controller", 00:36:49.048 "req_id": 1 00:36:49.048 } 00:36:49.048 Got JSON-RPC error response 00:36:49.048 response: 00:36:49.048 { 00:36:49.048 "code": -5, 00:36:49.048 "message": "Input/output 
error" 00:36:49.048 } 00:36:49.048 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:36:49.048 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:36:49.048 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:49.048 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:49.048 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:49.048 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:36:49.048 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:49.048 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:49.048 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:49.048 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:49.048 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:49.048 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:49.048 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:49.048 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:49.048 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:49.048 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:49.048 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:36:49.048 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:49.048 11:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:49.309 nvme0n1 00:36:49.309 11:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:49.309 11:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:36:49.309 11:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:49.309 11:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:49.309 11:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:49.309 11:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:49.309 11:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmY1YTc5YzAwZDU5ZDhiZTAwYjI1Y2RjYzc5OTQ4ZWYgFHKp: 00:36:49.309 11:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2Y3YzJiMGJkNWZmZWU3OGFlNzY2YzEwOTgzNTcwODXAcEq8: 00:36:49.309 11:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:49.309 11:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:49.309 11:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmY1YTc5YzAwZDU5ZDhiZTAwYjI1Y2RjYzc5OTQ4ZWYgFHKp: 00:36:49.309 11:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2Y3YzJiMGJkNWZmZWU3OGFlNzY2YzEwOTgzNTcwODXAcEq8: ]] 00:36:49.309 11:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2Y3YzJiMGJkNWZmZWU3OGFlNzY2YzEwOTgzNTcwODXAcEq8: 00:36:49.309 11:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:49.309 11:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:49.309 11:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:49.309 11:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:49.309 11:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:36:49.309 11:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:36:49.309 11:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:49.309 11:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:49.309 11:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:49.309 11:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:49.309 11:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:49.309 11:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:36:49.309 11:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:49.309 11:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:36:49.309 11:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:49.309 11:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:36:49.309 11:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:49.309 11:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:49.309 11:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:49.309 11:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:49.309 request: 00:36:49.309 { 00:36:49.309 "name": "nvme0", 00:36:49.309 "dhchap_key": "key1", 00:36:49.309 "dhchap_ctrlr_key": "ckey2", 00:36:49.309 "method": "bdev_nvme_set_keys", 00:36:49.309 "req_id": 1 00:36:49.309 } 00:36:49.309 Got JSON-RPC error response 00:36:49.309 response: 00:36:49.309 { 00:36:49.309 "code": -13, 00:36:49.309 "message": "Permission denied" 00:36:49.309 } 00:36:49.309 11:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:36:49.309 11:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:36:49.309 11:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:49.309 11:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:49.309 11:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:36:49.309 11:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:36:49.309 11:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:36:49.309 11:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:49.309 11:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:49.309 11:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:49.309 11:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:36:49.309 11:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:36:50.690 11:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:36:50.690 11:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:36:50.690 11:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:50.690 11:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:50.690 11:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:50.690 11:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:36:50.690 11:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:36:51.637 11:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:36:51.637 11:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:36:51.637 11:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:51.637 11:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:51.637 11:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:51.637 11:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:36:51.637 11:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:36:51.637 11:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:51.637 11:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:51.637 11:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:51.637 11:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:51.637 11:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFjYjNiYjljZmRlOTA0NjA5YTg3ODg5YWZiYzk0NzE5MTA5NTZlYjhjY2FiNWZjBjz4Rg==: 00:36:51.637 11:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2ZiYjYyODlkOWQxMjVhYzc5ZjQyMmU0ODEzNjc4YzE1OGNkZGE0Zjc0ZDEwOGIwD6TZVg==: 00:36:51.637 11:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:51.637 11:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:51.637 11:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFjYjNiYjljZmRlOTA0NjA5YTg3ODg5YWZiYzk0NzE5MTA5NTZlYjhjY2FiNWZjBjz4Rg==: 00:36:51.637 11:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2ZiYjYyODlkOWQxMjVhYzc5ZjQyMmU0ODEzNjc4YzE1OGNkZGE0Zjc0ZDEwOGIwD6TZVg==: ]] 00:36:51.637 11:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:M2ZiYjYyODlkOWQxMjVhYzc5ZjQyMmU0ODEzNjc4YzE1OGNkZGE0Zjc0ZDEwOGIwD6TZVg==: 00:36:51.637 11:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:36:51.637 11:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:36:51.637 11:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:51.637 11:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:51.637 11:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:51.637 11:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:51.637 11:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:51.637 11:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:51.637 11:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:51.637 11:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:51.637 11:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:51.637 11:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:36:51.637 11:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:51.637 11:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:51.637 nvme0n1 00:36:51.637 11:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:51.637 11:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:36:51.637 11:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:51.637 11:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:51.637 11:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:51.637 11:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:51.637 11:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmY1YTc5YzAwZDU5ZDhiZTAwYjI1Y2RjYzc5OTQ4ZWYgFHKp: 00:36:51.637 11:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2Y3YzJiMGJkNWZmZWU3OGFlNzY2YzEwOTgzNTcwODXAcEq8: 00:36:51.637 11:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:51.637 11:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:51.637 11:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmY1YTc5YzAwZDU5ZDhiZTAwYjI1Y2RjYzc5OTQ4ZWYgFHKp: 00:36:51.637 11:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2Y3YzJiMGJkNWZmZWU3OGFlNzY2YzEwOTgzNTcwODXAcEq8: ]] 00:36:51.637 11:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2Y3YzJiMGJkNWZmZWU3OGFlNzY2YzEwOTgzNTcwODXAcEq8: 00:36:51.637 11:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:36:51.637 11:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@650 -- # local es=0 00:36:51.637 11:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:36:51.637 11:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:36:51.637 11:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:51.637 11:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:36:51.637 11:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:51.637 11:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:36:51.637 11:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:51.637 11:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:51.637 request: 00:36:51.637 { 00:36:51.637 "name": "nvme0", 00:36:51.637 "dhchap_key": "key2", 00:36:51.637 "dhchap_ctrlr_key": "ckey1", 00:36:51.637 "method": "bdev_nvme_set_keys", 00:36:51.637 "req_id": 1 00:36:51.637 } 00:36:51.637 Got JSON-RPC error response 00:36:51.637 response: 00:36:51.637 { 00:36:51.637 "code": -13, 00:36:51.637 "message": "Permission denied" 00:36:51.637 } 00:36:51.637 11:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:36:51.637 11:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:36:51.637 11:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:51.637 11:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:51.637 11:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:51.637 11:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:36:51.637 11:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:36:51.637 11:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:51.637 11:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:51.637 11:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:51.897 11:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:36:51.897 11:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:36:52.833 11:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:36:52.833 11:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:36:52.833 11:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:52.833 11:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:52.833 11:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:52.833 11:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:36:52.833 11:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:36:52.833 11:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:36:52.833 11:17:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:36:52.833 11:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@514 -- # nvmfcleanup 00:36:52.833 11:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:36:52.833 11:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:52.833 11:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:36:52.833 11:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:52.834 11:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:52.834 rmmod nvme_tcp 00:36:52.834 rmmod nvme_fabrics 00:36:52.834 11:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:52.834 11:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:36:52.834 11:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:36:52.834 11:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@515 -- # '[' -n 2096252 ']' 00:36:52.834 11:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # killprocess 2096252 00:36:52.834 11:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 2096252 ']' 00:36:52.834 11:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 2096252 00:36:52.834 11:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:36:52.834 11:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:52.834 11:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2096252 00:36:53.093 11:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:36:53.093 11:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:36:53.093 11:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2096252' 00:36:53.093 killing process with pid 2096252 00:36:53.093 11:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 2096252 00:36:53.093 11:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 2096252 00:36:53.093 11:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:36:53.093 11:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:36:53.093 11:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:36:53.093 11:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:36:53.093 11:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:36:53.093 11:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # iptables-save 00:36:53.093 11:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # iptables-restore 00:36:53.093 11:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:53.093 11:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:53.093 11:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:53.093 11:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:36:53.093 11:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:55.030 11:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:55.030 11:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:36:55.030 11:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:36:55.030 11:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:36:55.030 11:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:36:55.030 11:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # echo 0 00:36:55.308 11:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:55.308 11:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:36:55.308 11:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:36:55.308 11:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:55.308 11:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:36:55.308 11:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:36:55.308 11:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:58.617 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:36:58.617 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:36:58.617 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:36:58.617 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:36:58.876 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:36:58.876 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:36:58.876 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:36:58.876 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:36:58.876 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:36:58.876 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:36:58.876 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:36:58.876 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:36:58.876 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:36:58.876 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:36:58.876 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:36:58.876 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:36:58.876 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:36:59.445 11:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.Db7 /tmp/spdk.key-null.NDm /tmp/spdk.key-sha256.7KU /tmp/spdk.key-sha384.xtv /tmp/spdk.key-sha512.gcE /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:36:59.445 11:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:02.746 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:37:02.746 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:37:02.746 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 
00:37:02.746 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:37:02.746 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:37:02.746 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:37:02.746 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:37:02.746 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:37:02.746 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:37:02.746 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:37:02.746 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:37:02.746 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:37:02.746 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:37:02.746 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:37:02.746 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:37:02.746 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:37:02.746 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:37:03.006 00:37:03.006 real 1m2.995s 00:37:03.006 user 0m56.700s 00:37:03.006 sys 0m15.772s 00:37:03.006 11:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:03.006 11:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:03.006 ************************************ 00:37:03.006 END TEST nvmf_auth_host 00:37:03.006 ************************************ 00:37:03.006 11:17:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:37:03.006 11:17:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:37:03.006 11:17:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:37:03.006 11:17:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:03.006 11:17:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:37:03.006 ************************************ 00:37:03.006 START TEST nvmf_digest 00:37:03.006 ************************************ 00:37:03.006 11:17:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:37:03.268 * Looking for test storage... 
00:37:03.268 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:37:03.268 11:17:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:37:03.268 11:17:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lcov --version 00:37:03.268 11:17:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:37:03.268 11:17:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:37:03.268 11:17:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:03.268 11:17:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:03.268 11:17:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:03.268 11:17:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:37:03.268 11:17:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:37:03.268 11:17:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:37:03.268 11:17:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:37:03.268 11:17:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:37:03.268 11:17:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:37:03.268 11:17:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:37:03.268 11:17:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:03.268 11:17:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:37:03.268 11:17:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:37:03.268 11:17:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:03.268 11:17:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:03.268 11:17:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:37:03.268 11:17:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:37:03.268 11:17:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:03.268 11:17:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:37:03.268 11:17:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:37:03.268 11:17:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:37:03.268 11:17:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:37:03.268 11:17:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:03.268 11:17:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:37:03.268 11:17:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:37:03.268 11:17:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:03.268 11:17:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:03.268 11:17:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:37:03.268 11:17:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:03.268 11:17:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:37:03.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:03.268 --rc genhtml_branch_coverage=1 00:37:03.268 --rc genhtml_function_coverage=1 00:37:03.268 --rc genhtml_legend=1 00:37:03.268 --rc geninfo_all_blocks=1 00:37:03.268 --rc geninfo_unexecuted_blocks=1 00:37:03.268 00:37:03.268 ' 00:37:03.268 11:17:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:37:03.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:03.268 --rc genhtml_branch_coverage=1 00:37:03.268 --rc genhtml_function_coverage=1 00:37:03.268 --rc genhtml_legend=1 00:37:03.268 --rc geninfo_all_blocks=1 00:37:03.268 --rc geninfo_unexecuted_blocks=1 00:37:03.268 00:37:03.268 ' 00:37:03.268 11:17:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:37:03.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:03.268 --rc genhtml_branch_coverage=1 00:37:03.268 --rc genhtml_function_coverage=1 00:37:03.268 --rc genhtml_legend=1 00:37:03.268 --rc geninfo_all_blocks=1 00:37:03.268 --rc geninfo_unexecuted_blocks=1 00:37:03.268 00:37:03.268 ' 00:37:03.268 11:17:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:37:03.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:03.268 --rc genhtml_branch_coverage=1 00:37:03.268 --rc genhtml_function_coverage=1 00:37:03.268 --rc genhtml_legend=1 00:37:03.268 --rc geninfo_all_blocks=1 00:37:03.268 --rc geninfo_unexecuted_blocks=1 00:37:03.268 00:37:03.268 ' 00:37:03.268 11:17:23 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:03.268 11:17:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:37:03.268 11:17:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:03.268 11:17:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:03.268 
11:17:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:03.268 11:17:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:03.268 11:17:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:03.268 11:17:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:03.268 11:17:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:03.268 11:17:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:03.268 11:17:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:03.268 11:17:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:03.269 11:17:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:37:03.269 11:17:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:37:03.269 11:17:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:03.269 11:17:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:03.269 11:17:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:03.269 11:17:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:03.269 11:17:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:03.269 11:17:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:37:03.269 11:17:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:03.269 11:17:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:03.269 11:17:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:03.269 11:17:23 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:03.269 11:17:23 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:03.269 11:17:23 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:03.269 11:17:23 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:37:03.269 11:17:23 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:03.269 11:17:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:37:03.269 11:17:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:03.269 11:17:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:03.269 11:17:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:03.269 11:17:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:03.269 11:17:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:03.269 11:17:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:03.269 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:03.269 11:17:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:03.269 11:17:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:03.269 11:17:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:03.269 11:17:23 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:37:03.269 11:17:23 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:37:03.269 11:17:23 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:37:03.269 11:17:23 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:37:03.269 11:17:23 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:37:03.269 11:17:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:37:03.269 11:17:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:03.269 11:17:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # prepare_net_devs 00:37:03.269 11:17:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@436 -- # local -g is_hw=no 00:37:03.269 11:17:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # remove_spdk_ns 00:37:03.269 11:17:23 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:03.269 11:17:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:03.269 11:17:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:03.269 11:17:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:37:03.269 11:17:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:37:03.269 11:17:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:37:03.269 11:17:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:37:11.411 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:11.411 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:37:11.411 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:11.411 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:11.411 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:11.411 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:11.411 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:11.411 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:37:11.411 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:11.411 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:37:11.411 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:37:11.411 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:37:11.411 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:37:11.411 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:37:11.411 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:37:11.411 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:11.411 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:11.411 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:11.411 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:11.411 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:11.411 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:11.411 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:11.411 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:11.411 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:11.411 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:11.411 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:11.411 
11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:11.411 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:11.411 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:11.411 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:11.411 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:11.411 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:11.411 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:11.411 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:11.411 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:37:11.411 Found 0000:31:00.0 (0x8086 - 0x159b) 00:37:11.411 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:11.411 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:11.411 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:11.411 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:11.411 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:11.411 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:11.411 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:37:11.411 Found 0000:31:00.1 (0x8086 - 0x159b) 00:37:11.411 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:11.411 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:11.411 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:11.411 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:11.411 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:11.411 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:11.411 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:11.411 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:11.411 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:37:11.411 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:11.411 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:37:11.411 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:11.411 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ up == up ]] 00:37:11.411 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:37:11.411 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:11.411 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:37:11.411 Found net devices under 0000:31:00.0: cvl_0_0 
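The discovery loop above maps each supported PCI function to its kernel net device purely through sysfs: glob the device's net/ directory, then strip everything up to the last slash. A minimal standalone sketch of that lookup, using one of the BDFs found in this run (any other address works the same way):

#!/usr/bin/env bash
# sysfs lookup as performed in the trace above: map a PCI function to
# the kernel net device(s) bound to it (here: cvl_0_0).
pci=0000:31:00.0                                  # BDF observed in this run
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # one glob entry per netdev
pci_net_devs=("${pci_net_devs[@]##*/}")           # keep only the basename
echo "Found net devices under $pci: ${pci_net_devs[*]}"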
00:37:11.411 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:37:11.411 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:37:11.411 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:11.411 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:37:11.411 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:11.411 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ up == up ]] 00:37:11.411 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:37:11.411 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:11.411 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:37:11.411 Found net devices under 0000:31:00.1: cvl_0_1 00:37:11.411 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:37:11.411 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:37:11.411 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # is_hw=yes 00:37:11.411 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:37:11.411 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:37:11.411 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:37:11.411 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:11.411 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:11.411 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:11.411 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:11.411 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:11.411 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:11.411 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:11.411 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:11.411 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:11.411 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:11.411 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:11.411 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:11.412 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:11.412 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:11.412 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:11.412 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:11.412 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:37:11.412 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:11.412 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:11.412 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:11.412 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:11.412 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:11.412 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:11.412 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:11.412 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.681 ms 00:37:11.412 00:37:11.412 --- 10.0.0.2 ping statistics --- 00:37:11.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:11.412 rtt min/avg/max/mdev = 0.681/0.681/0.681/0.000 ms 00:37:11.412 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:11.412 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:11.412 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.317 ms 00:37:11.412 00:37:11.412 --- 10.0.0.1 ping statistics --- 00:37:11.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:11.412 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:37:11.412 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:11.412 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # return 0 00:37:11.412 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:37:11.412 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:11.412 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:37:11.412 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:37:11.412 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:11.412 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:37:11.412 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:37:11.412 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:37:11.412 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:37:11.412 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:37:11.412 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:37:11.412 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:11.412 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:37:11.412 ************************************ 00:37:11.412 START TEST nvmf_digest_clean 00:37:11.412 ************************************ 00:37:11.412 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:37:11.412 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@120 -- # local dsa_initiator 00:37:11.412 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:37:11.412 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:37:11.412 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:37:11.412 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:37:11.412 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:37:11.412 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:11.412 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:11.412 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # nvmfpid=2113771 00:37:11.412 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # waitforlisten 2113771 00:37:11.412 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:37:11.412 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 2113771 ']' 00:37:11.412 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:11.412 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:11.412 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:11.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:11.412 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:11.412 11:17:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:11.412 [2024-10-09 11:17:30.886488] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:37:11.412 [2024-10-09 11:17:30.886548] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:11.412 [2024-10-09 11:17:31.027229] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:37:11.412 [2024-10-09 11:17:31.058132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:11.412 [2024-10-09 11:17:31.074894] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:11.412 [2024-10-09 11:17:31.074923] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:11.412 [2024-10-09 11:17:31.074931] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:11.412 [2024-10-09 11:17:31.074938] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:37:11.412 [2024-10-09 11:17:31.074944] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:11.412 [2024-10-09 11:17:31.075515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:11.984 11:17:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:11.984 11:17:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:37:11.984 11:17:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:37:11.984 11:17:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:11.984 11:17:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:11.984 11:17:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:11.984 11:17:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:37:11.984 11:17:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:37:11.984 11:17:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:37:11.984 11:17:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:11.984 11:17:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:11.984 null0 00:37:11.984 [2024-10-09 11:17:31.795377] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:11.984 [2024-10-09 11:17:31.819522] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:11.984 11:17:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:11.984 11:17:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:37:11.984 11:17:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:37:11.984 11:17:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:37:11.984 11:17:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:37:11.984 11:17:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:37:11.984 11:17:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:37:11.984 11:17:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:37:11.984 11:17:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2113836 00:37:11.984 11:17:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2113836 /var/tmp/bperf.sock 00:37:11.984 11:17:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 2113836 ']' 00:37:11.984 11:17:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:37:11.984 11:17:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:11.984 11:17:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:11.984 11:17:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:11.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:11.984 11:17:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:11.984 11:17:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:11.984 [2024-10-09 11:17:31.877428] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:37:11.984 [2024-10-09 11:17:31.877488] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2113836 ] 00:37:12.245 [2024-10-09 11:17:32.007941] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:37:12.245 [2024-10-09 11:17:32.057235] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:12.245 [2024-10-09 11:17:32.075429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:12.816 11:17:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:12.816 11:17:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:37:12.816 11:17:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:37:12.816 11:17:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:37:12.816 11:17:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:37:13.076 11:17:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:13.076 11:17:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:13.336 nvme0n1 00:37:13.336 11:17:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:37:13.336 11:17:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:13.597 Running I/O for 2 seconds... 
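Everything from the bdevperf start-up to perform_tests above is one self-contained round. Condensed, with the long workspace prefix abbreviated to $SPDK_ROOT (an abbreviation introduced here, not in the trace) and a crude readiness loop standing in for the harness's waitforlisten helper, it is roughly:

#!/usr/bin/env bash
# One digest round as traced above, condensed. All RPC names and flags
# are taken verbatim from the trace.
SPDK_ROOT=${SPDK_ROOT:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
sock=/var/tmp/bperf.sock

# -z keeps bdevperf alive between RPCs; --wait-for-rpc defers framework
# init so the accel layer can still be reconfigured before the run.
"$SPDK_ROOT/build/examples/bdevperf" -m 2 -r "$sock" \
    -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &

while [ ! -S "$sock" ]; do sleep 0.1; done   # stand-in for waitforlisten

"$SPDK_ROOT/scripts/rpc.py" -s "$sock" framework_start_init
"$SPDK_ROOT/scripts/rpc.py" -s "$sock" bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# The attach yields nvme0n1; run the timed workload against it.
"$SPDK_ROOT/examples/bdev/bdevperf/bdevperf.py" -s "$sock" perform_tests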
00:37:15.481 19233.00 IOPS, 75.13 MiB/s [2024-10-09T09:17:35.483Z] 19557.00 IOPS, 76.39 MiB/s 00:37:15.481 Latency(us) 00:37:15.481 [2024-10-09T09:17:35.483Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:15.481 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:37:15.481 nvme0n1 : 2.01 19562.72 76.42 0.00 0.00 6534.95 3106.56 21677.46 00:37:15.481 [2024-10-09T09:17:35.483Z] =================================================================================================================== 00:37:15.481 [2024-10-09T09:17:35.483Z] Total : 19562.72 76.42 0.00 0.00 6534.95 3106.56 21677.46 00:37:15.481 { 00:37:15.481 "results": [ 00:37:15.481 { 00:37:15.481 "job": "nvme0n1", 00:37:15.481 "core_mask": "0x2", 00:37:15.481 "workload": "randread", 00:37:15.481 "status": "finished", 00:37:15.481 "queue_depth": 128, 00:37:15.481 "io_size": 4096, 00:37:15.481 "runtime": 2.005089, 00:37:15.481 "iops": 19562.722652211447, 00:37:15.481 "mibps": 76.41688536020096, 00:37:15.481 "io_failed": 0, 00:37:15.481 "io_timeout": 0, 00:37:15.482 "avg_latency_us": 6534.947392907649, 00:37:15.482 "min_latency_us": 3106.5552956899433, 00:37:15.482 "max_latency_us": 21677.460741730705 00:37:15.482 } 00:37:15.482 ], 00:37:15.482 "core_count": 1 00:37:15.482 } 00:37:15.482 11:17:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:37:15.482 11:17:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:37:15.482 11:17:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:37:15.482 11:17:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:37:15.482 | select(.opcode=="crc32c") 00:37:15.482 | "\(.module_name) \(.executed)"' 00:37:15.482 11:17:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:37:15.743 11:17:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:37:15.743 11:17:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:37:15.743 11:17:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:37:15.743 11:17:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:37:15.743 11:17:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2113836 00:37:15.743 11:17:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 2113836 ']' 00:37:15.743 11:17:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 2113836 00:37:15.743 11:17:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:37:15.743 11:17:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:15.743 11:17:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2113836 00:37:15.743 11:17:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:37:15.743 11:17:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' 
reactor_1 = sudo ']' 00:37:15.743 11:17:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2113836' 00:37:15.743 killing process with pid 2113836 00:37:15.743 11:17:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 2113836 00:37:15.743 Received shutdown signal, test time was about 2.000000 seconds 00:37:15.743 00:37:15.743 Latency(us) 00:37:15.743 [2024-10-09T09:17:35.745Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:15.743 [2024-10-09T09:17:35.745Z] =================================================================================================================== 00:37:15.743 [2024-10-09T09:17:35.745Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:15.743 11:17:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 2113836 00:37:16.007 11:17:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:37:16.007 11:17:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:37:16.007 11:17:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:37:16.007 11:17:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:37:16.007 11:17:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:37:16.007 11:17:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:37:16.007 11:17:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:37:16.007 11:17:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2114681 00:37:16.007 11:17:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2114681 /var/tmp/bperf.sock 00:37:16.007 11:17:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 2114681 ']' 00:37:16.007 11:17:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:37:16.007 11:17:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:16.007 11:17:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:16.007 11:17:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:16.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:16.007 11:17:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:16.007 11:17:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:16.007 [2024-10-09 11:17:35.824733] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 
00:37:16.007 [2024-10-09 11:17:35.824786] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2114681 ] 00:37:16.007 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:16.007 Zero copy mechanism will not be used. 00:37:16.007 [2024-10-09 11:17:35.955091] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:37:16.007 [2024-10-09 11:17:36.005106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:16.268 [2024-10-09 11:17:36.022529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:16.839 11:17:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:16.839 11:17:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:37:16.839 11:17:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:37:16.839 11:17:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:37:16.839 11:17:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:37:16.839 11:17:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:16.839 11:17:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:17.410 nvme0n1 00:37:17.410 11:17:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:37:17.410 11:17:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:17.410 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:17.410 Zero copy mechanism will not be used. 00:37:17.410 Running I/O for 2 seconds... 
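After each run the harness confirms that crc32c digests were actually computed, and in the expected module (software here, since scan_dsa=false in every round). The check in the trace reduces to one RPC plus the jq filter shown; as a standalone sketch, assuming rpc.py is on PATH:

#!/usr/bin/env bash
# Digest verification as traced above: which accel module executed
# crc32c, and how many times?
sock=/var/tmp/bperf.sock
exp_module=software   # scan_dsa=false in these rounds

read -r acc_module acc_executed < <(
    rpc.py -s "$sock" accel_get_stats |
    jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
)

(( acc_executed > 0 )) || { echo "no crc32c operations executed"; exit 1; }
[[ $acc_module == "$exp_module" ]] || { echo "ran in $acc_module, expected $exp_module"; exit 1; }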
00:37:19.292 3410.00 IOPS, 426.25 MiB/s [2024-10-09T09:17:39.294Z] 3192.50 IOPS, 399.06 MiB/s 00:37:19.292 Latency(us) 00:37:19.292 [2024-10-09T09:17:39.294Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:19.292 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:37:19.292 nvme0n1 : 2.05 3127.43 390.93 0.00 0.00 5014.97 766.37 48610.06 00:37:19.292 [2024-10-09T09:17:39.294Z] =================================================================================================================== 00:37:19.292 [2024-10-09T09:17:39.294Z] Total : 3127.43 390.93 0.00 0.00 5014.97 766.37 48610.06 00:37:19.292 { 00:37:19.292 "results": [ 00:37:19.292 { 00:37:19.292 "job": "nvme0n1", 00:37:19.292 "core_mask": "0x2", 00:37:19.292 "workload": "randread", 00:37:19.292 "status": "finished", 00:37:19.292 "queue_depth": 16, 00:37:19.292 "io_size": 131072, 00:37:19.292 "runtime": 2.046729, 00:37:19.292 "iops": 3127.429180902797, 00:37:19.292 "mibps": 390.9286476128496, 00:37:19.292 "io_failed": 0, 00:37:19.292 "io_timeout": 0, 00:37:19.292 "avg_latency_us": 5014.972374482291, 00:37:19.292 "min_latency_us": 766.3748747076512, 00:37:19.292 "max_latency_us": 48610.06348145673 00:37:19.292 } 00:37:19.292 ], 00:37:19.292 "core_count": 1 00:37:19.292 } 00:37:19.292 11:17:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:37:19.292 11:17:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:37:19.292 11:17:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:37:19.292 11:17:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:37:19.292 | select(.opcode=="crc32c") 00:37:19.292 | "\(.module_name) \(.executed)"' 00:37:19.292 11:17:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:37:19.553 11:17:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:37:19.553 11:17:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:37:19.553 11:17:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:37:19.553 11:17:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:37:19.553 11:17:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2114681 00:37:19.553 11:17:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 2114681 ']' 00:37:19.553 11:17:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 2114681 00:37:19.553 11:17:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:37:19.553 11:17:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:19.553 11:17:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2114681 00:37:19.553 11:17:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:37:19.553 11:17:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' 
reactor_1 = sudo ']' 00:37:19.553 11:17:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2114681' 00:37:19.553 killing process with pid 2114681 00:37:19.553 11:17:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 2114681 00:37:19.553 Received shutdown signal, test time was about 2.000000 seconds 00:37:19.553 00:37:19.553 Latency(us) 00:37:19.553 [2024-10-09T09:17:39.555Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:19.553 [2024-10-09T09:17:39.555Z] =================================================================================================================== 00:37:19.553 [2024-10-09T09:17:39.555Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:19.553 11:17:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 2114681 00:37:19.813 11:17:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:37:19.813 11:17:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:37:19.813 11:17:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:37:19.813 11:17:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:37:19.813 11:17:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:37:19.813 11:17:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:37:19.813 11:17:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:37:19.813 11:17:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2115465 00:37:19.813 11:17:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2115465 /var/tmp/bperf.sock 00:37:19.813 11:17:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 2115465 ']' 00:37:19.813 11:17:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:37:19.813 11:17:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:19.813 11:17:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:19.813 11:17:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:19.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:19.813 11:17:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:19.813 11:17:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:19.813 [2024-10-09 11:17:39.665086] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 
00:37:19.813 [2024-10-09 11:17:39.665145] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2115465 ] 00:37:19.813 [2024-10-09 11:17:39.795242] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:37:20.073 [2024-10-09 11:17:39.842130] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:20.073 [2024-10-09 11:17:39.858239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:20.645 11:17:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:20.645 11:17:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:37:20.645 11:17:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:37:20.645 11:17:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:37:20.645 11:17:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:37:20.906 11:17:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:20.906 11:17:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:21.167 nvme0n1 00:37:21.167 11:17:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:37:21.167 11:17:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:21.167 Running I/O for 2 seconds... 
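Each round emits the structured JSON block seen above alongside the human-readable latency table. Saved to a file, the headline numbers can be recovered with jq; results.json is a hypothetical file name, but the field names match the JSON in this log:

# Recover the headline numbers from a saved copy of the bdevperf JSON.
jq -r '.results[] |
  "\(.job): \(.iops|floor) IOPS, avg \(.avg_latency_us|floor) us (min \(.min_latency_us|floor), max \(.max_latency_us|floor))"' \
  results.json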
00:37:23.504 21479.00 IOPS, 83.90 MiB/s [2024-10-09T09:17:43.506Z] 21574.00 IOPS, 84.27 MiB/s 00:37:23.504 Latency(us) 00:37:23.504 [2024-10-09T09:17:43.506Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:23.504 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:23.504 nvme0n1 : 2.01 21582.57 84.31 0.00 0.00 5922.81 1888.57 10893.47 00:37:23.504 [2024-10-09T09:17:43.506Z] =================================================================================================================== 00:37:23.504 [2024-10-09T09:17:43.506Z] Total : 21582.57 84.31 0.00 0.00 5922.81 1888.57 10893.47 00:37:23.504 { 00:37:23.504 "results": [ 00:37:23.504 { 00:37:23.504 "job": "nvme0n1", 00:37:23.504 "core_mask": "0x2", 00:37:23.504 "workload": "randwrite", 00:37:23.504 "status": "finished", 00:37:23.504 "queue_depth": 128, 00:37:23.504 "io_size": 4096, 00:37:23.504 "runtime": 2.005137, 00:37:23.504 "iops": 21582.56518133175, 00:37:23.504 "mibps": 84.30689523957714, 00:37:23.504 "io_failed": 0, 00:37:23.504 "io_timeout": 0, 00:37:23.504 "avg_latency_us": 5922.81067692626, 00:37:23.504 "min_latency_us": 1888.566655529569, 00:37:23.504 "max_latency_us": 10893.47143334447 00:37:23.504 } 00:37:23.504 ], 00:37:23.504 "core_count": 1 00:37:23.504 } 00:37:23.504 11:17:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:37:23.504 11:17:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:37:23.504 11:17:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:37:23.504 11:17:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:37:23.504 | select(.opcode=="crc32c") 00:37:23.504 | "\(.module_name) \(.executed)"' 00:37:23.504 11:17:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:37:23.504 11:17:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:37:23.504 11:17:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:37:23.504 11:17:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:37:23.504 11:17:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:37:23.504 11:17:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2115465 00:37:23.504 11:17:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 2115465 ']' 00:37:23.504 11:17:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 2115465 00:37:23.504 11:17:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:37:23.504 11:17:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:23.504 11:17:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2115465 00:37:23.504 11:17:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:37:23.504 11:17:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' 
reactor_1 = sudo ']' 00:37:23.504 11:17:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2115465' 00:37:23.504 killing process with pid 2115465 00:37:23.504 11:17:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 2115465 00:37:23.504 Received shutdown signal, test time was about 2.000000 seconds 00:37:23.504 00:37:23.504 Latency(us) 00:37:23.504 [2024-10-09T09:17:43.506Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:23.504 [2024-10-09T09:17:43.506Z] =================================================================================================================== 00:37:23.504 [2024-10-09T09:17:43.506Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:23.504 11:17:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 2115465 00:37:23.765 11:17:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:37:23.765 11:17:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:37:23.765 11:17:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:37:23.765 11:17:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:37:23.765 11:17:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:37:23.765 11:17:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:37:23.765 11:17:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:37:23.765 11:17:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2116166 00:37:23.765 11:17:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2116166 /var/tmp/bperf.sock 00:37:23.765 11:17:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 2116166 ']' 00:37:23.765 11:17:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:37:23.765 11:17:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:23.765 11:17:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:23.765 11:17:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:23.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:23.765 11:17:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:23.765 11:17:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:23.765 [2024-10-09 11:17:43.561930] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 
00:37:23.765 [2024-10-09 11:17:43.561988] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2116166 ] 00:37:23.765 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:23.765 Zero copy mechanism will not be used. 00:37:23.765 [2024-10-09 11:17:43.691976] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:37:23.765 [2024-10-09 11:17:43.737577] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:23.765 [2024-10-09 11:17:43.753678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:24.706 11:17:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:24.706 11:17:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:37:24.706 11:17:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:37:24.706 11:17:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:37:24.706 11:17:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:37:24.706 11:17:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:24.706 11:17:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:24.967 nvme0n1 00:37:24.967 11:17:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:37:24.967 11:17:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:24.967 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:24.967 Zero copy mechanism will not be used. 00:37:24.967 Running I/O for 2 seconds... 
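The teardown after every round above is the same killprocess helper. Reconstructed from the trace (a sketch of its logic, not the verbatim function):

#!/usr/bin/env bash
# killprocess as traced above: refuse to kill a sudo wrapper, then
# kill the benchmark process and reap it so the next round starts clean.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1                    # '[' -z ... ']' guard
    kill -0 "$pid" 2>/dev/null || return 0       # already gone
    if [ "$(uname)" = Linux ]; then
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        [ "$process_name" = sudo ] && return 1   # never kill the sudo wrapper
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"
}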
00:37:27.292 4300.00 IOPS, 537.50 MiB/s [2024-10-09T09:17:47.294Z] 3975.50 IOPS, 496.94 MiB/s 00:37:27.292 Latency(us) 00:37:27.292 [2024-10-09T09:17:47.295Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:27.293 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:37:27.293 nvme0n1 : 2.01 3970.89 496.36 0.00 0.00 4021.62 1820.14 8703.83 00:37:27.293 [2024-10-09T09:17:47.295Z] =================================================================================================================== 00:37:27.293 [2024-10-09T09:17:47.295Z] Total : 3970.89 496.36 0.00 0.00 4021.62 1820.14 8703.83 00:37:27.293 { 00:37:27.293 "results": [ 00:37:27.293 { 00:37:27.293 "job": "nvme0n1", 00:37:27.293 "core_mask": "0x2", 00:37:27.293 "workload": "randwrite", 00:37:27.293 "status": "finished", 00:37:27.293 "queue_depth": 16, 00:37:27.293 "io_size": 131072, 00:37:27.293 "runtime": 2.006353, 00:37:27.293 "iops": 3970.8864790991415, 00:37:27.293 "mibps": 496.3608098873927, 00:37:27.293 "io_failed": 0, 00:37:27.293 "io_timeout": 0, 00:37:27.293 "avg_latency_us": 4021.6240941427654, 00:37:27.293 "min_latency_us": 1820.1403274306715, 00:37:27.293 "max_latency_us": 8703.828934179754 00:37:27.293 } 00:37:27.293 ], 00:37:27.293 "core_count": 1 00:37:27.293 } 00:37:27.293 11:17:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:37:27.293 11:17:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:37:27.293 11:17:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:37:27.293 11:17:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:37:27.293 | select(.opcode=="crc32c") 00:37:27.293 | "\(.module_name) \(.executed)"' 00:37:27.293 11:17:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:37:27.293 11:17:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:37:27.293 11:17:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:37:27.293 11:17:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:37:27.293 11:17:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:37:27.293 11:17:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2116166 00:37:27.293 11:17:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 2116166 ']' 00:37:27.293 11:17:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 2116166 00:37:27.293 11:17:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:37:27.293 11:17:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:27.293 11:17:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2116166 00:37:27.293 11:17:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:37:27.293 11:17:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # 
'[' reactor_1 = sudo ']' 00:37:27.293 11:17:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2116166' 00:37:27.293 killing process with pid 2116166 00:37:27.293 11:17:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 2116166 00:37:27.293 Received shutdown signal, test time was about 2.000000 seconds 00:37:27.293 00:37:27.293 Latency(us) 00:37:27.293 [2024-10-09T09:17:47.295Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:27.293 [2024-10-09T09:17:47.295Z] =================================================================================================================== 00:37:27.293 [2024-10-09T09:17:47.295Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:27.293 11:17:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 2116166 00:37:27.554 11:17:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 2113771 00:37:27.554 11:17:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 2113771 ']' 00:37:27.554 11:17:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 2113771 00:37:27.554 11:17:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:37:27.554 11:17:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:27.554 11:17:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2113771 00:37:27.554 11:17:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:37:27.554 11:17:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:37:27.554 11:17:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2113771' 00:37:27.554 killing process with pid 2113771 00:37:27.554 11:17:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 2113771 00:37:27.554 11:17:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 2113771 00:37:27.554 00:37:27.554 real 0m16.661s 00:37:27.554 user 0m32.642s 00:37:27.554 sys 0m3.389s 00:37:27.554 11:17:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:27.554 11:17:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:27.554 ************************************ 00:37:27.554 END TEST nvmf_digest_clean 00:37:27.554 ************************************ 00:37:27.554 11:17:47 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:37:27.554 11:17:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:37:27.554 11:17:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:27.554 11:17:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:37:27.815 ************************************ 00:37:27.815 START TEST nvmf_digest_error 00:37:27.815 ************************************ 00:37:27.815 11:17:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # 
run_digest_error 00:37:27.815 11:17:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:37:27.815 11:17:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:37:27.815 11:17:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:27.815 11:17:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:27.815 11:17:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # nvmfpid=2116878 00:37:27.815 11:17:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # waitforlisten 2116878 00:37:27.815 11:17:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:37:27.815 11:17:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 2116878 ']' 00:37:27.815 11:17:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:27.815 11:17:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:27.815 11:17:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:27.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:27.815 11:17:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:27.815 11:17:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:27.815 [2024-10-09 11:17:47.626710] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:37:27.815 [2024-10-09 11:17:47.626761] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:27.815 [2024-10-09 11:17:47.762802] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:37:27.815 [2024-10-09 11:17:47.794271] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:27.815 [2024-10-09 11:17:47.809544] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:27.815 [2024-10-09 11:17:47.809575] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:27.815 [2024-10-09 11:17:47.809584] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:27.815 [2024-10-09 11:17:47.809591] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:27.815 [2024-10-09 11:17:47.809597] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:37:27.815 [2024-10-09 11:17:47.810138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:37:28.758 11:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:37:28.758 11:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:37:28.758 11:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt
00:37:28.758 11:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable
00:37:28.758 11:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:37:28.758 11:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:37:28.758 11:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error
00:37:28.758 11:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:28.758 11:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:37:28.758 [2024-10-09 11:17:48.454488] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error
00:37:28.758 11:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:28.758 11:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config
00:37:28.758 11:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd
00:37:28.758 11:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:28.758 11:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:37:28.758 null0
00:37:28.758 [2024-10-09 11:17:48.530074] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:37:28.758 [2024-10-09 11:17:48.554215] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:37:28.758 11:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:28.758 11:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128
00:37:28.758 11:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:37:28.758 11:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:37:28.758 11:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:37:28.758 11:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:37:28.758 11:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2117189
00:37:28.758 11:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2117189 /var/tmp/bperf.sock
00:37:28.758 11:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 2117189 ']'
00:37:28.758 11:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z
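Everything needed to reproduce this digest-error run is in the bperf launch above and the RPC sequence traced below: bdevperf starts idle under -z on its own socket, the controller is attached with data digest enabled, and the target's accel "error" module is told to corrupt crc32c results, so reads fail the host-side data-digest check. Condensed into a sketch using only commands visible in this trace (paths abbreviated relative to the SPDK checkout; error handling and the netns wrapper omitted):

    # Host side: bdevperf idles under -z on a private RPC socket
    build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z &

    # Keep per-opcode error stats and retry failed bdev I/O indefinitely
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Attach the remote namespace with data digest (--ddgst) enabled
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Target side (default socket): have the accel error module corrupt crc32c results
    scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256

    # Drive the timed workload defined on the bdevperf command line
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

This is presumably why the run below logs a steady stream of "data digest error" / COMMAND TRANSIENT TRANSPORT ERROR completions yet keeps making progress: with --bdev-retry-count -1 each failed read can be retried rather than failed up the stack.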
00:37:28.758 11:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:37:28.758 11:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:37:28.758 11:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:37:28.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:37:28.758 11:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:37:28.758 11:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:37:28.758 [2024-10-09 11:17:48.611183] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization...
00:37:28.758 [2024-10-09 11:17:48.611232] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2117189 ]
00:37:28.758 [2024-10-09 11:17:48.741226] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation.
00:37:29.018 [2024-10-09 11:17:48.788111] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:37:29.018 [2024-10-09 11:17:48.804412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:37:29.589 11:17:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:37:29.589 11:17:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:37:29.589 11:17:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:37:29.589 11:17:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:37:29.849 11:17:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:37:29.849 11:17:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:29.849 11:17:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:37:29.849 11:17:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:29.849 11:17:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:37:29.849 11:17:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:37:30.110 nvme0n1
00:37:30.110 11:17:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:37:30.110 11:17:49
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:30.110 11:17:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:30.110 11:17:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:30.110 11:17:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:37:30.110 11:17:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:30.110 Running I/O for 2 seconds... 00:37:30.110 [2024-10-09 11:17:50.078487] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f24840) 00:37:30.110 [2024-10-09 11:17:50.078518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.110 [2024-10-09 11:17:50.078528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:30.110 [2024-10-09 11:17:50.089527] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f24840) 00:37:30.110 [2024-10-09 11:17:50.089546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19576 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.110 [2024-10-09 11:17:50.089553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:30.110 [2024-10-09 11:17:50.103883] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f24840) 00:37:30.110 [2024-10-09 11:17:50.103900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:3139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.110 [2024-10-09 11:17:50.103908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:30.372 [2024-10-09 11:17:50.115966] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f24840) 00:37:30.372 [2024-10-09 11:17:50.115983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:8546 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.372 [2024-10-09 11:17:50.115991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:30.372 [2024-10-09 11:17:50.126046] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f24840) 00:37:30.372 [2024-10-09 11:17:50.126064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:1277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.372 [2024-10-09 11:17:50.126071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:30.372 [2024-10-09 11:17:50.139607] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f24840) 00:37:30.372 [2024-10-09 11:17:50.139624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:26 nsid:1 lba:8697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.372 [2024-10-09 11:17:50.139631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:30.372 [2024-10-09 11:17:50.155574] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f24840) 00:37:30.372 [2024-10-09 11:17:50.155591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:16715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.372 [2024-10-09 11:17:50.155598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:30.372 [2024-10-09 11:17:50.166184] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f24840) 00:37:30.372 [2024-10-09 11:17:50.166201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:12622 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.372 [2024-10-09 11:17:50.166208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:30.372 [2024-10-09 11:17:50.178590] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f24840) 00:37:30.372 [2024-10-09 11:17:50.178608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:14380 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.372 [2024-10-09 11:17:50.178614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:30.372 [2024-10-09 11:17:50.191410] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f24840) 00:37:30.372 [2024-10-09 11:17:50.191427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.372 [2024-10-09 11:17:50.191434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:30.372 [2024-10-09 11:17:50.204862] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f24840) 00:37:30.372 [2024-10-09 11:17:50.204879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:17212 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.372 [2024-10-09 11:17:50.204885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:30.372 [2024-10-09 11:17:50.216218] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f24840) 00:37:30.372 [2024-10-09 11:17:50.216236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:6241 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.372 [2024-10-09 11:17:50.216242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:30.372 [2024-10-09 11:17:50.230093] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f24840) 00:37:30.372 [2024-10-09 11:17:50.230111] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:2024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.372 [2024-10-09 11:17:50.230118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:30.372 [2024-10-09 11:17:50.244108] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f24840) 00:37:30.372 [2024-10-09 11:17:50.244125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:24644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.372 [2024-10-09 11:17:50.244132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:30.372 [2024-10-09 11:17:50.253559] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f24840) 00:37:30.373 [2024-10-09 11:17:50.253576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:11162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.373 [2024-10-09 11:17:50.253582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:30.373 [2024-10-09 11:17:50.266784] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f24840) 00:37:30.373 [2024-10-09 11:17:50.266802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:15242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.373 [2024-10-09 11:17:50.266811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:30.373 [2024-10-09 11:17:50.279272] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f24840) 00:37:30.373 [2024-10-09 11:17:50.279290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:15082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.373 [2024-10-09 11:17:50.279297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:30.373 [2024-10-09 11:17:50.293124] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f24840) 00:37:30.373 [2024-10-09 11:17:50.293141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:10342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.373 [2024-10-09 11:17:50.293147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:30.373 [2024-10-09 11:17:50.306857] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f24840) 00:37:30.373 [2024-10-09 11:17:50.306874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:13267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.373 [2024-10-09 11:17:50.306880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:30.373 [2024-10-09 11:17:50.318912] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f24840) 
00:37:30.373 [2024-10-09 11:17:50.318929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:23333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.373 [2024-10-09 11:17:50.318935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:30.373 [2024-10-09 11:17:50.329459] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f24840) 00:37:30.373 [2024-10-09 11:17:50.329480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:18140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.373 [2024-10-09 11:17:50.329486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:30.373 [2024-10-09 11:17:50.343616] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f24840) 00:37:30.373 [2024-10-09 11:17:50.343634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:17621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.373 [2024-10-09 11:17:50.343641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:30.373 [2024-10-09 11:17:50.356885] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f24840) 00:37:30.373 [2024-10-09 11:17:50.356902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:6988 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.373 [2024-10-09 11:17:50.356908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:30.373 [2024-10-09 11:17:50.370152] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f24840) 00:37:30.373 [2024-10-09 11:17:50.370169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:1280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.373 [2024-10-09 11:17:50.370175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:30.634 [2024-10-09 11:17:50.383056] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f24840) 00:37:30.634 [2024-10-09 11:17:50.383074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:8792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.634 [2024-10-09 11:17:50.383080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:30.634 [2024-10-09 11:17:50.395867] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f24840) 00:37:30.634 [2024-10-09 11:17:50.395885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:23125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.635 [2024-10-09 11:17:50.395892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:30.635 [2024-10-09 11:17:50.407354] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1f24840) 00:37:30.635 [2024-10-09 11:17:50.407371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:22868 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.635 [2024-10-09 11:17:50.407378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:30.635 [2024-10-09 11:17:50.420473] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f24840) 00:37:30.635 [2024-10-09 11:17:50.420489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:22106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.635 [2024-10-09 11:17:50.420496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:30.635 [2024-10-09 11:17:50.431653] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f24840) 00:37:30.635 [2024-10-09 11:17:50.431670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:7909 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.635 [2024-10-09 11:17:50.431677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:30.635 [2024-10-09 11:17:50.445437] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f24840) 00:37:30.635 [2024-10-09 11:17:50.445455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:20507 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.635 [2024-10-09 11:17:50.445462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:30.635 [2024-10-09 11:17:50.458688] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f24840) 00:37:30.635 [2024-10-09 11:17:50.458705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:24365 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.635 [2024-10-09 11:17:50.458712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:30.635 [2024-10-09 11:17:50.470669] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f24840) 00:37:30.635 [2024-10-09 11:17:50.470685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:13789 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.635 [2024-10-09 11:17:50.470692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:30.635 [2024-10-09 11:17:50.482587] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f24840) 00:37:30.635 [2024-10-09 11:17:50.482604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22270 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.635 [2024-10-09 11:17:50.482614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:30.635 [2024-10-09 11:17:50.495693] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f24840) 00:37:30.635 [2024-10-09 11:17:50.495710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:17869 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.635 [2024-10-09 11:17:50.495716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:30.635 [2024-10-09 11:17:50.509531] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f24840) 00:37:30.635 [2024-10-09 11:17:50.509548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:2570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.635 [2024-10-09 11:17:50.509554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:30.635 [2024-10-09 11:17:50.519573] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f24840) 00:37:30.635 [2024-10-09 11:17:50.519590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:5547 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.635 [2024-10-09 11:17:50.519596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:30.635 [2024-10-09 11:17:50.532661] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f24840) 00:37:30.635 [2024-10-09 11:17:50.532679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:20493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.635 [2024-10-09 11:17:50.532685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:30.635 [2024-10-09 11:17:50.544311] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f24840) 00:37:30.635 [2024-10-09 11:17:50.544327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.635 [2024-10-09 11:17:50.544334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:30.635 [2024-10-09 11:17:50.556994] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f24840) 00:37:30.635 [2024-10-09 11:17:50.557010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:18698 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.635 [2024-10-09 11:17:50.557017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:30.635 [2024-10-09 11:17:50.570363] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f24840) 00:37:30.635 [2024-10-09 11:17:50.570380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:13647 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.635 [2024-10-09 11:17:50.570386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:37:30.635 [2024-10-09 11:17:50.583747] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f24840) 00:37:30.635 [2024-10-09 11:17:50.583764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.635 [2024-10-09 11:17:50.583771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:30.635 [2024-10-09 11:17:50.596558] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f24840) 00:37:30.635 [2024-10-09 11:17:50.596577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:1467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.635 [2024-10-09 11:17:50.596584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:30.635 [2024-10-09 11:17:50.609699] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f24840) 00:37:30.635 [2024-10-09 11:17:50.609715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:13121 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.635 [2024-10-09 11:17:50.609722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:30.635 [2024-10-09 11:17:50.621293] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f24840) 00:37:30.635 [2024-10-09 11:17:50.621310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:16326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.635 [2024-10-09 11:17:50.621316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:30.635 [2024-10-09 11:17:50.634333] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f24840) 00:37:30.635 [2024-10-09 11:17:50.634350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:3427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.635 [2024-10-09 11:17:50.634356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:30.896 [2024-10-09 11:17:50.645626] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f24840) 00:37:30.896 [2024-10-09 11:17:50.645642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:9074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.896 [2024-10-09 11:17:50.645649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:30.896 [2024-10-09 11:17:50.659427] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f24840) 00:37:30.896 [2024-10-09 11:17:50.659444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.896 [2024-10-09 11:17:50.659450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:30.896 [2024-10-09 11:17:50.670799] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f24840) 00:37:30.896 [2024-10-09 11:17:50.670816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:6526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.896 [2024-10-09 11:17:50.670822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:30.896 [2024-10-09 11:17:50.684212] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f24840) 00:37:30.896 [2024-10-09 11:17:50.684228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:12588 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.896 [2024-10-09 11:17:50.684235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:30.896 [2024-10-09 11:17:50.696627] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f24840) 00:37:30.896 [2024-10-09 11:17:50.696644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:5131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.896 [2024-10-09 11:17:50.696650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:30.896 [2024-10-09 11:17:50.708735] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f24840) 00:37:30.896 [2024-10-09 11:17:50.708752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:17734 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.897 [2024-10-09 11:17:50.708759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:30.897 [2024-10-09 11:17:50.719802] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f24840) 00:37:30.897 [2024-10-09 11:17:50.719818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:24148 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.897 [2024-10-09 11:17:50.719825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:30.897 [2024-10-09 11:17:50.734231] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f24840) 00:37:30.897 [2024-10-09 11:17:50.734249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:21745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.897 [2024-10-09 11:17:50.734255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:30.897 [2024-10-09 11:17:50.747326] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f24840) 00:37:30.897 [2024-10-09 11:17:50.747344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:25435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.897 [2024-10-09 11:17:50.747350] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:30.897 [2024-10-09 11:17:50.761747] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f24840) 00:37:30.897 [2024-10-09 11:17:50.761764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:9446 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.897 [2024-10-09 11:17:50.761770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:30.897 [2024-10-09 11:17:50.772921] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f24840) 00:37:30.897 [2024-10-09 11:17:50.772937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:6040 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.897 [2024-10-09 11:17:50.772944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:30.897 [2024-10-09 11:17:50.785828] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f24840) 00:37:30.897 [2024-10-09 11:17:50.785845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11979 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.897 [2024-10-09 11:17:50.785852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:30.897 [2024-10-09 11:17:50.797268] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f24840) 00:37:30.897 [2024-10-09 11:17:50.797285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:19881 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.897 [2024-10-09 11:17:50.797291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:30.897 [2024-10-09 11:17:50.811167] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f24840) 00:37:30.897 [2024-10-09 11:17:50.811184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:13155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.897 [2024-10-09 11:17:50.811193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:30.897 [2024-10-09 11:17:50.824151] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f24840) 00:37:30.897 [2024-10-09 11:17:50.824167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:24069 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.897 [2024-10-09 11:17:50.824173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:30.897 [2024-10-09 11:17:50.835235] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f24840) 00:37:30.897 [2024-10-09 11:17:50.835252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:16323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:37:30.897 [2024-10-09 11:17:50.835259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:30.897 [2024-10-09 11:17:50.848722] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f24840) 00:37:30.897 [2024-10-09 11:17:50.848740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:9641 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.897 [2024-10-09 11:17:50.848746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:30.897 [2024-10-09 11:17:50.860800] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f24840) 00:37:30.897 [2024-10-09 11:17:50.860817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:11645 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.897 [2024-10-09 11:17:50.860824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:30.897 [2024-10-09 11:17:50.872801] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f24840) 00:37:30.897 [2024-10-09 11:17:50.872819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7113 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.897 [2024-10-09 11:17:50.872825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:30.897 [2024-10-09 11:17:50.884510] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f24840) 00:37:30.897 [2024-10-09 11:17:50.884527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:4564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.897 [2024-10-09 11:17:50.884534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:31.158 [2024-10-09 11:17:50.899329] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f24840) 00:37:31.158 [2024-10-09 11:17:50.899345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:20097 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.158 [2024-10-09 11:17:50.899352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:31.158 [2024-10-09 11:17:50.910159] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f24840) 00:37:31.158 [2024-10-09 11:17:50.910177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:16358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.158 [2024-10-09 11:17:50.910183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:31.158 [2024-10-09 11:17:50.923661] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f24840) 00:37:31.158 [2024-10-09 11:17:50.923678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 
nsid:1 lba:9858 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.158 [2024-10-09 11:17:50.923685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:31.158 [2024-10-09 11:17:50.934846] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f24840) 00:37:31.158 [2024-10-09 11:17:50.934863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:24038 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.158 [2024-10-09 11:17:50.934869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:31.158 [2024-10-09 11:17:50.948359] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f24840) 00:37:31.158 [2024-10-09 11:17:50.948377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:16838 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.158 [2024-10-09 11:17:50.948383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:31.158 [2024-10-09 11:17:50.960347] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f24840) 00:37:31.158 [2024-10-09 11:17:50.960365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:1344 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.158 [2024-10-09 11:17:50.960371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:31.158 [2024-10-09 11:17:50.973332] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f24840) 00:37:31.158 [2024-10-09 11:17:50.973351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:17725 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.158 [2024-10-09 11:17:50.973358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:31.158 [2024-10-09 11:17:50.986131] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f24840) 00:37:31.158 [2024-10-09 11:17:50.986148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:1491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.158 [2024-10-09 11:17:50.986155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:31.158 [2024-10-09 11:17:50.999725] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f24840) 00:37:31.158 [2024-10-09 11:17:50.999742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:12670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.158 [2024-10-09 11:17:50.999749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:31.158 [2024-10-09 11:17:51.011849] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f24840) 00:37:31.158 [2024-10-09 11:17:51.011866] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.158 [2024-10-09 11:17:51.011873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:31.158 [2024-10-09 11:17:51.024684] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f24840) 00:37:31.158 [2024-10-09 11:17:51.024701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:13944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.158 [2024-10-09 11:17:51.024711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:31.158 [2024-10-09 11:17:51.034937] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f24840) 00:37:31.158 [2024-10-09 11:17:51.034954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:9829 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.158 [2024-10-09 11:17:51.034961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:31.158 [2024-10-09 11:17:51.049592] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f24840) 00:37:31.158 [2024-10-09 11:17:51.049609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:25119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.158 [2024-10-09 11:17:51.049616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:31.158 20105.00 IOPS, 78.54 MiB/s [2024-10-09T09:17:51.160Z] [2024-10-09 11:17:51.063497] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f24840) 00:37:31.158 [2024-10-09 11:17:51.063515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:16634 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.158 [2024-10-09 11:17:51.063521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:31.158 [2024-10-09 11:17:51.074672] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f24840) 00:37:31.158 [2024-10-09 11:17:51.074689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.158 [2024-10-09 11:17:51.074696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:31.158 [2024-10-09 11:17:51.085640] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f24840) 00:37:31.158 [2024-10-09 11:17:51.085659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:14677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.158 [2024-10-09 11:17:51.085665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:31.158 [2024-10-09 11:17:51.099226] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1f24840) 00:37:31.158 [2024-10-09 11:17:51.099244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:21646 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.158 [2024-10-09 11:17:51.099251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:31.158 [2024-10-09 11:17:51.112478] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f24840) 00:37:31.158 [2024-10-09 11:17:51.112496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:78 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.158 [2024-10-09 11:17:51.112503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:31.158 [2024-10-09 11:17:51.125504] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f24840) 00:37:31.158 [2024-10-09 11:17:51.125522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.158 [2024-10-09 11:17:51.125529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:31.158 [2024-10-09 11:17:51.137720] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f24840) 00:37:31.158 [2024-10-09 11:17:51.137740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:6988 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.158 [2024-10-09 11:17:51.137747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:31.158 [2024-10-09 11:17:51.150824] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f24840) 00:37:31.158 [2024-10-09 11:17:51.150842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.158 [2024-10-09 11:17:51.150848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:31.419 [2024-10-09 11:17:51.162109] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f24840) 00:37:31.419 [2024-10-09 11:17:51.162127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:11383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.419 [2024-10-09 11:17:51.162133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:31.419 [2024-10-09 11:17:51.173717] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f24840) 00:37:31.419 [2024-10-09 11:17:51.173735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:13214 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.419 [2024-10-09 11:17:51.173741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:31.419 [2024-10-09 11:17:51.187090] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f24840)
00:37:31.419 [2024-10-09 11:17:51.187108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:11750 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:31.419 [2024-10-09 11:17:51.187114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-record pattern repeats for dozens more reads between 11:17:51.200677 and 11:17:52.041318: nvme_tcp.c:1470 flags a data digest error on tqpair=(0x1f24840), nvme_qpair.c:243 prints the affected READ (cid and lba vary per command, len:1), and each completes as COMMAND TRANSIENT TRANSPORT ERROR (00/22) ...]
00:37:32.206 [2024-10-09 11:17:52.055680] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f24840)
00:37:32.206 [2024-10-09 11:17:52.055698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:7245 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:32.206 [2024-10-09 11:17:52.055705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:32.206 20188.00 IOPS, 78.86 MiB/s
00:37:32.206 Latency(us)
00:37:32.206 [2024-10-09T09:17:52.208Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:37:32.206 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:37:32.206 nvme0n1 : 2.00 20214.73 78.96 0.00 0.00 6325.36 2230.70 18174.03
00:37:32.206 [2024-10-09T09:17:52.208Z] ===================================================================================================================
00:37:32.206 [2024-10-09T09:17:52.208Z] Total : 20214.73 78.96 0.00 0.00 6325.36 2230.70 18174.03
00:37:32.206 {
00:37:32.206   "results": [
00:37:32.206     {
00:37:32.206       "job": "nvme0n1",
00:37:32.206       "core_mask": "0x2",
00:37:32.206       "workload": "randread",
00:37:32.206       "status": "finished",
00:37:32.206       "queue_depth": 128,
00:37:32.206       "io_size": 4096,
00:37:32.206       "runtime": 2.003687,
00:37:32.206       "iops": 20214.734137617303,
00:37:32.206       "mibps": 78.96380522506759,
00:37:32.206       "io_failed": 0,
00:37:32.206       "io_timeout": 0,
00:37:32.206       "avg_latency_us": 6325.35754273963,
00:37:32.206       "min_latency_us": 2230.698296024056,
00:37:32.206       "max_latency_us": 18174.032743067157
00:37:32.206     }
00:37:32.206   ],
00:37:32.206   "core_count": 1
00:37:32.206 }
00:37:32.206 11:17:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
11:17:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
11:17:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
11:17:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
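For reference, the count of 158 checked just below is produced by host/digest.sh's get_transient_errcount helper, which queries the bdev's NVMe error counters over the bperf RPC socket and extracts the transient-transport-error tally with the jq filter traced above. A minimal stand-alone sketch of the same query (socket and rpc.py path as used in this run; the nvme_error counters are only populated because bdev_nvme_set_options is called with --nvme-error-stat):

    #!/usr/bin/env bash
    # Ask the running bdevperf instance for per-bdev I/O statistics and pull
    # out how many completions ended in COMMAND TRANSIENT TRANSPORT ERROR (00/22).
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock
    bdev=nvme0n1

    "$rpc" -s "$sock" bdev_get_iostat -b "$bdev" \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'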
00:37:32.467 11:17:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 158 > 0 ))
00:37:32.467 11:17:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2117189
00:37:32.467 11:17:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 2117189 ']'
00:37:32.467 11:17:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 2117189
00:37:32.467 11:17:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:37:32.467 11:17:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:37:32.467 11:17:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2117189
00:37:32.467 11:17:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:37:32.467 11:17:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:37:32.467 11:17:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2117189'
killing process with pid 2117189
00:37:32.467 11:17:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 2117189
Received shutdown signal, test time was about 2.000000 seconds
00:37:32.467
00:37:32.467 Latency(us)
00:37:32.467 [2024-10-09T09:17:52.469Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:37:32.467 [2024-10-09T09:17:52.469Z] ===================================================================================================================
00:37:32.467 [2024-10-09T09:17:52.469Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:37:32.467 11:17:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 2117189
11:17:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
11:17:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
11:17:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
11:17:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
11:17:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
11:17:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2117911
11:17:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2117911 /var/tmp/bperf.sock
11:17:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 2117911 ']'
11:17:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
11:17:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
11:17:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
11:17:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
11:17:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:37:32.728 11:17:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
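The bdevperf invocation traced above maps run_bperf_err's rw/bs/qd arguments onto bdevperf's command-line flags. Annotated for readability (binary path and socket are the ones from this run; flag meanings per standard SPDK bdevperf usage, so treat them as a reading aid rather than authoritative documentation):

    # Second error-injection pass: random 128 KiB reads at queue depth 16.
    #   -m 2                    core mask (run the reactor on core 1)
    #   -r /var/tmp/bperf.sock  RPC socket that the bperf_rpc/bperf_py helpers target
    #   -w randread             workload type     (rw=randread)
    #   -o 131072               I/O size in bytes (bs=131072)
    #   -t 2                    run time in seconds
    #   -q 16                   queue depth       (qd=16)
    #   -z                      start idle and wait for RPC configuration; I/O only
    #                           begins on the later perform_tests call
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z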
00:37:32.728 [2024-10-09 11:17:52.474432] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization...
[2024-10-09 11:17:52.474495] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2117911 ]
00:37:32.728 I/O size of 131072 is greater than zero copy threshold (65536).
00:37:32.728 Zero copy mechanism will not be used.
00:37:32.728 [2024-10-09 11:17:52.604563] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation.
00:37:32.728 [2024-10-09 11:17:52.651732] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:37:32.728 [2024-10-09 11:17:52.667905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:37:33.299 11:17:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:37:33.299 11:17:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:37:33.299 11:17:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:37:33.299 11:17:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:37:33.561 11:17:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:37:33.561 11:17:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:33.561 11:17:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:37:33.561 11:17:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:33.561 11:17:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:37:33.561 11:17:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:37:33.822 nvme0n1
00:37:34.084 11:17:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
11:17:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
11:17:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
11:17:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
11:17:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
11:17:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:37:34.084 I/O size of 131072 is greater than zero copy threshold (65536).
00:37:34.084 Zero copy mechanism will not be used.
00:37:34.084 Running I/O for 2 seconds...
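Taken together, the RPC sequence traced above is the whole recipe that makes every read in the run below fail its TCP data digest check. A condensed sketch of the same sequence (socket, address, and NQN are the ones used in this run; it assumes an NVMe-oF TCP target is already listening at 10.0.0.2:4420 and exporting nqn.2016-06.io.spdk:cnode1):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock

    # Track NVMe completions per status code and keep retrying failed I/O
    # (-1 leaves bdev-layer retries unlimited), so injected digest errors are
    # counted in iostat without failing the job -- io_failed stayed 0 above.
    "$rpc" -s "$sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Make sure no stale crc32c error injection is active while attaching.
    "$rpc" -s "$sock" accel_error_inject_error -o crc32c -t disable

    # Attach the controller with TCP data digests enabled (--ddgst).
    "$rpc" -s "$sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Corrupt crc32c results in the accel layer (-i 32 appears to set how often
    # the error is injected), so received payloads no longer match their digest
    # and reads complete as COMMAND TRANSIENT TRANSPORT ERROR (00/22).
    "$rpc" -s "$sock" accel_error_inject_error -o crc32c -t corrupt -i 32

    # Tell the idle (-z) bdevperf instance to run the configured job.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s "$sock" perform_tests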
00:37:34.084 [2024-10-09 11:17:53.936913] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00)
[2024-10-09 11:17:53.936945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-10-09 11:17:53.936955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... the same digest-error pattern continues for the 128 KiB reads of this run on tqpair=(0x1f04d00) (len:32, cid:15, sqhd cycling 0001/0021/0041/0061), roughly every 10 ms from 11:17:53.947426 through 11:17:54.469504 ...]
00:37:34.609 [2024-10-09 11:17:54.479518] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00)
00:37:34.609 [2024-10-09 11:17:54.479536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:34.609 [2024-10-09 11:17:54.479542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:34.608 [2024-10-09 11:17:54.421011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.608 [2024-10-09 11:17:54.421017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:34.608 [2024-10-09 11:17:54.428026] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:34.608 [2024-10-09 11:17:54.428045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.608 [2024-10-09 11:17:54.428051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:34.608 [2024-10-09 11:17:54.436672] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:34.608 [2024-10-09 11:17:54.436690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.608 [2024-10-09 11:17:54.436696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:34.608 [2024-10-09 11:17:54.447483] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:34.609 [2024-10-09 11:17:54.447501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.609 [2024-10-09 11:17:54.447508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.609 [2024-10-09 11:17:54.458131] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:34.609 [2024-10-09 11:17:54.458148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.609 [2024-10-09 11:17:54.458154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:34.609 [2024-10-09 11:17:54.469504] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:34.609 [2024-10-09 11:17:54.469521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.609 [2024-10-09 11:17:54.469531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:34.609 [2024-10-09 11:17:54.479518] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:34.609 [2024-10-09 11:17:54.479536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.609 [2024-10-09 11:17:54.479542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:37:34.609 [2024-10-09 11:17:54.488398] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:34.609 [2024-10-09 11:17:54.488416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.609 [2024-10-09 11:17:54.488423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.609 [2024-10-09 11:17:54.497034] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:34.609 [2024-10-09 11:17:54.497051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.609 [2024-10-09 11:17:54.497058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:34.609 [2024-10-09 11:17:54.507583] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:34.609 [2024-10-09 11:17:54.507601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.609 [2024-10-09 11:17:54.507608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:34.609 [2024-10-09 11:17:54.519510] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:34.609 [2024-10-09 11:17:54.519528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.609 [2024-10-09 11:17:54.519534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:34.609 [2024-10-09 11:17:54.528860] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:34.609 [2024-10-09 11:17:54.528878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.609 [2024-10-09 11:17:54.528884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.609 [2024-10-09 11:17:54.539878] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:34.609 [2024-10-09 11:17:54.539896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.609 [2024-10-09 11:17:54.539903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:34.609 [2024-10-09 11:17:54.550846] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:34.609 [2024-10-09 11:17:54.550864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.609 [2024-10-09 11:17:54.550870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:34.609 [2024-10-09 11:17:54.561109] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:34.609 [2024-10-09 11:17:54.561129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.609 [2024-10-09 11:17:54.561136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:34.609 [2024-10-09 11:17:54.570797] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:34.609 [2024-10-09 11:17:54.570814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.609 [2024-10-09 11:17:54.570820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.609 [2024-10-09 11:17:54.582585] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:34.609 [2024-10-09 11:17:54.582603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.609 [2024-10-09 11:17:54.582610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:34.609 [2024-10-09 11:17:54.591194] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:34.609 [2024-10-09 11:17:54.591212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.609 [2024-10-09 11:17:54.591219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:34.609 [2024-10-09 11:17:54.601190] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:34.609 [2024-10-09 11:17:54.601208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.609 [2024-10-09 11:17:54.601214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:34.871 [2024-10-09 11:17:54.610188] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:34.871 [2024-10-09 11:17:54.610207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.871 [2024-10-09 11:17:54.610213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.871 [2024-10-09 11:17:54.619791] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:34.871 [2024-10-09 11:17:54.619810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.871 [2024-10-09 11:17:54.619816] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:34.871 [2024-10-09 11:17:54.630773] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:34.871 [2024-10-09 11:17:54.630792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.871 [2024-10-09 11:17:54.630799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:34.871 [2024-10-09 11:17:54.639992] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:34.871 [2024-10-09 11:17:54.640011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.871 [2024-10-09 11:17:54.640017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:34.871 [2024-10-09 11:17:54.650662] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:34.871 [2024-10-09 11:17:54.650680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.871 [2024-10-09 11:17:54.650687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.871 [2024-10-09 11:17:54.660962] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:34.871 [2024-10-09 11:17:54.660980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.871 [2024-10-09 11:17:54.660986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:34.871 [2024-10-09 11:17:54.672107] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:34.871 [2024-10-09 11:17:54.672126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.871 [2024-10-09 11:17:54.672132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:34.871 [2024-10-09 11:17:54.684401] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:34.871 [2024-10-09 11:17:54.684420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.871 [2024-10-09 11:17:54.684427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:34.871 [2024-10-09 11:17:54.693729] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:34.871 [2024-10-09 11:17:54.693747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:37:34.871 [2024-10-09 11:17:54.693754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.871 [2024-10-09 11:17:54.703345] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:34.871 [2024-10-09 11:17:54.703363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.871 [2024-10-09 11:17:54.703370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:34.871 [2024-10-09 11:17:54.714216] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:34.871 [2024-10-09 11:17:54.714235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.871 [2024-10-09 11:17:54.714242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:34.871 [2024-10-09 11:17:54.723801] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:34.871 [2024-10-09 11:17:54.723819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.871 [2024-10-09 11:17:54.723826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:34.871 [2024-10-09 11:17:54.732420] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:34.871 [2024-10-09 11:17:54.732439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.871 [2024-10-09 11:17:54.732449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.871 [2024-10-09 11:17:54.743729] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:34.871 [2024-10-09 11:17:54.743747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.871 [2024-10-09 11:17:54.743754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:34.871 [2024-10-09 11:17:54.752857] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:34.871 [2024-10-09 11:17:54.752875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.871 [2024-10-09 11:17:54.752882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:34.871 [2024-10-09 11:17:54.764150] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:34.871 [2024-10-09 11:17:54.764168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9856 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.871 [2024-10-09 11:17:54.764175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:34.871 [2024-10-09 11:17:54.774904] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:34.871 [2024-10-09 11:17:54.774923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.871 [2024-10-09 11:17:54.774930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.871 [2024-10-09 11:17:54.785114] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:34.871 [2024-10-09 11:17:54.785133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.872 [2024-10-09 11:17:54.785139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:34.872 [2024-10-09 11:17:54.795472] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:34.872 [2024-10-09 11:17:54.795490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.872 [2024-10-09 11:17:54.795497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:34.872 [2024-10-09 11:17:54.805483] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:34.872 [2024-10-09 11:17:54.805501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.872 [2024-10-09 11:17:54.805507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:34.872 [2024-10-09 11:17:54.816356] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:34.872 [2024-10-09 11:17:54.816375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.872 [2024-10-09 11:17:54.816381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:34.872 [2024-10-09 11:17:54.827663] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:34.872 [2024-10-09 11:17:54.827685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.872 [2024-10-09 11:17:54.827691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:34.872 [2024-10-09 11:17:54.837390] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:34.872 [2024-10-09 11:17:54.837408] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.872 [2024-10-09 11:17:54.837415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:34.872 [2024-10-09 11:17:54.848112] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:34.872 [2024-10-09 11:17:54.848131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.872 [2024-10-09 11:17:54.848138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:34.872 [2024-10-09 11:17:54.860700] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:34.872 [2024-10-09 11:17:54.860718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.872 [2024-10-09 11:17:54.860725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.134 [2024-10-09 11:17:54.872075] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:35.134 [2024-10-09 11:17:54.872095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.134 [2024-10-09 11:17:54.872101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:35.134 [2024-10-09 11:17:54.883355] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:35.134 [2024-10-09 11:17:54.883373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.134 [2024-10-09 11:17:54.883380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:35.134 [2024-10-09 11:17:54.893834] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:35.134 [2024-10-09 11:17:54.893853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.134 [2024-10-09 11:17:54.893860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:35.134 [2024-10-09 11:17:54.903513] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:35.134 [2024-10-09 11:17:54.903532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.134 [2024-10-09 11:17:54.903538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.134 [2024-10-09 11:17:54.911477] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:35.134 [2024-10-09 11:17:54.911496] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.134 [2024-10-09 11:17:54.911502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:35.134 3030.00 IOPS, 378.75 MiB/s [2024-10-09T09:17:55.136Z] [2024-10-09 11:17:54.923087] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:35.134 [2024-10-09 11:17:54.923106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.134 [2024-10-09 11:17:54.923112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:35.134 [2024-10-09 11:17:54.933704] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:35.134 [2024-10-09 11:17:54.933722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.134 [2024-10-09 11:17:54.933729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:35.134 [2024-10-09 11:17:54.943303] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:35.134 [2024-10-09 11:17:54.943321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.134 [2024-10-09 11:17:54.943328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.134 [2024-10-09 11:17:54.953927] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:35.134 [2024-10-09 11:17:54.953945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.134 [2024-10-09 11:17:54.953951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:35.134 [2024-10-09 11:17:54.963495] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:35.134 [2024-10-09 11:17:54.963513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.134 [2024-10-09 11:17:54.963519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:35.134 [2024-10-09 11:17:54.974054] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:35.134 [2024-10-09 11:17:54.974073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.134 [2024-10-09 11:17:54.974079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:35.134 [2024-10-09 11:17:54.986183] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:35.134 [2024-10-09 11:17:54.986203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.134 [2024-10-09 11:17:54.986210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.134 [2024-10-09 11:17:54.999097] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:35.134 [2024-10-09 11:17:54.999115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.134 [2024-10-09 11:17:54.999121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:35.134 [2024-10-09 11:17:55.009883] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:35.134 [2024-10-09 11:17:55.009905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.134 [2024-10-09 11:17:55.009911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:35.134 [2024-10-09 11:17:55.021199] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:35.134 [2024-10-09 11:17:55.021218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.134 [2024-10-09 11:17:55.021225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:35.134 [2024-10-09 11:17:55.031191] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:35.134 [2024-10-09 11:17:55.031211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.134 [2024-10-09 11:17:55.031217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.134 [2024-10-09 11:17:55.041707] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:35.134 [2024-10-09 11:17:55.041725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.134 [2024-10-09 11:17:55.041732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:35.134 [2024-10-09 11:17:55.051334] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:35.134 [2024-10-09 11:17:55.051354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.134 [2024-10-09 11:17:55.051360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:37:35.134 [2024-10-09 11:17:55.061594] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:35.134 [2024-10-09 11:17:55.061613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.134 [2024-10-09 11:17:55.061620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:35.134 [2024-10-09 11:17:55.070547] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:35.134 [2024-10-09 11:17:55.070566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.134 [2024-10-09 11:17:55.070573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.134 [2024-10-09 11:17:55.080609] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:35.134 [2024-10-09 11:17:55.080628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.134 [2024-10-09 11:17:55.080635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:35.134 [2024-10-09 11:17:55.091618] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:35.134 [2024-10-09 11:17:55.091637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.134 [2024-10-09 11:17:55.091643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:35.134 [2024-10-09 11:17:55.099955] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:35.134 [2024-10-09 11:17:55.099974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.134 [2024-10-09 11:17:55.099980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:35.135 [2024-10-09 11:17:55.111698] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:35.135 [2024-10-09 11:17:55.111717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.135 [2024-10-09 11:17:55.111724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.135 [2024-10-09 11:17:55.123800] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:35.135 [2024-10-09 11:17:55.123819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.135 [2024-10-09 11:17:55.123825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:35.135 [2024-10-09 11:17:55.132807] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:35.135 [2024-10-09 11:17:55.132826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.135 [2024-10-09 11:17:55.132832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:35.396 [2024-10-09 11:17:55.142575] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:35.396 [2024-10-09 11:17:55.142594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.396 [2024-10-09 11:17:55.142601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:35.396 [2024-10-09 11:17:55.152269] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:35.397 [2024-10-09 11:17:55.152287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.397 [2024-10-09 11:17:55.152294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.397 [2024-10-09 11:17:55.161750] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:35.397 [2024-10-09 11:17:55.161769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.397 [2024-10-09 11:17:55.161776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:35.397 [2024-10-09 11:17:55.173698] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:35.397 [2024-10-09 11:17:55.173717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.397 [2024-10-09 11:17:55.173724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:35.397 [2024-10-09 11:17:55.183355] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:35.397 [2024-10-09 11:17:55.183374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.397 [2024-10-09 11:17:55.183384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:35.397 [2024-10-09 11:17:55.193417] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:35.397 [2024-10-09 11:17:55.193435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.397 [2024-10-09 11:17:55.193442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.397 [2024-10-09 11:17:55.205436] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:35.397 [2024-10-09 11:17:55.205455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.397 [2024-10-09 11:17:55.205461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:35.397 [2024-10-09 11:17:55.218150] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:35.397 [2024-10-09 11:17:55.218169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.397 [2024-10-09 11:17:55.218175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:35.397 [2024-10-09 11:17:55.231245] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:35.397 [2024-10-09 11:17:55.231264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.397 [2024-10-09 11:17:55.231270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:35.397 [2024-10-09 11:17:55.243677] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:35.397 [2024-10-09 11:17:55.243696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.397 [2024-10-09 11:17:55.243703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.397 [2024-10-09 11:17:55.253715] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:35.397 [2024-10-09 11:17:55.253733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.397 [2024-10-09 11:17:55.253740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:35.397 [2024-10-09 11:17:55.264666] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:35.397 [2024-10-09 11:17:55.264685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.397 [2024-10-09 11:17:55.264692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:35.397 [2024-10-09 11:17:55.272269] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:35.397 [2024-10-09 11:17:55.272287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.397 [2024-10-09 11:17:55.272294] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:35.397 [2024-10-09 11:17:55.280456] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:35.397 [2024-10-09 11:17:55.280483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.397 [2024-10-09 11:17:55.280489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.397 [2024-10-09 11:17:55.290105] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:35.397 [2024-10-09 11:17:55.290124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.397 [2024-10-09 11:17:55.290131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:35.397 [2024-10-09 11:17:55.300391] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:35.397 [2024-10-09 11:17:55.300409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.397 [2024-10-09 11:17:55.300415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:35.397 [2024-10-09 11:17:55.309269] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:35.397 [2024-10-09 11:17:55.309287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.397 [2024-10-09 11:17:55.309294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:35.397 [2024-10-09 11:17:55.320364] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:35.397 [2024-10-09 11:17:55.320383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.397 [2024-10-09 11:17:55.320390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.397 [2024-10-09 11:17:55.329102] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:35.397 [2024-10-09 11:17:55.329121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.397 [2024-10-09 11:17:55.329128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:35.397 [2024-10-09 11:17:55.340028] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:35.397 [2024-10-09 11:17:55.340047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:37:35.397 [2024-10-09 11:17:55.340054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:35.397 [2024-10-09 11:17:55.349669] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:35.397 [2024-10-09 11:17:55.349688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.397 [2024-10-09 11:17:55.349695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:35.397 [2024-10-09 11:17:55.360438] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:35.397 [2024-10-09 11:17:55.360457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.397 [2024-10-09 11:17:55.360469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.397 [2024-10-09 11:17:55.371498] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:35.397 [2024-10-09 11:17:55.371517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.397 [2024-10-09 11:17:55.371523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:35.397 [2024-10-09 11:17:55.382201] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:35.397 [2024-10-09 11:17:55.382220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.397 [2024-10-09 11:17:55.382227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:35.397 [2024-10-09 11:17:55.392942] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:35.397 [2024-10-09 11:17:55.392961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.397 [2024-10-09 11:17:55.392968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:35.659 [2024-10-09 11:17:55.401989] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:35.659 [2024-10-09 11:17:55.402008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.659 [2024-10-09 11:17:55.402015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.659 [2024-10-09 11:17:55.412324] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:35.659 [2024-10-09 11:17:55.412342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8320 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.659 [2024-10-09 11:17:55.412348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:35.659 [2024-10-09 11:17:55.422807] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:35.659 [2024-10-09 11:17:55.422826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.659 [2024-10-09 11:17:55.422833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:35.659 [2024-10-09 11:17:55.432825] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:35.659 [2024-10-09 11:17:55.432844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.659 [2024-10-09 11:17:55.432850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:35.659 [2024-10-09 11:17:55.442265] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:35.659 [2024-10-09 11:17:55.442284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.659 [2024-10-09 11:17:55.442290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.659 [2024-10-09 11:17:55.453131] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:35.659 [2024-10-09 11:17:55.453150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.659 [2024-10-09 11:17:55.453163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:35.659 [2024-10-09 11:17:55.461265] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:35.659 [2024-10-09 11:17:55.461283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.659 [2024-10-09 11:17:55.461290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:35.659 [2024-10-09 11:17:55.472125] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:35.659 [2024-10-09 11:17:55.472144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.659 [2024-10-09 11:17:55.472150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:35.659 [2024-10-09 11:17:55.481968] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:35.659 [2024-10-09 11:17:55.481986] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.660 [2024-10-09 11:17:55.481993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.660 [2024-10-09 11:17:55.493399] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:35.660 [2024-10-09 11:17:55.493418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.660 [2024-10-09 11:17:55.493425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:35.660 [2024-10-09 11:17:55.505640] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:35.660 [2024-10-09 11:17:55.505659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.660 [2024-10-09 11:17:55.505666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:35.660 [2024-10-09 11:17:55.516675] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:35.660 [2024-10-09 11:17:55.516694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.660 [2024-10-09 11:17:55.516701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:35.660 [2024-10-09 11:17:55.527608] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:35.660 [2024-10-09 11:17:55.527626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.660 [2024-10-09 11:17:55.527633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.660 [2024-10-09 11:17:55.535926] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:35.660 [2024-10-09 11:17:55.535944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.660 [2024-10-09 11:17:55.535950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:35.660 [2024-10-09 11:17:55.542932] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:35.660 [2024-10-09 11:17:55.542954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.660 [2024-10-09 11:17:55.542961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:35.660 [2024-10-09 11:17:55.552541] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:35.660 [2024-10-09 11:17:55.552559] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.660 [2024-10-09 11:17:55.552566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:35.660 [2024-10-09 11:17:55.562083] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:35.660 [2024-10-09 11:17:55.562101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.660 [2024-10-09 11:17:55.562108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.660 [2024-10-09 11:17:55.572097] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:35.660 [2024-10-09 11:17:55.572115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.660 [2024-10-09 11:17:55.572122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:35.660 [2024-10-09 11:17:55.580768] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:35.660 [2024-10-09 11:17:55.580787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.660 [2024-10-09 11:17:55.580794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:35.660 [2024-10-09 11:17:55.590270] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:35.660 [2024-10-09 11:17:55.590288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.660 [2024-10-09 11:17:55.590295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:35.660 [2024-10-09 11:17:55.599883] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:35.660 [2024-10-09 11:17:55.599901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.660 [2024-10-09 11:17:55.599908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.660 [2024-10-09 11:17:55.605473] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:35.660 [2024-10-09 11:17:55.605491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.660 [2024-10-09 11:17:55.605497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:35.660 [2024-10-09 11:17:55.608884] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1f04d00) 00:37:35.660 [2024-10-09 11:17:55.608903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.660 [2024-10-09 11:17:55.608913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:35.660 [2024-10-09 11:17:55.616169] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:35.660 [2024-10-09 11:17:55.616188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.660 [2024-10-09 11:17:55.616194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:35.660 [2024-10-09 11:17:55.626561] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:35.660 [2024-10-09 11:17:55.626580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.660 [2024-10-09 11:17:55.626587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.660 [2024-10-09 11:17:55.635635] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:35.660 [2024-10-09 11:17:55.635654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.660 [2024-10-09 11:17:55.635661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:35.660 [2024-10-09 11:17:55.646060] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:35.660 [2024-10-09 11:17:55.646079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.660 [2024-10-09 11:17:55.646086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:35.660 [2024-10-09 11:17:55.656655] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:35.660 [2024-10-09 11:17:55.656673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.660 [2024-10-09 11:17:55.656680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:35.922 [2024-10-09 11:17:55.667645] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:35.922 [2024-10-09 11:17:55.667664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.922 [2024-10-09 11:17:55.667671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.922 [2024-10-09 11:17:55.677585] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:35.922 [2024-10-09 11:17:55.677604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.922 [2024-10-09 11:17:55.677611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:35.922 [2024-10-09 11:17:55.688635] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:35.922 [2024-10-09 11:17:55.688654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.922 [2024-10-09 11:17:55.688661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:35.922 [2024-10-09 11:17:55.698615] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:35.922 [2024-10-09 11:17:55.698637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.922 [2024-10-09 11:17:55.698643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:35.922 [2024-10-09 11:17:55.707313] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:35.922 [2024-10-09 11:17:55.707332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.922 [2024-10-09 11:17:55.707338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.922 [2024-10-09 11:17:55.715545] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:35.922 [2024-10-09 11:17:55.715564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.922 [2024-10-09 11:17:55.715571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:35.922 [2024-10-09 11:17:55.724876] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:35.922 [2024-10-09 11:17:55.724895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.922 [2024-10-09 11:17:55.724902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:35.922 [2024-10-09 11:17:55.734751] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:35.922 [2024-10-09 11:17:55.734770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.922 [2024-10-09 11:17:55.734777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
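Every failure in the run above follows the same three-line pattern: nvme_tcp.c:1470 reports the data digest mismatch on the receive path, nvme_qpair.c prints the READ it belonged to, and the command completes as COMMAND TRANSIENT TRANSPORT ERROR (00/22). The harness validates the run by reading the accumulated count of those completions back over the bdevperf RPC socket (the get_transient_errcount call visible further below). A minimal standalone sketch of that check, assuming the same socket path and bdev name as this run, and that bdev_nvme_set_options --nvme-error-stat is in effect as it is here:

  # Pull per-bdev NVMe error statistics from bdevperf and extract the
  # transient transport error tally (what host/digest.sh@27/@28 do with jq).
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  errs=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
      jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  # The check passes only if the injected digest corruption surfaced as errors.
  (( errs > 0 )) && echo "transient transport errors: $errs"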
00:37:35.922 [2024-10-09 11:17:55.745187] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:35.922 [2024-10-09 11:17:55.745206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.923 [2024-10-09 11:17:55.745212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.923 [2024-10-09 11:17:55.755500] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:35.923 [2024-10-09 11:17:55.755519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.923 [2024-10-09 11:17:55.755526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:35.923 [2024-10-09 11:17:55.766364] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:35.923 [2024-10-09 11:17:55.766382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.923 [2024-10-09 11:17:55.766388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:35.923 [2024-10-09 11:17:55.775677] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:35.923 [2024-10-09 11:17:55.775695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.923 [2024-10-09 11:17:55.775702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:35.923 [2024-10-09 11:17:55.786759] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:35.923 [2024-10-09 11:17:55.786778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.923 [2024-10-09 11:17:55.786785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.923 [2024-10-09 11:17:55.794959] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:35.923 [2024-10-09 11:17:55.794978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.923 [2024-10-09 11:17:55.794984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:35.923 [2024-10-09 11:17:55.803082] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:35.923 [2024-10-09 11:17:55.803101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.923 [2024-10-09 11:17:55.803108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:35.923 [2024-10-09 11:17:55.811032] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:35.923 [2024-10-09 11:17:55.811050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.923 [2024-10-09 11:17:55.811057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:35.923 [2024-10-09 11:17:55.821418] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:35.923 [2024-10-09 11:17:55.821437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.923 [2024-10-09 11:17:55.821445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.923 [2024-10-09 11:17:55.832929] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:35.923 [2024-10-09 11:17:55.832948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.923 [2024-10-09 11:17:55.832955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:35.923 [2024-10-09 11:17:55.843033] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:35.923 [2024-10-09 11:17:55.843051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.923 [2024-10-09 11:17:55.843058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:35.923 [2024-10-09 11:17:55.853356] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:35.923 [2024-10-09 11:17:55.853375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.923 [2024-10-09 11:17:55.853382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:35.923 [2024-10-09 11:17:55.864790] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:35.923 [2024-10-09 11:17:55.864808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.923 [2024-10-09 11:17:55.864817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:35.923 [2024-10-09 11:17:55.876373] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00) 00:37:35.923 [2024-10-09 11:17:55.876391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:35.923 [2024-10-09 11:17:55.876397] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:37:35.923 [2024-10-09 11:17:55.886675] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00)
00:37:35.923 [2024-10-09 11:17:55.886694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:35.923 [2024-10-09 11:17:55.886700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:37:35.923 [2024-10-09 11:17:55.896484] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00)
00:37:35.923 [2024-10-09 11:17:55.896502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:35.923 [2024-10-09 11:17:55.896508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:37:35.923 [2024-10-09 11:17:55.906981] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00)
00:37:35.923 [2024-10-09 11:17:55.906999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:35.923 [2024-10-09 11:17:55.907006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:35.923 [2024-10-09 11:17:55.917868] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f04d00)
00:37:35.923 [2024-10-09 11:17:55.917886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:35.923 [2024-10-09 11:17:55.917893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:37:36.183 3059.50 IOPS, 382.44 MiB/s
00:37:36.183 Latency(us)
00:37:36.183 [2024-10-09T09:17:56.185Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:37:36.183 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:37:36.183 nvme0n1 : 2.05 2997.04 374.63 0.00 0.00 5235.12 992.18 47953.17
00:37:36.183 [2024-10-09T09:17:56.185Z] ===================================================================================================================
00:37:36.183 [2024-10-09T09:17:56.185Z] Total : 2997.04 374.63 0.00 0.00 5235.12 992.18 47953.17
00:37:36.183 {
00:37:36.183 "results": [
00:37:36.183 {
00:37:36.183 "job": "nvme0n1",
00:37:36.183 "core_mask": "0x2",
00:37:36.183 "workload": "randread",
00:37:36.183 "status": "finished",
00:37:36.183 "queue_depth": 16,
00:37:36.183 "io_size": 131072,
00:37:36.183 "runtime": 2.047018,
00:37:36.183 "iops": 2997.0425272274106,
00:37:36.183 "mibps": 374.6303159034263,
00:37:36.183 "io_failed": 0,
00:37:36.183 "io_timeout": 0,
00:37:36.183 "avg_latency_us": 5235.121023218806,
00:37:36.183 "min_latency_us": 992.1817574340128,
00:37:36.183 "max_latency_us": 47953.170731707316
00:37:36.183 }
00:37:36.183 ],
00:37:36.183 "core_count": 1
00:37:36.183 }
00:37:36.183 11:17:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:37:36.183 11:17:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:37:36.183 11:17:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:37:36.183 | .driver_specific
00:37:36.183 | .nvme_error
00:37:36.183 | .status_code
00:37:36.183 | .command_transient_transport_error'
00:37:36.183 11:17:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:37:36.183 11:17:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 197 > 0 ))
00:37:36.183 11:17:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2117911
00:37:36.183 11:17:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 2117911 ']'
00:37:36.183 11:17:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 2117911
00:37:36.183 11:17:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:37:36.183 11:17:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:37:36.183 11:17:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2117911
00:37:36.443 11:17:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:37:36.443 11:17:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:37:36.443 11:17:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2117911'
00:37:36.443 killing process with pid 2117911
00:37:36.443 11:17:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 2117911
00:37:36.443 Received shutdown signal, test time was about 2.000000 seconds
00:37:36.443
00:37:36.443 Latency(us)
[2024-10-09T09:17:56.445Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
[2024-10-09T09:17:56.445Z] ===================================================================================================================
[2024-10-09T09:17:56.445Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:37:36.443 11:17:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 2117911
00:37:36.443 11:17:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:37:36.443 11:17:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:37:36.443 11:17:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:37:36.443 11:17:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:37:36.443 11:17:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:37:36.443 11:17:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2118596
00:37:36.443 11:17:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2118596 /var/tmp/bperf.sock
00:37:36.443 11:17:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 2118596 ']'
00:37:36.443 11:17:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:37:36.443 11:17:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:37:36.443 11:17:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:37:36.443 11:17:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:37:36.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:37:36.443 11:17:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:37:36.443 11:17:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:37:36.703 [2024-10-09 11:17:56.377258] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization...
00:37:36.703 [2024-10-09 11:17:56.377316] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2118596 ]
00:37:36.703 [2024-10-09 11:17:56.507607] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation.
00:37:36.703 [2024-10-09 11:17:56.554543] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:37:36.703 [2024-10-09 11:17:56.568719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:37:37.274 11:17:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:37:37.274 11:17:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:37:37.274 11:17:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:37:37.274 11:17:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:37:37.534 11:17:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:37:37.534 11:17:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:37.534 11:17:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:37:37.534 11:17:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:37.534 11:17:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:37:37.534 11:17:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b
nvme0 00:37:37.794 nvme0n1 00:37:37.794 11:17:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:37:37.794 11:17:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:37.794 11:17:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:38.054 11:17:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:38.054 11:17:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:37:38.055 11:17:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:38.055 Running I/O for 2 seconds... 00:37:38.055 [2024-10-09 11:17:57.905444] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166eb760 00:37:38.055 [2024-10-09 11:17:57.907093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:17116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:38.055 [2024-10-09 11:17:57.907123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:37:38.055 [2024-10-09 11:17:57.915424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166f2d80 00:37:38.055 [2024-10-09 11:17:57.916520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:13161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:38.055 [2024-10-09 11:17:57.916538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:37:38.055 [2024-10-09 11:17:57.928155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166f2d80 00:37:38.055 [2024-10-09 11:17:57.929263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:10979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:38.055 [2024-10-09 11:17:57.929282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:37:38.055 [2024-10-09 11:17:57.941683] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166f2d80 00:37:38.055 [2024-10-09 11:17:57.943429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:4683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:38.055 [2024-10-09 11:17:57.943446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:37:38.055 [2024-10-09 11:17:57.951360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166ee5c8 00:37:38.055 [2024-10-09 11:17:57.952436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:16891 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:38.055 [2024-10-09 11:17:57.952453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:37:38.055 [2024-10-09 11:17:57.964084] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166ee5c8 00:37:38.055 [2024-10-09 11:17:57.965131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:5590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:38.055 [2024-10-09 11:17:57.965147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:38.055 [2024-10-09 11:17:57.976010] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166ee5c8 00:37:38.055 [2024-10-09 11:17:57.977099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:18747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:38.055 [2024-10-09 11:17:57.977116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:38.055 [2024-10-09 11:17:57.987987] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166ee5c8 00:37:38.055 [2024-10-09 11:17:57.989073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:8239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:38.055 [2024-10-09 11:17:57.989090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:38.055 [2024-10-09 11:17:57.999907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166ee5c8 00:37:38.055 [2024-10-09 11:17:58.000992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:12811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:38.055 [2024-10-09 11:17:58.001009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:38.055 [2024-10-09 11:17:58.011820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166ee5c8 00:37:38.055 [2024-10-09 11:17:58.012903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:20769 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:38.055 [2024-10-09 11:17:58.012920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:38.055 [2024-10-09 11:17:58.023744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166ee5c8 00:37:38.055 [2024-10-09 11:17:58.024827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:7577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:38.055 [2024-10-09 11:17:58.024844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:38.055 [2024-10-09 11:17:58.035661] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166ee5c8 00:37:38.055 [2024-10-09 11:17:58.036708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:3347 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:38.055 [2024-10-09 11:17:58.036724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:38.055 
[2024-10-09 11:17:58.049118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166e1710 00:37:38.055 [2024-10-09 11:17:58.050983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:14145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:38.055 [2024-10-09 11:17:58.050999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:38.315 [2024-10-09 11:17:58.059642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166f35f0 00:37:38.315 [2024-10-09 11:17:58.060673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:8952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:38.316 [2024-10-09 11:17:58.060689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:37:38.316 [2024-10-09 11:17:58.071541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166e6300 00:37:38.316 [2024-10-09 11:17:58.072587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:16902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:38.316 [2024-10-09 11:17:58.072603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:37:38.316 [2024-10-09 11:17:58.083490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166e5220 00:37:38.316 [2024-10-09 11:17:58.084532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:13629 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:38.316 [2024-10-09 11:17:58.084548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:37:38.316 [2024-10-09 11:17:58.096922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166f35f0 00:37:38.316 [2024-10-09 11:17:58.098625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:38.316 [2024-10-09 11:17:58.098641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:37:38.316 [2024-10-09 11:17:58.108798] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166f7da8 00:37:38.316 [2024-10-09 11:17:58.110487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:14780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:38.316 [2024-10-09 11:17:58.110503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:37:38.316 [2024-10-09 11:17:58.119278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166f46d0 00:37:38.316 [2024-10-09 11:17:58.120331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:16064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:38.316 [2024-10-09 11:17:58.120347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0054 p:0 m:0 
dnr:0 00:37:38.316 [2024-10-09 11:17:58.131227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166f57b0 00:37:38.316 [2024-10-09 11:17:58.132286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:9382 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:38.316 [2024-10-09 11:17:58.132306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:37:38.316 [2024-10-09 11:17:58.144706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166e5220 00:37:38.316 [2024-10-09 11:17:58.146397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:22487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:38.316 [2024-10-09 11:17:58.146413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:37:38.316 [2024-10-09 11:17:58.155069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166eee38 00:37:38.316 [2024-10-09 11:17:58.156111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:23163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:38.316 [2024-10-09 11:17:58.156127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:37:38.316 [2024-10-09 11:17:58.166998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166eee38 00:37:38.316 [2024-10-09 11:17:58.168039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:24625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:38.316 [2024-10-09 11:17:58.168054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:37:38.316 [2024-10-09 11:17:58.180439] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166eee38 00:37:38.316 [2024-10-09 11:17:58.182122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:17653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:38.316 [2024-10-09 11:17:58.182138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:37:38.316 [2024-10-09 11:17:58.190795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166e5a90 00:37:38.316 [2024-10-09 11:17:58.191817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:24463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:38.316 [2024-10-09 11:17:58.191833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:37:38.316 [2024-10-09 11:17:58.202671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166ee5c8 00:37:38.316 [2024-10-09 11:17:58.203696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:6903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:38.316 [2024-10-09 11:17:58.203712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 
cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:37:38.316 [2024-10-09 11:17:58.214586] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166ee5c8 00:37:38.316 [2024-10-09 11:17:58.215602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:8437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:38.316 [2024-10-09 11:17:58.215619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:37:38.316 [2024-10-09 11:17:58.226505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166ee5c8 00:37:38.316 [2024-10-09 11:17:58.227522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:21377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:38.316 [2024-10-09 11:17:58.227538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:37:38.316 [2024-10-09 11:17:58.238407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166ee5c8 00:37:38.316 [2024-10-09 11:17:58.239402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:24858 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:38.316 [2024-10-09 11:17:58.239419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:37:38.316 [2024-10-09 11:17:58.250332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166ee5c8 00:37:38.316 [2024-10-09 11:17:58.251355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:20381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:38.316 [2024-10-09 11:17:58.251371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:37:38.316 [2024-10-09 11:17:58.262233] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166ee5c8 00:37:38.316 [2024-10-09 11:17:58.263264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:17368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:38.316 [2024-10-09 11:17:58.263280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:37:38.316 [2024-10-09 11:17:58.274145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166ee5c8 00:37:38.316 [2024-10-09 11:17:58.275157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:5969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:38.316 [2024-10-09 11:17:58.275174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:37:38.316 [2024-10-09 11:17:58.286058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166ee5c8 00:37:38.316 [2024-10-09 11:17:58.287075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:38.316 [2024-10-09 11:17:58.287092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:37:38.316 [2024-10-09 11:17:58.297974] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166ee5c8 00:37:38.316 [2024-10-09 11:17:58.298954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:38.316 [2024-10-09 11:17:58.298970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:37:38.316 [2024-10-09 11:17:58.309907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166f8a50 00:37:38.316 [2024-10-09 11:17:58.310911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:4564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:38.316 [2024-10-09 11:17:58.310928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:37:38.577 [2024-10-09 11:17:58.321883] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166f7970 00:37:38.577 [2024-10-09 11:17:58.322881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:13087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:38.577 [2024-10-09 11:17:58.322897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:37:38.577 [2024-10-09 11:17:58.333847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166f6890 00:37:38.577 [2024-10-09 11:17:58.334851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:7014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:38.577 [2024-10-09 11:17:58.334867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:37:38.577 [2024-10-09 11:17:58.347551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166e1710 00:37:38.577 [2024-10-09 11:17:58.349210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:6297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:38.577 [2024-10-09 11:17:58.349227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:37:38.577 [2024-10-09 11:17:58.357914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166e0630 00:37:38.577 [2024-10-09 11:17:58.358927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:19478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:38.577 [2024-10-09 11:17:58.358944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:37:38.577 [2024-10-09 11:17:58.369848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166e0630 00:37:38.577 [2024-10-09 11:17:58.370862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:12111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:38.577 [2024-10-09 11:17:58.370878] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:37:38.577 [2024-10-09 11:17:58.381764] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166e0630 00:37:38.577 [2024-10-09 11:17:58.382777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:24061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:38.577 [2024-10-09 11:17:58.382793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:37:38.577 [2024-10-09 11:17:58.393681] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166e0630 00:37:38.577 [2024-10-09 11:17:58.394693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:20798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:38.577 [2024-10-09 11:17:58.394710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:37:38.577 [2024-10-09 11:17:58.405619] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166e0630 00:37:38.577 [2024-10-09 11:17:58.406611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:12792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:38.577 [2024-10-09 11:17:58.406627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:37:38.577 [2024-10-09 11:17:58.417516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166f6020 00:37:38.577 [2024-10-09 11:17:58.418526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:23548 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:38.577 [2024-10-09 11:17:58.418542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:37:38.577 [2024-10-09 11:17:58.429481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166f7100 00:37:38.577 [2024-10-09 11:17:58.430487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:25111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:38.577 [2024-10-09 11:17:58.430504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:37:38.577 [2024-10-09 11:17:58.440568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166f7970 00:37:38.577 [2024-10-09 11:17:58.441518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:4779 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:38.577 [2024-10-09 11:17:58.441538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:37:38.577 [2024-10-09 11:17:58.455366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166ec408 00:37:38.577 [2024-10-09 11:17:58.457176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:17160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:38.577 [2024-10-09 11:17:58.457193] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:37:38.577 [2024-10-09 11:17:58.465737] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166f6890 00:37:38.577 [2024-10-09 11:17:58.466859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:23275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:38.577 [2024-10-09 11:17:58.466876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:37:38.577 [2024-10-09 11:17:58.477644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166fef90 00:37:38.577 [2024-10-09 11:17:58.478779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:21651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:38.577 [2024-10-09 11:17:58.478795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:37:38.577 [2024-10-09 11:17:58.491147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166fe2e8 00:37:38.577 [2024-10-09 11:17:58.492921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:38.577 [2024-10-09 11:17:58.492938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:37:38.577 [2024-10-09 11:17:58.501577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166f9b30 00:37:38.577 [2024-10-09 11:17:58.502699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:4352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:38.577 [2024-10-09 11:17:58.502716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:37:38.577 [2024-10-09 11:17:58.515038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166ebb98 00:37:38.577 [2024-10-09 11:17:58.516814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:10644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:38.577 [2024-10-09 11:17:58.516831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:37:38.577 [2024-10-09 11:17:58.524665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166ec408 00:37:38.577 [2024-10-09 11:17:58.525799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:6023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:38.577 [2024-10-09 11:17:58.525815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:37:38.577 [2024-10-09 11:17:58.536533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166f6890 00:37:38.577 [2024-10-09 11:17:58.537654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:38.577 [2024-10-09 
11:17:58.537671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:37:38.577 [2024-10-09 11:17:58.549256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166de038 00:37:38.577 [2024-10-09 11:17:58.550414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:24010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:38.577 [2024-10-09 11:17:58.550431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:38.577 [2024-10-09 11:17:58.560453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166f92c0 00:37:38.578 [2024-10-09 11:17:58.561579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:4894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:38.578 [2024-10-09 11:17:58.561596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:37:38.578 [2024-10-09 11:17:58.574750] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166fa3a0 00:37:38.578 [2024-10-09 11:17:58.576518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:14391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:38.578 [2024-10-09 11:17:58.576534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:37:38.838 [2024-10-09 11:17:58.585522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166ed0b0 00:37:38.838 [2024-10-09 11:17:58.586780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:4566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:38.838 [2024-10-09 11:17:58.586796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:38.838 [2024-10-09 11:17:58.599180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166eea00 00:37:38.838 [2024-10-09 11:17:58.601131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:15747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:38.838 [2024-10-09 11:17:58.601147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:37:38.838 [2024-10-09 11:17:58.609562] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166e9168 00:37:38.838 [2024-10-09 11:17:58.610826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:3683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:38.838 [2024-10-09 11:17:58.610843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:37:38.839 [2024-10-09 11:17:58.621504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166e9168 00:37:38.839 [2024-10-09 11:17:58.622800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:22805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
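The WRITE completions in this pass fail for the same underlying reason as the earlier READ pass, although here the digest mismatch is reported by data_crc32_calc_done in tcp.c rather than by the host receive path: crc32c error injection was re-armed just before the I/O started (host/digest.sh@63 disables it while the controller is attached with --ddgst, and host/digest.sh@67 re-arms it with -t corrupt -i 256). A sketch of that toggle using the same rpc.py arguments seen in this log; the socket path here is an assumption, since the harness's rpc_cmd wrapper supplies its own:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  sock=/var/tmp/spdk.sock  # assumed default; rpc_cmd picks the real socket
  # Stop corrupting crc32c results while the controller is being attached.
  "$rpc" -s "$sock" accel_error_inject_error -o crc32c -t disable
  # Re-arm corruption for the measured run (flags exactly as host/digest.sh@67).
  "$rpc" -s "$sock" accel_error_inject_error -o crc32c -t corrupt -i 256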
00:37:38.839 [2024-10-09 11:17:58.622817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:37:38.839 [2024-10-09 11:17:58.633423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166e9168 00:37:38.839 [2024-10-09 11:17:58.634711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:2563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:38.839 [2024-10-09 11:17:58.634727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:37:38.839 [2024-10-09 11:17:58.645343] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166e9168 00:37:38.839 [2024-10-09 11:17:58.646636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:12407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:38.839 [2024-10-09 11:17:58.646653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:37:38.839 [2024-10-09 11:17:58.657259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166e9168 00:37:38.839 [2024-10-09 11:17:58.658542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:38.839 [2024-10-09 11:17:58.658558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:37:38.839 [2024-10-09 11:17:58.668340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166e88f8 00:37:38.839 [2024-10-09 11:17:58.669614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:10182 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:38.839 [2024-10-09 11:17:58.669630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:37:38.839 [2024-10-09 11:17:58.681036] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166e88f8 00:37:38.839 [2024-10-09 11:17:58.682272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:11573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:38.839 [2024-10-09 11:17:58.682289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:37:38.839 [2024-10-09 11:17:58.692958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166f3a28 00:37:38.839 [2024-10-09 11:17:58.694219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:21793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:38.839 [2024-10-09 11:17:58.694236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:37:38.839 [2024-10-09 11:17:58.704929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166fbcf0 00:37:38.839 [2024-10-09 11:17:58.706209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:2022 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:37:38.839 [2024-10-09 11:17:58.706226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:37:38.839 [2024-10-09 11:17:58.716113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166e01f8 00:37:38.839 [2024-10-09 11:17:58.717339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:25372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:38.839 [2024-10-09 11:17:58.717356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:37:38.839 [2024-10-09 11:17:58.728851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166e9168 00:37:38.839 [2024-10-09 11:17:58.730138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:8383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:38.839 [2024-10-09 11:17:58.730155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:37:38.839 [2024-10-09 11:17:58.740044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166f3a28 00:37:38.839 [2024-10-09 11:17:58.741401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:16940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:38.839 [2024-10-09 11:17:58.741419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:37:38.839 [2024-10-09 11:17:58.752900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166f4b08 00:37:38.839 [2024-10-09 11:17:58.754160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:6211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:38.839 [2024-10-09 11:17:58.754180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:37:38.839 [2024-10-09 11:17:58.764780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166e8088 00:37:38.839 [2024-10-09 11:17:58.766040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:11546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:38.839 [2024-10-09 11:17:58.766057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:37:38.839 [2024-10-09 11:17:58.775914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166f4298 00:37:38.839 [2024-10-09 11:17:58.777113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:25246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:38.839 [2024-10-09 11:17:58.777129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:37:38.839 [2024-10-09 11:17:58.790737] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166f3e60 00:37:38.839 [2024-10-09 11:17:58.792811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 
lba:22644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:38.839 [2024-10-09 11:17:58.792827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:37:38.839 [2024-10-09 11:17:58.801093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166f8618 00:37:38.839 [2024-10-09 11:17:58.802501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:9880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:38.839 [2024-10-09 11:17:58.802517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:38.839 [2024-10-09 11:17:58.813026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166f8618 00:37:38.839 [2024-10-09 11:17:58.814438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:24822 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:38.839 [2024-10-09 11:17:58.814455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:38.839 [2024-10-09 11:17:58.824983] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166f8618 00:37:38.839 [2024-10-09 11:17:58.826396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:38.839 [2024-10-09 11:17:58.826413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:38.839 [2024-10-09 11:17:58.836918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166f8618 00:37:38.839 [2024-10-09 11:17:58.838321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:18693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:38.839 [2024-10-09 11:17:58.838337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:39.100 [2024-10-09 11:17:58.848832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166f8618 00:37:39.100 [2024-10-09 11:17:58.850252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:9771 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:39.100 [2024-10-09 11:17:58.850269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:39.100 [2024-10-09 11:17:58.860751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166f8618 00:37:39.100 [2024-10-09 11:17:58.862166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:14611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:39.100 [2024-10-09 11:17:58.862183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:39.100 [2024-10-09 11:17:58.872719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166f8618 00:37:39.100 [2024-10-09 11:17:58.874132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:12 nsid:1 lba:20682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:39.100 [2024-10-09 11:17:58.874148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:39.100 [2024-10-09 11:17:58.884697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166f8618 00:37:39.100 [2024-10-09 11:17:58.886112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:39.100 [2024-10-09 11:17:58.886129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:39.100 21219.00 IOPS, 82.89 MiB/s [2024-10-09T09:17:59.102Z] [2024-10-09 11:17:58.896616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166f8e88 00:37:39.100 [2024-10-09 11:17:58.898021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:39.100 [2024-10-09 11:17:58.898038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:39.100 [2024-10-09 11:17:58.908568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166ddc00 00:37:39.100 [2024-10-09 11:17:58.909962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:10499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:39.100 [2024-10-09 11:17:58.909978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:39.100 [2024-10-09 11:17:58.920524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166f46d0 00:37:39.100 [2024-10-09 11:17:58.921955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:14983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:39.100 [2024-10-09 11:17:58.921971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:37:39.100 [2024-10-09 11:17:58.932508] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166eee38 00:37:39.100 [2024-10-09 11:17:58.933872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:2589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:39.100 [2024-10-09 11:17:58.933889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:37:39.100 [2024-10-09 11:17:58.944470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166ed4e8 00:37:39.100 [2024-10-09 11:17:58.945908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:17052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:39.100 [2024-10-09 11:17:58.945925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:39.100 [2024-10-09 11:17:58.956483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166e7818 00:37:39.100 [2024-10-09 
11:17:58.957898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:1929 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:39.100 [2024-10-09 11:17:58.957915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:37:39.100 [2024-10-09 11:17:58.969964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166e88f8 00:37:39.100 [2024-10-09 11:17:58.972000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:21879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:39.100 [2024-10-09 11:17:58.972016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:37:39.100 [2024-10-09 11:17:58.980318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166ed4e8 00:37:39.100 [2024-10-09 11:17:58.981683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:39.100 [2024-10-09 11:17:58.981700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:37:39.100 [2024-10-09 11:17:58.993752] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166f4b08 00:37:39.100 [2024-10-09 11:17:58.995778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:17529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:39.100 [2024-10-09 11:17:58.995795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:37:39.100 [2024-10-09 11:17:59.004106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166f8e88 00:37:39.100 [2024-10-09 11:17:59.005506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:3227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:39.101 [2024-10-09 11:17:59.005524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:37:39.101 [2024-10-09 11:17:59.016070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166f8e88 00:37:39.101 [2024-10-09 11:17:59.017446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:15626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:39.101 [2024-10-09 11:17:59.017462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:37:39.101 [2024-10-09 11:17:59.028025] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166f8e88 00:37:39.101 [2024-10-09 11:17:59.029404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:12689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:39.101 [2024-10-09 11:17:59.029421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:37:39.101 [2024-10-09 11:17:59.039951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166f8e88 
00:37:39.101 [2024-10-09 11:17:59.041327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:7570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:39.101 [2024-10-09 11:17:59.041343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:37:39.101 [2024-10-09 11:17:59.051875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166f8e88 00:37:39.101 [2024-10-09 11:17:59.053245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:7003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:39.101 [2024-10-09 11:17:59.053261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:37:39.101 [2024-10-09 11:17:59.063800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166f8e88 00:37:39.101 [2024-10-09 11:17:59.065180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:14688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:39.101 [2024-10-09 11:17:59.065200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:37:39.101 [2024-10-09 11:17:59.075722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166f8e88 00:37:39.101 [2024-10-09 11:17:59.077100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:7113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:39.101 [2024-10-09 11:17:59.077117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:37:39.101 [2024-10-09 11:17:59.087649] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166f8e88 00:37:39.101 [2024-10-09 11:17:59.089006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:2976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:39.101 [2024-10-09 11:17:59.089023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:37:39.101 [2024-10-09 11:17:59.099578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166f8e88 00:37:39.101 [2024-10-09 11:17:59.100962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:10268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:39.101 [2024-10-09 11:17:59.100979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:37:39.363 [2024-10-09 11:17:59.110679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166f8618 00:37:39.363 [2024-10-09 11:17:59.112040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:24904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:39.363 [2024-10-09 11:17:59.112057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:37:39.363 [2024-10-09 11:17:59.123348] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1616290) with pdu=0x2000166e99d8 00:37:39.363 [2024-10-09 11:17:59.124688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:15997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:39.363 [2024-10-09 11:17:59.124704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:37:39.363 [2024-10-09 11:17:59.135291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166e4140 00:37:39.363 [2024-10-09 11:17:59.136670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:10762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:39.363 [2024-10-09 11:17:59.136686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:37:39.363 [2024-10-09 11:17:59.146484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166eff18 00:37:39.363 [2024-10-09 11:17:59.147828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:16995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:39.363 [2024-10-09 11:17:59.147844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:37:39.363 [2024-10-09 11:17:59.160696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166eff18 00:37:39.363 [2024-10-09 11:17:59.162666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:3138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:39.363 [2024-10-09 11:17:59.162683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:37:39.363 [2024-10-09 11:17:59.171058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166f0788 00:37:39.363 [2024-10-09 11:17:59.172364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:22539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:39.363 [2024-10-09 11:17:59.172381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:37:39.363 [2024-10-09 11:17:59.183026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166f57b0 00:37:39.363 [2024-10-09 11:17:59.184338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:4070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:39.363 [2024-10-09 11:17:59.184355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:37:39.363 [2024-10-09 11:17:59.196568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166e4578 00:37:39.363 [2024-10-09 11:17:59.198530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:24291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:39.363 [2024-10-09 11:17:59.198546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:37:39.363 [2024-10-09 11:17:59.206968] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x1616290) with pdu=0x2000166fe720 00:37:39.363 [2024-10-09 11:17:59.208306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:20604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:39.363 [2024-10-09 11:17:59.208323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:37:39.363 [2024-10-09 11:17:59.218945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166fd640 00:37:39.363 [2024-10-09 11:17:59.220284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:6908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:39.363 [2024-10-09 11:17:59.220301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:37:39.363 [2024-10-09 11:17:59.232475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166f7970 00:37:39.363 [2024-10-09 11:17:59.234490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:5631 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:39.363 [2024-10-09 11:17:59.234506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:37:39.363 [2024-10-09 11:17:59.242879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166eee38 00:37:39.363 [2024-10-09 11:17:59.244201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:10622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:39.363 [2024-10-09 11:17:59.244218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:37:39.363 [2024-10-09 11:17:59.254798] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166eee38 00:37:39.363 [2024-10-09 11:17:59.256116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:15575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:39.363 [2024-10-09 11:17:59.256133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:37:39.363 [2024-10-09 11:17:59.266744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166eee38 00:37:39.363 [2024-10-09 11:17:59.268066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:14351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:39.363 [2024-10-09 11:17:59.268082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:37:39.363 [2024-10-09 11:17:59.278651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166f7100 00:37:39.363 [2024-10-09 11:17:59.279982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:23752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:39.363 [2024-10-09 11:17:59.279999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:39.363 [2024-10-09 11:17:59.292160] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166fd640 00:37:39.363 [2024-10-09 11:17:59.294131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:13129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:39.363 [2024-10-09 11:17:59.294147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:37:39.363 [2024-10-09 11:17:59.302527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166f1ca0 00:37:39.363 [2024-10-09 11:17:59.303845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:22242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:39.363 [2024-10-09 11:17:59.303862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:39.363 [2024-10-09 11:17:59.314420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166fef90 00:37:39.363 [2024-10-09 11:17:59.315750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:1068 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:39.363 [2024-10-09 11:17:59.315767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:37:39.363 [2024-10-09 11:17:59.325591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166df550 00:37:39.363 [2024-10-09 11:17:59.326896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:2668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:39.363 [2024-10-09 11:17:59.326912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:39.363 [2024-10-09 11:17:59.340101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166f7970 00:37:39.363 [2024-10-09 11:17:59.342068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:39.363 [2024-10-09 11:17:59.342085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:39.363 [2024-10-09 11:17:59.350482] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166eee38 00:37:39.363 [2024-10-09 11:17:59.351802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:22748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:39.363 [2024-10-09 11:17:59.351818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:37:39.363 [2024-10-09 11:17:59.362398] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166eee38 00:37:39.363 [2024-10-09 11:17:59.363695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:22128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:39.363 [2024-10-09 11:17:59.363712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:37:39.624 
[2024-10-09 11:17:59.374337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166eee38 00:37:39.624 [2024-10-09 11:17:59.375658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:39.624 [2024-10-09 11:17:59.375678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:37:39.624 [2024-10-09 11:17:59.386256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166eee38 00:37:39.624 [2024-10-09 11:17:59.387527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:16044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:39.624 [2024-10-09 11:17:59.387544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:37:39.624 [2024-10-09 11:17:59.398180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166f5be8 00:37:39.624 [2024-10-09 11:17:59.399489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:11190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:39.624 [2024-10-09 11:17:59.399505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:37:39.624 [2024-10-09 11:17:59.410157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166fe2e8 00:37:39.625 [2024-10-09 11:17:59.411471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:4647 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:39.625 [2024-10-09 11:17:59.411488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:37:39.625 [2024-10-09 11:17:59.422076] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166f5be8 00:37:39.625 [2024-10-09 11:17:59.423365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:19279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:39.625 [2024-10-09 11:17:59.423382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:37:39.625 [2024-10-09 11:17:59.433221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166eee38 00:37:39.625 [2024-10-09 11:17:59.434503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:16988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:39.625 [2024-10-09 11:17:59.434519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:37:39.625 [2024-10-09 11:17:59.445928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166eee38 00:37:39.625 [2024-10-09 11:17:59.447213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:8462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:39.625 [2024-10-09 11:17:59.447229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006c p:0 
m:0 dnr:0 00:37:39.625 [2024-10-09 11:17:59.457830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166eee38 00:37:39.625 [2024-10-09 11:17:59.459088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:19659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:39.625 [2024-10-09 11:17:59.459105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:37:39.625 [2024-10-09 11:17:59.469762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166eee38 00:37:39.625 [2024-10-09 11:17:59.471052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:3406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:39.625 [2024-10-09 11:17:59.471069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:37:39.625 [2024-10-09 11:17:59.481668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166eee38 00:37:39.625 [2024-10-09 11:17:59.482959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:13672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:39.625 [2024-10-09 11:17:59.482976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:37:39.625 [2024-10-09 11:17:59.492778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166e01f8 00:37:39.625 [2024-10-09 11:17:59.494032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:13620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:39.625 [2024-10-09 11:17:59.494048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:37:39.625 [2024-10-09 11:17:59.505487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166f9f68 00:37:39.625 [2024-10-09 11:17:59.506767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:22537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:39.625 [2024-10-09 11:17:59.506783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:37:39.625 [2024-10-09 11:17:59.516610] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166dece0 00:37:39.625 [2024-10-09 11:17:59.517870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:16291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:39.625 [2024-10-09 11:17:59.517886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:37:39.625 [2024-10-09 11:17:59.530810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166dece0 00:37:39.625 [2024-10-09 11:17:59.532725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:12503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:39.625 [2024-10-09 11:17:59.532742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 
cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:37:39.625 [2024-10-09 11:17:59.541183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166efae0 00:37:39.625 [2024-10-09 11:17:59.542438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:4392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:39.625 [2024-10-09 11:17:59.542455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:37:39.625 [2024-10-09 11:17:59.553093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166efae0 00:37:39.625 [2024-10-09 11:17:59.554296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:22460 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:39.625 [2024-10-09 11:17:59.554313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:37:39.625 [2024-10-09 11:17:59.564970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166df988 00:37:39.625 [2024-10-09 11:17:59.566212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:39.625 [2024-10-09 11:17:59.566229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:37:39.625 [2024-10-09 11:17:59.576896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166df988 00:37:39.625 [2024-10-09 11:17:59.578149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:10969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:39.625 [2024-10-09 11:17:59.578165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:37:39.625 [2024-10-09 11:17:59.588850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166df988 00:37:39.625 [2024-10-09 11:17:59.590080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:15513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:39.625 [2024-10-09 11:17:59.590096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:37:39.625 [2024-10-09 11:17:59.600773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166df988 00:37:39.625 [2024-10-09 11:17:59.602019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:13295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:39.625 [2024-10-09 11:17:59.602035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:37:39.625 [2024-10-09 11:17:59.612710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166df988 00:37:39.625 [2024-10-09 11:17:59.613958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:6501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:39.625 [2024-10-09 11:17:59.613974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:37:39.625 [2024-10-09 11:17:59.624602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166df988 00:37:39.886 [2024-10-09 11:17:59.625852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:3284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:39.886 [2024-10-09 11:17:59.625869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:37:39.886 [2024-10-09 11:17:59.636502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166df988 00:37:39.886 [2024-10-09 11:17:59.637751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:17382 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:39.886 [2024-10-09 11:17:59.637767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:37:39.886 [2024-10-09 11:17:59.648413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166df988 00:37:39.886 [2024-10-09 11:17:59.649744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:14047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:39.886 [2024-10-09 11:17:59.649760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:37:39.886 [2024-10-09 11:17:59.660410] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166df988 00:37:39.886 [2024-10-09 11:17:59.661683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22799 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:39.886 [2024-10-09 11:17:59.661699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:37:39.886 [2024-10-09 11:17:59.672353] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166fac10 00:37:39.886 [2024-10-09 11:17:59.673618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:2395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:39.886 [2024-10-09 11:17:59.673634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:37:39.886 [2024-10-09 11:17:59.684325] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166de470 00:37:39.886 [2024-10-09 11:17:59.685536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:17952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:39.886 [2024-10-09 11:17:59.685556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:37:39.886 [2024-10-09 11:17:59.696244] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166f1430 00:37:39.886 [2024-10-09 11:17:59.697455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:1781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:39.887 [2024-10-09 11:17:59.697473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:37:39.887 [2024-10-09 11:17:59.709729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166de8a8 00:37:39.887 [2024-10-09 11:17:59.711612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:2448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:39.887 [2024-10-09 11:17:59.711628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:37:39.887 [2024-10-09 11:17:59.720106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166f46d0 00:37:39.887 [2024-10-09 11:17:59.721295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:11824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:39.887 [2024-10-09 11:17:59.721312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:37:39.887 [2024-10-09 11:17:59.731412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166e6300 00:37:39.887 [2024-10-09 11:17:59.732625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:3844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:39.887 [2024-10-09 11:17:59.732641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:37:39.887 [2024-10-09 11:17:59.744503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166eb760 00:37:39.887 [2024-10-09 11:17:59.745898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:10447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:39.887 [2024-10-09 11:17:59.745915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:37:39.887 [2024-10-09 11:17:59.758052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166ea680 00:37:39.887 [2024-10-09 11:17:59.760092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:17981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:39.887 [2024-10-09 11:17:59.760109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:37:39.887 [2024-10-09 11:17:59.769903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166f20d8 00:37:39.887 [2024-10-09 11:17:59.771906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:3657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:39.887 [2024-10-09 11:17:59.771922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:37:39.887 [2024-10-09 11:17:59.779504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166f9f68 00:37:39.887 [2024-10-09 11:17:59.780879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:8452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:39.887 [2024-10-09 11:17:59.780895] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:37:39.887 [2024-10-09 11:17:59.792177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166f9f68 00:37:39.887 [2024-10-09 11:17:59.793573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:21815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:39.887 [2024-10-09 11:17:59.793589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:37:39.887 [2024-10-09 11:17:59.804108] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166f9f68 00:37:39.887 [2024-10-09 11:17:59.805491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:12320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:39.887 [2024-10-09 11:17:59.805507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:37:39.887 [2024-10-09 11:17:59.816031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166f9f68 00:37:39.887 [2024-10-09 11:17:59.817404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:20423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:39.887 [2024-10-09 11:17:59.817420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:37:39.887 [2024-10-09 11:17:59.827949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166f9f68 00:37:39.887 [2024-10-09 11:17:59.829324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:15167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:39.887 [2024-10-09 11:17:59.829342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:37:39.887 [2024-10-09 11:17:59.839873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166f9f68 00:37:39.887 [2024-10-09 11:17:59.841254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:15145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:39.887 [2024-10-09 11:17:59.841272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:37:39.887 [2024-10-09 11:17:59.851849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166f9f68 00:37:39.887 [2024-10-09 11:17:59.853246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:3054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:39.887 [2024-10-09 11:17:59.853263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:37:39.887 [2024-10-09 11:17:59.863801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166f9f68 00:37:39.887 [2024-10-09 11:17:59.865184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:25087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:39.887 [2024-10-09 
11:17:59.865200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:37:39.887 [2024-10-09 11:17:59.877254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166f9f68
00:37:39.887 [2024-10-09 11:17:59.879278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:21379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:37:39.887 [2024-10-09 11:17:59.879294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:37:40.147 [2024-10-09 11:17:59.889131] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616290) with pdu=0x2000166e0630
00:37:40.147 21286.00 IOPS, 83.15 MiB/s [2024-10-09T09:18:00.149Z] [2024-10-09 11:17:59.891142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:15824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:37:40.148 [2024-10-09 11:17:59.891157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:37:40.148
00:37:40.148 Latency(us)
00:37:40.148 [2024-10-09T09:18:00.150Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:37:40.148 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:37:40.148 nvme0n1 : 2.00 21309.31 83.24 0.00 0.00 6001.27 2285.44 14342.16
00:37:40.148 [2024-10-09T09:18:00.150Z] ===================================================================================================================
00:37:40.148 [2024-10-09T09:18:00.150Z] Total : 21309.31 83.24 0.00 0.00 6001.27 2285.44 14342.16
00:37:40.148 {
00:37:40.148 "results": [
00:37:40.148 {
00:37:40.148 "job": "nvme0n1",
00:37:40.148 "core_mask": "0x2",
00:37:40.148 "workload": "randwrite",
00:37:40.148 "status": "finished",
00:37:40.148 "queue_depth": 128,
00:37:40.148 "io_size": 4096,
00:37:40.148 "runtime": 2.003819,
00:37:40.148 "iops": 21309.309872797894,
00:37:40.148 "mibps": 83.23949169061677,
00:37:40.148 "io_failed": 0,
00:37:40.148 "io_timeout": 0,
00:37:40.148 "avg_latency_us": 6001.270371538273,
00:37:40.148 "min_latency_us": 2285.439358503174,
00:37:40.148 "max_latency_us": 14342.158369528901
00:37:40.148 }
00:37:40.148 ],
00:37:40.148 "core_count": 1
00:37:40.148 }
00:37:40.148 11:17:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:37:40.148 11:17:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:37:40.148 11:17:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:37:40.148 11:17:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:37:40.148 | .driver_specific
00:37:40.148 | .nvme_error
00:37:40.148 | .status_code
00:37:40.148 | .command_transient_transport_error'
00:37:40.148 11:18:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 167 > 0 ))
00:37:40.148 11:18:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2118596
00:37:40.148 11:18:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 2118596 ']'
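The trace above is where the first error run is judged: get_transient_errcount queries bdevperf's RPC socket for the nvme0n1 bdev's I/O statistics and pulls out the accumulated COMMAND TRANSIENT TRANSPORT ERROR counter, which must be non-zero (here it is 167) for the digest test to pass. A minimal standalone sketch of that check in bash, assuming the same rpc.py path and /var/tmp/bperf.sock socket that appear in the trace:

#!/usr/bin/env bash
# Sketch of the transient-error check traced above; the RPC script path and
# socket are the ones from this log and will differ on another checkout.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
SOCK=/var/tmp/bperf.sock

# bdev_get_iostat reports per-bdev I/O statistics; because bdevperf is set
# up with bdev_nvme_set_options --nvme-error-stat (visible in the setup
# trace further down), the output carries a per-status-code NVMe error
# counter under driver_specific.nvme_error.
errs=$("$RPC" -s "$SOCK" bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0]
           | .driver_specific
           | .nvme_error
           | .status_code
           | .command_transient_transport_error')

# Pass only if the injected CRC corruption actually surfaced as
# COMMAND TRANSIENT TRANSPORT ERROR completions.
(( errs > 0 ))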
00:37:40.148 11:18:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 2118596
00:37:40.148 11:18:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:37:40.148 11:18:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:37:40.148 11:18:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2118596
00:37:40.408 11:18:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:37:40.408 11:18:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:37:40.408 11:18:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2118596'
00:37:40.408 killing process with pid 2118596
00:37:40.408 11:18:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 2118596
00:37:40.408 Received shutdown signal, test time was about 2.000000 seconds
00:37:40.408
00:37:40.408 Latency(us)
00:37:40.408 [2024-10-09T09:18:00.410Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:37:40.408 [2024-10-09T09:18:00.410Z] ===================================================================================================================
00:37:40.408 [2024-10-09T09:18:00.410Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:37:40.409 11:18:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 2118596
00:37:40.409 11:18:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:37:40.409 11:18:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:37:40.409 11:18:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:37:40.409 11:18:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:37:40.409 11:18:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:37:40.409 11:18:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2119301
00:37:40.409 11:18:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2119301 /var/tmp/bperf.sock
00:37:40.409 11:18:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 2119301 ']'
00:37:40.409 11:18:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:37:40.409 11:18:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:37:40.409 11:18:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:37:40.409 11:18:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:37:40.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
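The run that ends here used 4 KiB WRITEs at queue depth 128; the trace that follows tears bperf down and repeats the whole cycle as run_bperf_err randwrite 131072 16, i.e. 128 KiB I/Os at queue depth 16. Condensed into plain commands, the setup it walks through looks roughly like the bash sketch below; paths, socket, and target address are taken from the log, the wait loop is a simplified stand-in for waitforlisten, and the remark about rpc_cmd's destination is an inference from its missing -s flag:

#!/usr/bin/env bash
# Sketch of the second error run's setup as traced below.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
SOCK=/var/tmp/bperf.sock

# Start bdevperf idle (-z: wait for a perform_tests RPC): randwrite,
# 128 KiB I/Os, 2 s runtime, queue depth 16, core mask 0x2.
"$SPDK/build/examples/bdevperf" -m 2 -r "$SOCK" -w randwrite -o 131072 -t 2 -q 16 -z &
while [ ! -S "$SOCK" ]; do sleep 0.1; done  # simplified waitforlisten

# Keep a per-status-code NVMe error count and retry failed I/O forever,
# so digest failures show up as counters rather than aborted jobs.
"$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# rpc_cmd in the trace carries no -s flag, so the two injection calls go
# to the default SPDK RPC socket rather than to bperf.sock.
"$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable

# Attach the TCP target with data digest (--ddgst) enabled while crc32c
# is still healthy, then corrupt the next 32 crc32c operations.
"$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller --ddgst -t tcp \
  -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
"$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 32

# Kick off the timed run; the digest errors that follow are the expected result.
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests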
00:37:40.409 11:18:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:37:40.409 11:18:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:37:40.409 [2024-10-09 11:18:00.309953] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization...
00:37:40.409 [2024-10-09 11:18:00.310012] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2119301 ]
00:37:40.409 I/O size of 131072 is greater than zero copy threshold (65536).
00:37:40.409 Zero copy mechanism will not be used.
00:37:40.669 [2024-10-09 11:18:00.440334] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation.
00:37:40.669 [2024-10-09 11:18:00.486890] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:37:40.669 [2024-10-09 11:18:00.503108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:37:41.240 11:18:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:37:41.240 11:18:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:37:41.240 11:18:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:37:41.240 11:18:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:37:41.501 11:18:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:37:41.501 11:18:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:41.501 11:18:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:37:41.501 11:18:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:41.501 11:18:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:37:41.501 11:18:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:37:41.501 nvme0n1
00:37:41.769 11:18:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:37:41.769 11:18:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:41.769 11:18:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:37:41.769 11:18:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:41.769 11:18:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:37:41.769 11:18:01
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:41.769 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:41.769 Zero copy mechanism will not be used. 00:37:41.769 Running I/O for 2 seconds... 00:37:41.769 [2024-10-09 11:18:01.623387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:41.769 [2024-10-09 11:18:01.623719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.769 [2024-10-09 11:18:01.623747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:41.769 [2024-10-09 11:18:01.635094] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:41.769 [2024-10-09 11:18:01.635353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.769 [2024-10-09 11:18:01.635373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:41.769 [2024-10-09 11:18:01.647508] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:41.769 [2024-10-09 11:18:01.647881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.770 [2024-10-09 11:18:01.647900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:41.770 [2024-10-09 11:18:01.658828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:41.770 [2024-10-09 11:18:01.659152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.770 [2024-10-09 11:18:01.659171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:41.770 [2024-10-09 11:18:01.670384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:41.770 [2024-10-09 11:18:01.670602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.770 [2024-10-09 11:18:01.670621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:41.770 [2024-10-09 11:18:01.681422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:41.770 [2024-10-09 11:18:01.681730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.770 [2024-10-09 11:18:01.681748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:41.770 [2024-10-09 11:18:01.693166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:41.770 [2024-10-09 11:18:01.693522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.770 [2024-10-09 11:18:01.693540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:41.770 [2024-10-09 11:18:01.704332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:41.770 [2024-10-09 11:18:01.704565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.770 [2024-10-09 11:18:01.704583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:41.770 [2024-10-09 11:18:01.716463] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:41.770 [2024-10-09 11:18:01.716830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.770 [2024-10-09 11:18:01.716848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:41.770 [2024-10-09 11:18:01.726085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:41.770 [2024-10-09 11:18:01.726318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.770 [2024-10-09 11:18:01.726336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:41.770 [2024-10-09 11:18:01.733518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:41.770 [2024-10-09 11:18:01.733721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.770 [2024-10-09 11:18:01.733740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:41.770 [2024-10-09 11:18:01.742024] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:41.770 [2024-10-09 11:18:01.742338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.770 [2024-10-09 11:18:01.742356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:41.770 [2024-10-09 11:18:01.750711] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:41.770 [2024-10-09 11:18:01.751017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.770 [2024-10-09 11:18:01.751034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:41.770 [2024-10-09 11:18:01.759507] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:41.770 [2024-10-09 11:18:01.759745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.770 [2024-10-09 11:18:01.759763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:42.055 [2024-10-09 11:18:01.769632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:42.055 [2024-10-09 11:18:01.770042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.055 [2024-10-09 11:18:01.770060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:42.055 [2024-10-09 11:18:01.781455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:42.055 [2024-10-09 11:18:01.781766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.055 [2024-10-09 11:18:01.781788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:42.055 [2024-10-09 11:18:01.792844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:42.055 [2024-10-09 11:18:01.793200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.055 [2024-10-09 11:18:01.793218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:42.055 [2024-10-09 11:18:01.804249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:42.055 [2024-10-09 11:18:01.804469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.055 [2024-10-09 11:18:01.804487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:42.055 [2024-10-09 11:18:01.815334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:42.055 [2024-10-09 11:18:01.815653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.055 [2024-10-09 11:18:01.815672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:42.055 [2024-10-09 11:18:01.826810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:42.055 [2024-10-09 11:18:01.827113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.055 [2024-10-09 11:18:01.827131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
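Every entry in this stream is one injected failure making a round trip: the corrupted accel crc32c result makes the data digest check fail (tcp.c:2233), the affected WRITE is printed, and the command completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22), after which bdev_nvme retries it because of --bdev-retry-count -1. A condensed sketch of the setup RPCs issued earlier in the log, assuming SPDK_ROOT as in the sketches above; the socket routing is inferred from the harness helpers (bperf_rpc targets the bdevperf socket, while rpc_cmd uses the application's default socket):

  bperf_rpc() { "$SPDK_ROOT"/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }
  tgt_rpc() { "$SPDK_ROOT"/scripts/rpc.py "$@"; }  # default socket, assumed /var/tmp/spdk.sock

  bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  tgt_rpc accel_error_inject_error -o crc32c -t disable   # no corruption while attaching
  bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0      # exposes nvme0n1
  tgt_rpc accel_error_inject_error -o crc32c -t corrupt -i 32  # -i 32 as issued above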
00:37:42.055 [2024-10-09 11:18:01.838237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:42.055 [2024-10-09 11:18:01.838473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.055 [2024-10-09 11:18:01.838490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:42.055 [2024-10-09 11:18:01.850111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:42.055 [2024-10-09 11:18:01.850455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.055 [2024-10-09 11:18:01.850476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:42.055 [2024-10-09 11:18:01.862591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:42.055 [2024-10-09 11:18:01.862938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.055 [2024-10-09 11:18:01.862956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:42.055 [2024-10-09 11:18:01.874339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:42.055 [2024-10-09 11:18:01.874670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.055 [2024-10-09 11:18:01.874688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:42.055 [2024-10-09 11:18:01.885337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:42.055 [2024-10-09 11:18:01.885560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.055 [2024-10-09 11:18:01.885577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:42.055 [2024-10-09 11:18:01.897630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:42.055 [2024-10-09 11:18:01.897999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.055 [2024-10-09 11:18:01.898016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:42.055 [2024-10-09 11:18:01.909125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:42.055 [2024-10-09 11:18:01.909499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.055 [2024-10-09 11:18:01.909517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:42.055 [2024-10-09 11:18:01.921079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:42.055 [2024-10-09 11:18:01.921433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.055 [2024-10-09 11:18:01.921451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:42.055 [2024-10-09 11:18:01.932706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:42.055 [2024-10-09 11:18:01.933085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.055 [2024-10-09 11:18:01.933102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:42.055 [2024-10-09 11:18:01.944075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:42.055 [2024-10-09 11:18:01.944315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.055 [2024-10-09 11:18:01.944332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:42.055 [2024-10-09 11:18:01.955489] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:42.055 [2024-10-09 11:18:01.955811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.055 [2024-10-09 11:18:01.955829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:42.055 [2024-10-09 11:18:01.966729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:42.055 [2024-10-09 11:18:01.967069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.055 [2024-10-09 11:18:01.967087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:42.055 [2024-10-09 11:18:01.978270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:42.055 [2024-10-09 11:18:01.978621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.055 [2024-10-09 11:18:01.978640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:42.055 [2024-10-09 11:18:01.990071] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:42.055 [2024-10-09 11:18:01.990366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.055 [2024-10-09 11:18:01.990384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:42.055 [2024-10-09 11:18:02.001240] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:42.055 [2024-10-09 11:18:02.001590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.055 [2024-10-09 11:18:02.001607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:42.055 [2024-10-09 11:18:02.012443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:42.055 [2024-10-09 11:18:02.012711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.055 [2024-10-09 11:18:02.012728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:42.055 [2024-10-09 11:18:02.023553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:42.055 [2024-10-09 11:18:02.023645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.055 [2024-10-09 11:18:02.023660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:42.055 [2024-10-09 11:18:02.035214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:42.055 [2024-10-09 11:18:02.035547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.055 [2024-10-09 11:18:02.035565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:42.359 [2024-10-09 11:18:02.046534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:42.359 [2024-10-09 11:18:02.046886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.359 [2024-10-09 11:18:02.046904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:42.359 [2024-10-09 11:18:02.057834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:42.359 [2024-10-09 11:18:02.058199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.359 [2024-10-09 11:18:02.058216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:42.359 [2024-10-09 11:18:02.069502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:42.359 [2024-10-09 11:18:02.069746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.359 [2024-10-09 11:18:02.069762] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:42.359 [2024-10-09 11:18:02.081170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:42.359 [2024-10-09 11:18:02.081514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.359 [2024-10-09 11:18:02.081535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:42.359 [2024-10-09 11:18:02.092477] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:42.359 [2024-10-09 11:18:02.092899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.359 [2024-10-09 11:18:02.092916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:42.359 [2024-10-09 11:18:02.103763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:42.359 [2024-10-09 11:18:02.104100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.359 [2024-10-09 11:18:02.104118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:42.359 [2024-10-09 11:18:02.115675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:42.359 [2024-10-09 11:18:02.116045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.359 [2024-10-09 11:18:02.116063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:42.359 [2024-10-09 11:18:02.127133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:42.359 [2024-10-09 11:18:02.127377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.359 [2024-10-09 11:18:02.127393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:42.359 [2024-10-09 11:18:02.138450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:42.359 [2024-10-09 11:18:02.138820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.359 [2024-10-09 11:18:02.138838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:42.359 [2024-10-09 11:18:02.146123] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:42.359 [2024-10-09 11:18:02.146423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.359 
[2024-10-09 11:18:02.146440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:42.359 [2024-10-09 11:18:02.154714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:42.359 [2024-10-09 11:18:02.155048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.359 [2024-10-09 11:18:02.155066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:42.360 [2024-10-09 11:18:02.161980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:42.360 [2024-10-09 11:18:02.162181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.360 [2024-10-09 11:18:02.162197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:42.360 [2024-10-09 11:18:02.170765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:42.360 [2024-10-09 11:18:02.171112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.360 [2024-10-09 11:18:02.171129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:42.360 [2024-10-09 11:18:02.179481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:42.360 [2024-10-09 11:18:02.179728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.360 [2024-10-09 11:18:02.179745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:42.360 [2024-10-09 11:18:02.190479] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:42.360 [2024-10-09 11:18:02.190784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.360 [2024-10-09 11:18:02.190802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:42.360 [2024-10-09 11:18:02.199769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:42.360 [2024-10-09 11:18:02.200022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.360 [2024-10-09 11:18:02.200039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:42.360 [2024-10-09 11:18:02.206744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:42.360 [2024-10-09 11:18:02.206959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.360 [2024-10-09 11:18:02.206976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:42.360 [2024-10-09 11:18:02.214281] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:42.360 [2024-10-09 11:18:02.214587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.360 [2024-10-09 11:18:02.214604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:42.360 [2024-10-09 11:18:02.221961] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:42.360 [2024-10-09 11:18:02.222160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.360 [2024-10-09 11:18:02.222177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:42.360 [2024-10-09 11:18:02.228698] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:42.360 [2024-10-09 11:18:02.228900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.360 [2024-10-09 11:18:02.228917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:42.360 [2024-10-09 11:18:02.233569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:42.360 [2024-10-09 11:18:02.233771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.360 [2024-10-09 11:18:02.233788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:42.360 [2024-10-09 11:18:02.238049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:42.360 [2024-10-09 11:18:02.238254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.360 [2024-10-09 11:18:02.238271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:42.360 [2024-10-09 11:18:02.242150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:42.360 [2024-10-09 11:18:02.242354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.360 [2024-10-09 11:18:02.242371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:42.360 [2024-10-09 11:18:02.246118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:42.360 [2024-10-09 11:18:02.246412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.360 [2024-10-09 11:18:02.246429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:42.360 [2024-10-09 11:18:02.252432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:42.360 [2024-10-09 11:18:02.252773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.360 [2024-10-09 11:18:02.252791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:42.360 [2024-10-09 11:18:02.257801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:42.360 [2024-10-09 11:18:02.258129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.360 [2024-10-09 11:18:02.258147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:42.360 [2024-10-09 11:18:02.264373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:42.360 [2024-10-09 11:18:02.264580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.360 [2024-10-09 11:18:02.264597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:42.360 [2024-10-09 11:18:02.268823] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:42.360 [2024-10-09 11:18:02.269024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.360 [2024-10-09 11:18:02.269041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:42.360 [2024-10-09 11:18:02.274276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:42.360 [2024-10-09 11:18:02.274497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.360 [2024-10-09 11:18:02.274514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:42.360 [2024-10-09 11:18:02.278564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:42.360 [2024-10-09 11:18:02.278760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.360 [2024-10-09 11:18:02.278780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:42.360 [2024-10-09 11:18:02.282796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:42.360 [2024-10-09 11:18:02.282998] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.360 [2024-10-09 11:18:02.283016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:42.360 [2024-10-09 11:18:02.287928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:42.360 [2024-10-09 11:18:02.288129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.360 [2024-10-09 11:18:02.288146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:42.360 [2024-10-09 11:18:02.293049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:42.360 [2024-10-09 11:18:02.293361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.360 [2024-10-09 11:18:02.293379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:42.360 [2024-10-09 11:18:02.297825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:42.360 [2024-10-09 11:18:02.298131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.360 [2024-10-09 11:18:02.298149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:42.360 [2024-10-09 11:18:02.303664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:42.360 [2024-10-09 11:18:02.304041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.360 [2024-10-09 11:18:02.304058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:42.360 [2024-10-09 11:18:02.309512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:42.360 [2024-10-09 11:18:02.309848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.360 [2024-10-09 11:18:02.309866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:42.360 [2024-10-09 11:18:02.316859] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:42.360 [2024-10-09 11:18:02.317194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.360 [2024-10-09 11:18:02.317212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:42.360 [2024-10-09 11:18:02.323753] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:42.360 
[2024-10-09 11:18:02.323956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.360 [2024-10-09 11:18:02.323972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:42.360 [2024-10-09 11:18:02.331633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:42.360 [2024-10-09 11:18:02.331957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.360 [2024-10-09 11:18:02.331974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:42.624 [2024-10-09 11:18:02.340676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:42.624 [2024-10-09 11:18:02.341002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.624 [2024-10-09 11:18:02.341020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:42.624 [2024-10-09 11:18:02.348184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:42.624 [2024-10-09 11:18:02.348415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.624 [2024-10-09 11:18:02.348432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:42.624 [2024-10-09 11:18:02.355512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:42.624 [2024-10-09 11:18:02.355763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.624 [2024-10-09 11:18:02.355781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:42.624 [2024-10-09 11:18:02.361958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:42.624 [2024-10-09 11:18:02.362183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.624 [2024-10-09 11:18:02.362200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:42.624 [2024-10-09 11:18:02.366097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:42.624 [2024-10-09 11:18:02.366300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.624 [2024-10-09 11:18:02.366317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:42.624 [2024-10-09 11:18:02.373628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:42.624 [2024-10-09 11:18:02.373972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.624 [2024-10-09 11:18:02.373989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:42.624 [2024-10-09 11:18:02.381065] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:42.624 [2024-10-09 11:18:02.381266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.624 [2024-10-09 11:18:02.381282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:42.624 [2024-10-09 11:18:02.389308] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:42.624 [2024-10-09 11:18:02.389661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.624 [2024-10-09 11:18:02.389682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:42.624 [2024-10-09 11:18:02.396884] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:42.624 [2024-10-09 11:18:02.397098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.624 [2024-10-09 11:18:02.397114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:42.624 [2024-10-09 11:18:02.403504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:42.624 [2024-10-09 11:18:02.403734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.624 [2024-10-09 11:18:02.403751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:42.624 [2024-10-09 11:18:02.410243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:42.624 [2024-10-09 11:18:02.410445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.624 [2024-10-09 11:18:02.410461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:42.624 [2024-10-09 11:18:02.417406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:42.624 [2024-10-09 11:18:02.417755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.624 [2024-10-09 11:18:02.417772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:42.624 [2024-10-09 11:18:02.422945] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:42.624 [2024-10-09 11:18:02.423278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.624 [2024-10-09 11:18:02.423296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:42.624 [2024-10-09 11:18:02.429649] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:42.624 [2024-10-09 11:18:02.429853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.624 [2024-10-09 11:18:02.429870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:42.624 [2024-10-09 11:18:02.436279] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:42.624 [2024-10-09 11:18:02.436585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.624 [2024-10-09 11:18:02.436604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:42.624 [2024-10-09 11:18:02.442301] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:42.624 [2024-10-09 11:18:02.442516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.624 [2024-10-09 11:18:02.442533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:42.624 [2024-10-09 11:18:02.450474] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:42.624 [2024-10-09 11:18:02.450794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.624 [2024-10-09 11:18:02.450812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:42.624 [2024-10-09 11:18:02.459244] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:42.624 [2024-10-09 11:18:02.459601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.624 [2024-10-09 11:18:02.459619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:42.624 [2024-10-09 11:18:02.466664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:42.624 [2024-10-09 11:18:02.466966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.624 [2024-10-09 11:18:02.466983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
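Because the corrupted WRITEs are retried rather than failed up the stack, this stream repeats for the whole 2-second run, just as io_failed still ended at 0 for the first pass above. If a copy of this console output is saved to a file (the filename here is hypothetical), the two sides of each failure can be tallied and compared; the counts should track each other closely:

  # Each digest failure should pair with a transient transport completion.
  grep -c 'data_crc32_calc_done: \*ERROR\*: Data digest error' console.log
  grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' console.log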
00:37:42.624 [2024-10-09 11:18:02.472629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90
00:37:42.624 [2024-10-09 11:18:02.472873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:42.624 [2024-10-09 11:18:02.472900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... the same three-line pattern repeats from 11:18:02.478 through 11:18:02.603: data_crc32_calc_done reports a data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90, the offending 32-block WRITE (sqid:1 cid:15, varying lba) is printed, and each command completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22) p:0 m:0 dnr:0 ...]
3488.00 IOPS, 436.00 MiB/s [2024-10-09T09:18:02.627Z]
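The repeated data_crc32_calc_done errors come from the NVMe/TCP data digest (DDGST) check: when digests are negotiated, the receiver recomputes a CRC32C over each data PDU payload and compares it against the digest carried in the PDU trailer, and a mismatch fails the command as a transport error since the payload was corrupted in flight. A minimal sketch of what that check amounts to, assuming nothing about SPDK's internals (crc32c_sw and ddgst_ok are illustrative names, not SPDK APIs):

```c
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Software CRC32C (Castagnoli), reflected polynomial 0x82F63B78, the
 * algorithm NVMe/TCP uses for header and data digests. */
static uint32_t crc32c_sw(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int k = 0; k < 8; k++)
            crc = (crc & 1) ? (crc >> 1) ^ 0x82F63B78u : crc >> 1;
    }
    return crc ^ 0xFFFFFFFFu;
}

/* Hypothetical equivalent of the receive-path check: recompute the
 * digest over the PDU payload and compare with the DDGST trailer. */
static int ddgst_ok(const uint8_t *payload, size_t len, uint32_t ddgst)
{
    return crc32c_sw(payload, len) == ddgst;
}

int main(void)
{
    uint8_t payload[] = "123456789";     /* standard CRC check vector */
    uint32_t good = crc32c_sw(payload, 9);

    printf("crc32c = 0x%08x\n", good);   /* expected: 0xe3069283 */
    printf("intact pdu:    %s\n", ddgst_ok(payload, 9, good) ? "ok" : "digest error");
    payload[0] ^= 0xFF;                  /* simulate corruption in flight */
    printf("corrupted pdu: %s\n", ddgst_ok(payload, 9, good) ? "ok" : "digest error");
    return 0;
}
```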
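The completion lines can be read against the NVMe completion-queue-entry status layout: (00/22) is status code type 0x0 (generic) with status code 0x22 (Transient Transport Error), and dnr:0 marks the command as retryable. A small decoder for the packed phase+status word, assuming the standard CQE DW3 bit layout (struct and function names are illustrative, not SPDK's):

```c
#include <stdint.h>
#include <stdio.h>

/* CQE status fields as printed in the log (NVMe base spec: completion
 * queue entry DW3 bits 31:17 are the status field, bit 16 the phase). */
struct cqe_status {
    unsigned p   : 1;  /* phase tag                    */
    unsigned sc  : 8;  /* status code                  */
    unsigned sct : 3;  /* status code type             */
    unsigned crd : 2;  /* command retry delay          */
    unsigned m   : 1;  /* more status info in log page */
    unsigned dnr : 1;  /* do not retry                 */
};

static struct cqe_status decode_status(uint16_t w)
{
    struct cqe_status s = {
        .p   =  w        & 0x1,
        .sc  = (w >> 1)  & 0xFF,
        .sct = (w >> 9)  & 0x7,
        .crd = (w >> 12) & 0x3,
        .m   = (w >> 14) & 0x1,
        .dnr = (w >> 15) & 0x1,
    };
    return s;
}

int main(void)
{
    /* sct=0x00 (generic), sc=0x22 (Transient Transport Error), dnr=0 */
    uint16_t w = (0x00u << 9) | (0x22u << 1);
    struct cqe_status s = decode_status(w);

    printf("(%02x/%02x) p:%u m:%u dnr:%u -> %s\n",
           s.sct, s.sc, s.p, s.m, s.dnr,
           (s.sct == 0x0 && s.sc == 0x22) ? "TRANSIENT TRANSPORT ERROR" : "other");
    return 0;
}
```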
[... the digest errors and transient-transport-error completions continue in the same pattern from 11:18:02.614 through 11:18:03.355, still on tqpair=(0x1616430) with pdu=0x2000166fef90; from 11:18:03.146 onward the failing 32-block WRITEs are reported with cid:0 instead of cid:15, and every completion remains COMMAND TRANSIENT TRANSPORT ERROR (00/22) p:0 m:0 dnr:0 ...]
00:37:43.416 [2024-10-09 11:18:03.360174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90
00:37:43.416 [2024-10-09 11:18:03.360255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:43.416 [2024-10-09 11:18:03.360271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:37:43.416 [2024-10-09 11:18:03.366160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90
00:37:43.416 [2024-10-09 11:18:03.366397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.416 [2024-10-09 11:18:03.366413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:43.416 [2024-10-09 11:18:03.373657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:43.416 [2024-10-09 11:18:03.373926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.416 [2024-10-09 11:18:03.373942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:43.416 [2024-10-09 11:18:03.378498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:43.416 [2024-10-09 11:18:03.378634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.416 [2024-10-09 11:18:03.378650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:43.416 [2024-10-09 11:18:03.383529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:43.416 [2024-10-09 11:18:03.383825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.416 [2024-10-09 11:18:03.383842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:43.416 [2024-10-09 11:18:03.393506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:43.416 [2024-10-09 11:18:03.393706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.417 [2024-10-09 11:18:03.393723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:43.417 [2024-10-09 11:18:03.403601] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:43.417 [2024-10-09 11:18:03.403893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.417 [2024-10-09 11:18:03.403910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:43.417 [2024-10-09 11:18:03.413794] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:43.417 [2024-10-09 11:18:03.413987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.417 [2024-10-09 11:18:03.414003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:43.678 [2024-10-09 11:18:03.424408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:43.678 [2024-10-09 11:18:03.424731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.678 [2024-10-09 11:18:03.424748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:43.678 [2024-10-09 11:18:03.434431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:43.678 [2024-10-09 11:18:03.434706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.678 [2024-10-09 11:18:03.434723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:43.678 [2024-10-09 11:18:03.444561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:43.678 [2024-10-09 11:18:03.444833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.678 [2024-10-09 11:18:03.444850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:43.678 [2024-10-09 11:18:03.454723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:43.678 [2024-10-09 11:18:03.454985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.678 [2024-10-09 11:18:03.455002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:43.678 [2024-10-09 11:18:03.464948] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:43.678 [2024-10-09 11:18:03.465011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.678 [2024-10-09 11:18:03.465026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:43.678 [2024-10-09 11:18:03.475160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:43.678 [2024-10-09 11:18:03.475259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.678 [2024-10-09 11:18:03.475276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:43.678 [2024-10-09 11:18:03.485738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:43.678 [2024-10-09 11:18:03.485919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.678 [2024-10-09 11:18:03.485936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:43.678 [2024-10-09 11:18:03.496218] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:43.678 [2024-10-09 11:18:03.496486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.678 [2024-10-09 11:18:03.496502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:43.678 [2024-10-09 11:18:03.507113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:43.678 [2024-10-09 11:18:03.507414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.678 [2024-10-09 11:18:03.507430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:43.678 [2024-10-09 11:18:03.516850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:43.678 [2024-10-09 11:18:03.517061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.678 [2024-10-09 11:18:03.517078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:43.678 [2024-10-09 11:18:03.527028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:43.678 [2024-10-09 11:18:03.527300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.678 [2024-10-09 11:18:03.527317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:43.678 [2024-10-09 11:18:03.536926] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:43.678 [2024-10-09 11:18:03.537190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.678 [2024-10-09 11:18:03.537207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:43.678 [2024-10-09 11:18:03.547025] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:43.678 [2024-10-09 11:18:03.547126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.678 [2024-10-09 11:18:03.547143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:43.678 [2024-10-09 11:18:03.555975] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:43.678 [2024-10-09 11:18:03.556233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.678 [2024-10-09 11:18:03.556249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:43.678 [2024-10-09 11:18:03.562937] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:43.678 [2024-10-09 11:18:03.563167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.678 [2024-10-09 11:18:03.563183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:43.678 [2024-10-09 11:18:03.566582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:43.678 [2024-10-09 11:18:03.566638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.678 [2024-10-09 11:18:03.566653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:43.678 [2024-10-09 11:18:03.569866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:43.678 [2024-10-09 11:18:03.569942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.678 [2024-10-09 11:18:03.569961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:43.678 [2024-10-09 11:18:03.573146] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:43.678 [2024-10-09 11:18:03.573207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.678 [2024-10-09 11:18:03.573222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:43.678 [2024-10-09 11:18:03.576432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:43.678 [2024-10-09 11:18:03.576498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.679 [2024-10-09 11:18:03.576514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:43.679 [2024-10-09 11:18:03.579731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:43.679 [2024-10-09 11:18:03.579790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.679 [2024-10-09 11:18:03.579805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:43.679 [2024-10-09 11:18:03.582991] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:43.679 [2024-10-09 11:18:03.583071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.679 [2024-10-09 11:18:03.583088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:43.679 
[2024-10-09 11:18:03.586231] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:43.679 [2024-10-09 11:18:03.586293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.679 [2024-10-09 11:18:03.586309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:43.679 [2024-10-09 11:18:03.589449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:43.679 [2024-10-09 11:18:03.589520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.679 [2024-10-09 11:18:03.589536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:43.679 [2024-10-09 11:18:03.592676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:43.679 [2024-10-09 11:18:03.592738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.679 [2024-10-09 11:18:03.592755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:43.679 [2024-10-09 11:18:03.596346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:43.679 [2024-10-09 11:18:03.596435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.679 [2024-10-09 11:18:03.596452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:43.679 [2024-10-09 11:18:03.599580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:43.679 [2024-10-09 11:18:03.599647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.679 [2024-10-09 11:18:03.599663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:43.679 [2024-10-09 11:18:03.602819] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:43.679 [2024-10-09 11:18:03.602887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.679 [2024-10-09 11:18:03.602904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:43.679 [2024-10-09 11:18:03.606026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:43.679 [2024-10-09 11:18:03.606082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.679 [2024-10-09 11:18:03.606098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:37:43.679 [2024-10-09 11:18:03.609246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1616430) with pdu=0x2000166fef90 00:37:43.679 [2024-10-09 11:18:03.609309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:43.679 [2024-10-09 11:18:03.609324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:43.679 4188.50 IOPS, 523.56 MiB/s 00:37:43.679 Latency(us) 00:37:43.679 [2024-10-09T09:18:03.681Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:43.679 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:37:43.679 nvme0n1 : 2.00 4190.23 523.78 0.00 0.00 3813.14 1395.90 12207.26 00:37:43.679 [2024-10-09T09:18:03.681Z] =================================================================================================================== 00:37:43.679 [2024-10-09T09:18:03.681Z] Total : 4190.23 523.78 0.00 0.00 3813.14 1395.90 12207.26 00:37:43.679 { 00:37:43.679 "results": [ 00:37:43.679 { 00:37:43.679 "job": "nvme0n1", 00:37:43.679 "core_mask": "0x2", 00:37:43.679 "workload": "randwrite", 00:37:43.679 "status": "finished", 00:37:43.679 "queue_depth": 16, 00:37:43.679 "io_size": 131072, 00:37:43.679 "runtime": 2.004185, 00:37:43.679 "iops": 4190.231939666249, 00:37:43.679 "mibps": 523.7789924582811, 00:37:43.679 "io_failed": 0, 00:37:43.679 "io_timeout": 0, 00:37:43.679 "avg_latency_us": 3813.1386126252996, 00:37:43.679 "min_latency_us": 1395.8970932175075, 00:37:43.679 "max_latency_us": 12207.256932843302 00:37:43.679 } 00:37:43.679 ], 00:37:43.679 "core_count": 1 00:37:43.679 } 00:37:43.679 11:18:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:37:43.679 11:18:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:37:43.679 11:18:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:37:43.679 | .driver_specific 00:37:43.679 | .nvme_error 00:37:43.679 | .status_code 00:37:43.679 | .command_transient_transport_error' 00:37:43.679 11:18:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:37:43.940 11:18:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 270 > 0 )) 00:37:43.940 11:18:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2119301 00:37:43.940 11:18:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 2119301 ']' 00:37:43.940 11:18:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 2119301 00:37:43.940 11:18:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:37:43.940 11:18:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:43.940 11:18:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2119301 00:37:43.940 11:18:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:37:43.940 11:18:03 
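The @71/@27/@28 records above are where digest.sh turns the injected CRC failures into a pass/fail signal: it reads the bdev's NVMe error counters over the bperf RPC socket and asserts that the transient-transport-error count is non-zero (270 here). A minimal sketch of that query, reusing the rpc.py path, socket, and jq filter shown in the trace; the function body is a reconstruction, not a copy of digest.sh:

get_transient_errcount() {
    # bdev_get_iostat reports per-bdev NVMe error counters under driver_specific
    local bdev=$1
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_get_iostat -b "$bdev" \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
}

errcount=$(get_transient_errcount nvme0n1)
(( errcount > 0 ))   # the digest-error test passes only if digest failures actually occurred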
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:37:43.940 11:18:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2119301'
killing process with pid 2119301
11:18:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 2119301
Received shutdown signal, test time was about 2.000000 seconds
00:37:43.940
00:37:43.940 Latency(us)
[2024-10-09T09:18:03.942Z] Device Information          : runtime(s)   IOPS   MiB/s   Fail/s   TO/s   Average   min    max
[2024-10-09T09:18:03.942Z] ===================================================================================================================
[2024-10-09T09:18:03.942Z] Total                       :              0.00   0.00    0.00     0.00   0.00      0.00   0.00
00:37:43.940 11:18:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 2119301
11:18:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 2116878
11:18:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 2116878 ']'
11:18:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 2116878
11:18:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
11:18:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
11:18:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2116878
11:18:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0
11:18:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
11:18:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2116878'
killing process with pid 2116878
11:18:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 2116878
11:18:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 2116878
11:18:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 2116878
00:37:44.200
00:37:44.200 real 0m16.580s
00:37:44.200 user 0m32.380s
00:37:44.200 sys 0m3.532s
11:18:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable
00:37:44.200 11:18:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:37:44.200 ************************************
00:37:44.200 END TEST nvmf_digest_error
00:37:44.200 ************************************
00:37:44.201 11:18:04 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT
11:18:04 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini
11:18:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@514 -- # nvmfcleanup
11:18:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync
11:18:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
11:18:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e
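The @950-@974 records above trace autotest_common.sh's killprocess helper twice (pids 2119301 and 2116878): verify the pid argument, probe liveness with kill -0, resolve the command name via ps, refuse to kill a sudo wrapper, then kill and reap. A sketch reconstructed from the trace alone; the real helper body and its return codes may differ:

killprocess() {
    local pid=$1 process_name
    [ -z "$pid" ] && return 1                 # @950: reject an empty pid argument
    if ! kill -0 "$pid" 2>/dev/null; then     # @954: probe whether it is still alive
        echo "Process with pid $pid is not found"   # @977 branch, seen later in this log
        return 0
    fi
    if [ "$(uname)" = Linux ]; then           # @955
        process_name=$(ps --no-headers -o comm= "$pid")   # @956: resolve command name
    fi
    [ "$process_name" = sudo ] && return 1    # @960: never kill a sudo wrapper directly
    echo "killing process with pid $pid"      # @968
    kill "$pid"                               # @969: polite SIGTERM first
    wait "$pid"                               # @974: reap the child, surface its exit status
}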
11:18:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20}
11:18:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:37:44.201 rmmod nvme_tcp
00:37:44.461 rmmod nvme_fabrics
00:37:44.461 rmmod nvme_keyring
11:18:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
11:18:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e
11:18:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0
11:18:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@515 -- # '[' -n 2116878 ']'
11:18:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # killprocess 2116878
11:18:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 2116878 ']'
11:18:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 2116878
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2116878) - No such process
11:18:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 2116878 is not found'
Process with pid 2116878 is not found
11:18:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # '[' '' == iso ']'
11:18:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
11:18:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@522 -- # nvmf_tcp_fini
11:18:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr
11:18:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # iptables-save
11:18:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
11:18:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # iptables-restore
11:18:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
11:18:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns
11:18:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
11:18:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
11:18:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:37:46.374 11:18:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:37:46.374
00:37:46.374 real 0m43.352s
00:37:46.374 user 1m7.171s
00:37:46.374 sys 0m12.805s
11:18:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable
00:37:46.374 11:18:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:37:46.374 ************************************
00:37:46.374 END TEST nvmf_digest
00:37:46.374 ************************************
00:37:46.635 11:18:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]]
11:18:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]]
11:18:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]]
11:18:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
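Before the next test starts, the nvmftestfini teardown just traced is worth condensing: module unload is retried under set +e, the SPDK_NVMF-tagged iptables rules are stripped by round-tripping the ruleset, the target namespace is removed, and the initiator address is flushed. Roughly, using only commands that appear in the log; the body of _remove_spdk_ns is hidden by the xtrace redirect, so the netns delete below is an assumption:

set +e
for i in {1..20}; do                     # @125-@126: retry until the module lets go
    modprobe -v -r nvme-tcp && break     # also unloads nvme_fabrics / nvme_keyring
done
modprobe -v -r nvme-fabrics              # @127
set -e                                   # @128

iptables-save | grep -v SPDK_NVMF | iptables-restore   # @789: drop only SPDK's rules

ip netns delete cvl_0_0_ns_spdk          # assumed body of _remove_spdk_ns (@302)
ip -4 addr flush cvl_0_1                 # @303: clear the initiator interface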
nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:37:46.635 11:18:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:46.635 11:18:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:37:46.635 ************************************ 00:37:46.635 START TEST nvmf_bdevperf 00:37:46.635 ************************************ 00:37:46.635 11:18:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:37:46.635 * Looking for test storage... 00:37:46.635 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:37:46.635 11:18:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:37:46.635 11:18:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lcov --version 00:37:46.635 11:18:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:37:46.635 11:18:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:37:46.635 11:18:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:46.635 11:18:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:46.635 11:18:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:46.635 11:18:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:37:46.635 11:18:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:37:46.635 11:18:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:37:46.636 11:18:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:37:46.636 11:18:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:37:46.636 11:18:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:37:46.636 11:18:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:37:46.636 11:18:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:46.636 11:18:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:37:46.636 11:18:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:37:46.636 11:18:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:46.636 11:18:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:46.636 11:18:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:37:46.636 11:18:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:37:46.636 11:18:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:46.636 11:18:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:37:46.636 11:18:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:37:46.636 11:18:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:37:46.636 11:18:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:37:46.636 11:18:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:46.636 11:18:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:37:46.897 11:18:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:37:46.897 11:18:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:46.897 11:18:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:46.897 11:18:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:37:46.897 11:18:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:46.897 11:18:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:37:46.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:46.897 --rc genhtml_branch_coverage=1 00:37:46.897 --rc genhtml_function_coverage=1 00:37:46.897 --rc genhtml_legend=1 00:37:46.897 --rc geninfo_all_blocks=1 00:37:46.897 --rc geninfo_unexecuted_blocks=1 00:37:46.897 00:37:46.897 ' 00:37:46.897 11:18:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:37:46.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:46.897 --rc genhtml_branch_coverage=1 00:37:46.897 --rc genhtml_function_coverage=1 00:37:46.897 --rc genhtml_legend=1 00:37:46.897 --rc geninfo_all_blocks=1 00:37:46.897 --rc geninfo_unexecuted_blocks=1 00:37:46.897 00:37:46.897 ' 00:37:46.897 11:18:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:37:46.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:46.897 --rc genhtml_branch_coverage=1 00:37:46.897 --rc genhtml_function_coverage=1 00:37:46.897 --rc genhtml_legend=1 00:37:46.897 --rc geninfo_all_blocks=1 00:37:46.897 --rc geninfo_unexecuted_blocks=1 00:37:46.897 00:37:46.897 ' 00:37:46.897 11:18:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:37:46.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:46.897 --rc genhtml_branch_coverage=1 00:37:46.897 --rc genhtml_function_coverage=1 00:37:46.897 --rc genhtml_legend=1 00:37:46.897 --rc geninfo_all_blocks=1 00:37:46.897 --rc geninfo_unexecuted_blocks=1 00:37:46.897 00:37:46.897 ' 00:37:46.897 11:18:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:46.897 11:18:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:37:46.897 11:18:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:46.897 11:18:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:46.897 11:18:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:46.897 11:18:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:46.897 11:18:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:46.897 11:18:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:46.897 11:18:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:46.897 11:18:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:46.897 11:18:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:46.897 11:18:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:46.897 11:18:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:37:46.897 11:18:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:37:46.897 11:18:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:46.897 11:18:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:46.897 11:18:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:46.897 11:18:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:46.897 11:18:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:46.897 11:18:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:37:46.897 11:18:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:46.897 11:18:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:46.897 11:18:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:46.897 11:18:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:46.898 11:18:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:46.898 11:18:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:46.898 11:18:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:37:46.898 11:18:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:46.898 11:18:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:37:46.898 11:18:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:46.898 11:18:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:46.898 11:18:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:46.898 11:18:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:46.898 11:18:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:46.898 11:18:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:46.898 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:46.898 11:18:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:46.898 11:18:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:46.898 11:18:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:46.898 11:18:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:46.898 11:18:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:46.898 11:18:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:37:46.898 11:18:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@467 -- # '[' -z tcp ']' 00:37:46.898 11:18:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:46.898 11:18:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # prepare_net_devs 00:37:46.898 11:18:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:37:46.898 11:18:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:37:46.898 11:18:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:46.898 11:18:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:46.898 11:18:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:46.898 11:18:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:37:46.898 11:18:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:37:46.898 11:18:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:37:46.898 11:18:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:55.035 11:18:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:55.035 11:18:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:37:55.035 11:18:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:55.035 11:18:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:55.035 11:18:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:55.035 11:18:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:55.035 11:18:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:55.035 11:18:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:37:55.035 11:18:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:55.035 11:18:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:37:55.035 11:18:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:37:55.035 11:18:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:37:55.035 11:18:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:37:55.035 11:18:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:37:55.035 11:18:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:37:55.035 11:18:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:55.035 11:18:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:55.035 11:18:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:55.035 11:18:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:55.035 11:18:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:55.035 11:18:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:55.035 11:18:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:55.035 11:18:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:55.035 11:18:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:55.035 11:18:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:55.035 11:18:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:55.036 11:18:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:55.036 11:18:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:55.036 11:18:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:55.036 11:18:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:55.036 11:18:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:55.036 11:18:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:55.036 11:18:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:55.036 11:18:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:55.036 11:18:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:37:55.036 Found 0000:31:00.0 (0x8086 - 0x159b) 00:37:55.036 11:18:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:55.036 11:18:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:55.036 11:18:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:55.036 11:18:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:55.036 11:18:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:55.036 11:18:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:55.036 11:18:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:37:55.036 Found 0000:31:00.1 (0x8086 - 0x159b) 00:37:55.036 11:18:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:55.036 11:18:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:55.036 11:18:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:55.036 11:18:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:55.036 11:18:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:55.036 11:18:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:55.036 11:18:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:55.036 11:18:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:55.036 11:18:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:37:55.036 11:18:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:55.036 11:18:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 
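One genuine script error was captured a few records back, while nvmf/common.sh was being sourced at 11:18:06: line 33 evaluates '[' '' -eq 1 ']', and test(1)'s -eq requires integers, so an unset flag variable makes every source of the file print "[: : integer expression expected". The guard still behaves as intended only because a failed test is treated as false. A sketch of the failure and a defensive rewrite; the flag's real name is not visible in the log, so SOME_FLAG is a stand-in:

unset SOME_FLAG                          # stand-in; the actual variable at line 33 is not shown
[ "$SOME_FLAG" -eq 1 ] && echo enabled   # prints "[: : integer expression expected" (status 2)

[ "${SOME_FLAG:-0}" -eq 1 ] && echo enabled   # defaulting to 0 keeps the numeric test quiet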
00:37:55.036 11:18:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}"
11:18:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ up == up ]]
11:18:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@420 -- # (( 1 == 0 ))
11:18:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
11:18:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0'
Found net devices under 0000:31:00.0: cvl_0_0
11:18:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}")
11:18:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}"
11:18:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
11:18:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]]
11:18:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}"
11:18:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ up == up ]]
11:18:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@420 -- # (( 1 == 0 ))
11:18:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
11:18:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1'
Found net devices under 0000:31:00.1: cvl_0_1
11:18:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}")
11:18:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@430 -- # (( 2 == 0 ))
11:18:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # is_hw=yes
11:18:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ yes == yes ]]
11:18:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@443 -- # [[ tcp == tcp ]]
11:18:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # nvmf_tcp_init
11:18:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
11:18:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
11:18:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
11:18:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
11:18:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 ))
11:18:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
11:18:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
11:18:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
11:18:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
11:18:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
11:18:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
11:18:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
11:18:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
11:18:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
11:18:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
11:18:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
11:18:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
11:18:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
11:18:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
11:18:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
11:18:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
11:18:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
11:18:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.615 ms
--- 10.0.0.2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.615/0.615/0.615/0.000 ms
00:37:55.036 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:37:55.036 00:37:55.036 --- 10.0.0.1 ping statistics --- 00:37:55.036 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:55.036 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:37:55.036 11:18:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:55.036 11:18:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@448 -- # return 0 00:37:55.036 11:18:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:37:55.036 11:18:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:55.036 11:18:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:37:55.036 11:18:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:37:55.036 11:18:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:55.036 11:18:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:37:55.036 11:18:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:37:55.036 11:18:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:37:55.036 11:18:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:37:55.036 11:18:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:37:55.036 11:18:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:55.036 11:18:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:55.036 11:18:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # nvmfpid=2124911 00:37:55.036 11:18:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # waitforlisten 2124911 00:37:55.036 11:18:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:37:55.036 11:18:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 2124911 ']' 00:37:55.036 11:18:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:55.036 11:18:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:55.036 11:18:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:55.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:55.036 11:18:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:55.036 11:18:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:55.036 [2024-10-09 11:18:14.121797] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:37:55.036 [2024-10-09 11:18:14.121848] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:55.036 [2024-10-09 11:18:14.258967] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:37:55.036 [2024-10-09 11:18:14.306683] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:55.036 [2024-10-09 11:18:14.325027] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:55.036 [2024-10-09 11:18:14.325063] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:55.036 [2024-10-09 11:18:14.325071] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:55.036 [2024-10-09 11:18:14.325079] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:55.036 [2024-10-09 11:18:14.325085] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:55.036 [2024-10-09 11:18:14.326428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:55.036 [2024-10-09 11:18:14.326583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:55.036 [2024-10-09 11:18:14.326676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:55.036 11:18:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:55.036 11:18:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:37:55.037 11:18:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:37:55.037 11:18:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:55.037 11:18:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:55.037 11:18:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:55.037 11:18:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:55.037 11:18:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:55.037 11:18:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:55.037 [2024-10-09 11:18:15.015694] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:55.037 11:18:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:55.037 11:18:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:55.037 11:18:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:55.037 11:18:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:55.297 Malloc0 00:37:55.297 11:18:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:55.297 11:18:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:55.297 11:18:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:55.297 11:18:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:55.297 11:18:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:55.297 11:18:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:55.297 11:18:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:37:55.297 11:18:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:55.297 11:18:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:55.297 11:18:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:55.297 11:18:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:55.298 11:18:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:55.298 [2024-10-09 11:18:15.083441] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:55.298 11:18:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:55.298 11:18:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:37:55.298 11:18:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:37:55.298 11:18:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # config=() 00:37:55.298 11:18:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # local subsystem config 00:37:55.298 11:18:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:37:55.298 11:18:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:37:55.298 { 00:37:55.298 "params": { 00:37:55.298 "name": "Nvme$subsystem", 00:37:55.298 "trtype": "$TEST_TRANSPORT", 00:37:55.298 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:55.298 "adrfam": "ipv4", 00:37:55.298 "trsvcid": "$NVMF_PORT", 00:37:55.298 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:55.298 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:55.298 "hdgst": ${hdgst:-false}, 00:37:55.298 "ddgst": ${ddgst:-false} 00:37:55.298 }, 00:37:55.298 "method": "bdev_nvme_attach_controller" 00:37:55.298 } 00:37:55.298 EOF 00:37:55.298 )") 00:37:55.298 11:18:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # cat 00:37:55.298 11:18:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # jq . 00:37:55.298 11:18:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@583 -- # IFS=, 00:37:55.298 11:18:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:37:55.298 "params": { 00:37:55.298 "name": "Nvme1", 00:37:55.298 "trtype": "tcp", 00:37:55.298 "traddr": "10.0.0.2", 00:37:55.298 "adrfam": "ipv4", 00:37:55.298 "trsvcid": "4420", 00:37:55.298 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:55.298 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:55.298 "hdgst": false, 00:37:55.298 "ddgst": false 00:37:55.298 }, 00:37:55.298 "method": "bdev_nvme_attach_controller" 00:37:55.298 }' 00:37:55.298 [2024-10-09 11:18:15.138777] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:37:55.298 [2024-10-09 11:18:15.138825] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2125079 ] 00:37:55.298 [2024-10-09 11:18:15.268731] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
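At this point the target side is fully plumbed: one port of the NIC (cvl_0_0) lives in a private network namespace with 10.0.0.2, nvmf_tgt runs inside that namespace, and four RPC calls expose a 64 MB malloc bdev as subsystem nqn.2016-06.io.spdk:cnode1 listening on TCP port 4420; bdevperf then attaches to it using the bdev_nvme_attach_controller JSON shown above, fed over a process-substitution fd (/dev/fd/62). A condensed sketch of that bring-up, assuming an SPDK checkout at $SPDK_DIR and the interface names used in this run (the harness wraps these steps in nvmf_tcp_init, nvmfappstart, and rpc_cmd):

  # Isolate the target-side port in its own namespace and address both ends
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # admit NVMe/TCP traffic

  # Start the target inside the namespace; the RPC unix socket is still
  # visible from the root namespace, so configuration runs from outside
  ip netns exec cvl_0_0_ns_spdk $SPDK_DIR/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.5; done              # the harness uses waitforlisten here

  $SPDK_DIR/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  $SPDK_DIR/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  $SPDK_DIR/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420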
00:37:55.558 [2024-10-09 11:18:15.299394] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:55.558 [2024-10-09 11:18:15.317563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:55.818 Running I/O for 1 seconds... 00:37:56.757 8790.00 IOPS, 34.34 MiB/s 00:37:56.757 Latency(us) 00:37:56.757 [2024-10-09T09:18:16.759Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:56.757 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:37:56.757 Verification LBA range: start 0x0 length 0x4000 00:37:56.757 Nvme1n1 : 1.00 8879.18 34.68 0.00 0.00 14354.93 1300.10 13356.82 00:37:56.757 [2024-10-09T09:18:16.759Z] =================================================================================================================== 00:37:56.757 [2024-10-09T09:18:16.759Z] Total : 8879.18 34.68 0.00 0.00 14354.93 1300.10 13356.82 00:37:56.757 11:18:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2125351 00:37:56.757 11:18:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:37:56.757 11:18:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:37:56.757 11:18:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:37:56.757 11:18:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # config=() 00:37:56.757 11:18:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # local subsystem config 00:37:56.757 11:18:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:37:56.757 11:18:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:37:56.757 { 00:37:56.757 "params": { 00:37:56.757 "name": "Nvme$subsystem", 00:37:56.757 "trtype": "$TEST_TRANSPORT", 00:37:56.757 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:56.757 "adrfam": "ipv4", 00:37:56.757 "trsvcid": "$NVMF_PORT", 00:37:56.757 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:56.757 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:56.757 "hdgst": ${hdgst:-false}, 00:37:56.757 "ddgst": ${ddgst:-false} 00:37:56.757 }, 00:37:56.757 "method": "bdev_nvme_attach_controller" 00:37:56.757 } 00:37:56.757 EOF 00:37:56.757 )") 00:37:56.757 11:18:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # cat 00:37:56.757 11:18:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # jq . 00:37:56.757 11:18:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@583 -- # IFS=, 00:37:56.757 11:18:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:37:56.757 "params": { 00:37:56.757 "name": "Nvme1", 00:37:56.757 "trtype": "tcp", 00:37:56.757 "traddr": "10.0.0.2", 00:37:56.757 "adrfam": "ipv4", 00:37:56.757 "trsvcid": "4420", 00:37:56.757 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:56.757 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:56.757 "hdgst": false, 00:37:56.757 "ddgst": false 00:37:56.757 }, 00:37:56.757 "method": "bdev_nvme_attach_controller" 00:37:56.757 }' 00:37:57.017 [2024-10-09 11:18:16.788968] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 
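The second bdevperf invocation above is the failover exercise proper: a 15-second verify job started with -f so it keeps running when I/O begins to fail, its pid recorded for later cleanup. Three seconds into the run the script hard-kills the target and watches the host-side error path. Roughly, with the variable names the harness uses (nvmfpid holds the nvmf_tgt pid, gen_nvmf_target_json emits the attach-controller config seen above):

  # Long verify run; <(...) surfaces as a /dev/fd path like the /dev/fd/63 in the log
  $SPDK_DIR/build/examples/bdevperf --json <(gen_nvmf_target_json) \
      -q 128 -o 4096 -w verify -t 15 -f &
  bdevperfpid=$!

  sleep 3            # let I/O reach steady state
  kill -9 $nvmfpid   # hard-kill the target; in-flight commands will abort
  sleep 3            # observe the host's reset/reconnect attempts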
00:37:57.017 [2024-10-09 11:18:16.789024] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2125351 ] 00:37:57.017 [2024-10-09 11:18:16.918971] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:37:57.017 [2024-10-09 11:18:16.949365] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:57.017 [2024-10-09 11:18:16.966898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:57.277 Running I/O for 15 seconds... 00:37:59.156 10772.00 IOPS, 42.08 MiB/s [2024-10-09T09:18:20.103Z] 11171.00 IOPS, 43.64 MiB/s [2024-10-09T09:18:20.103Z] 11:18:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2124911 00:38:00.101 11:18:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:38:00.101 [2024-10-09 11:18:19.751146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:104256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:00.101 [2024-10-09 11:18:19.751187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:00.101 [2024-10-09 11:18:19.751207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:104264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:00.101 [2024-10-09 11:18:19.751218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:00.101 [2024-10-09 11:18:19.751229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:104272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:00.101 [2024-10-09 11:18:19.751237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:00.101 [2024-10-09 11:18:19.751250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:104280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:00.101 [2024-10-09 11:18:19.751258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:00.101 [2024-10-09 11:18:19.751270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:104288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:00.101 [2024-10-09 11:18:19.751281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:00.101 [2024-10-09 11:18:19.751294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:104296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:00.101 [2024-10-09 11:18:19.751304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:00.101 [2024-10-09 11:18:19.751316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:104304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:00.101 [2024-10-09 11:18:19.751328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:00.101 [2024-10-09 11:18:19.751340] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:104312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:00.101 [2024-10-09 11:18:19.751354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [... the same print_command/print_completion NOTICE pair repeats for every remaining outstanding command on qpair 1 (READs lba 104320-104792, WRITEs lba 104808-105272), each aborted with SQ DELETION after the target was killed ...] 00:38:00.104 [2024-10-09 11:18:19.753699] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x76efc0 is same with the state(6) to be set 00:38:00.104 [2024-10-09 11:18:19.753709] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:38:00.104 [2024-10-09 11:18:19.753715] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:38:00.104
[2024-10-09 11:18:19.753722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:104800 len:8 PRP1 0x0 PRP2 0x0 00:38:00.104 [2024-10-09 11:18:19.753730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:00.104 [2024-10-09 11:18:19.753767] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x76efc0 was disconnected and freed. reset controller. 00:38:00.104 [2024-10-09 11:18:19.753809] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:38:00.104 [2024-10-09 11:18:19.753820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:00.104 [2024-10-09 11:18:19.753829] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:38:00.104 [2024-10-09 11:18:19.753836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:00.104 [2024-10-09 11:18:19.753844] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:38:00.104 [2024-10-09 11:18:19.753851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:00.104 [2024-10-09 11:18:19.753860] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:38:00.104 [2024-10-09 11:18:19.753867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:00.104 [2024-10-09 11:18:19.753876] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:00.104 [2024-10-09 11:18:19.757400] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:00.104 [2024-10-09 11:18:19.757421] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:00.104 [2024-10-09 11:18:19.758195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.104 [2024-10-09 11:18:19.758215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:00.104 [2024-10-09 11:18:19.758224] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:00.104 [2024-10-09 11:18:19.758446] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:00.104 [2024-10-09 11:18:19.758673] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:00.104 [2024-10-09 11:18:19.758683] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:00.104 [2024-10-09 11:18:19.758693] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:00.104 [2024-10-09 11:18:19.762232] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
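Annotation: the completions above all carry "ABORTED - SQ DELETION (00/08)". In spdk_nvme_print_completion output that parenthesized pair reads as (status code type / status code); per the NVMe base specification, SCT 0x0 is Generic Command Status and SC 0x08 is Command Aborted due to SQ Deletion, i.e. the queued reads were aborted because their submission queue was torn down when the qpair was disconnected for the reset. A minimal standalone sketch of that decoding (illustrative only, not SPDK's implementation; the variable names are hypothetical):

#include <stdio.h>

int main(void)
{
    /* Values copied from the "(00/08)" completions logged above. */
    unsigned sct = 0x00;  /* status code type: generic command status */
    unsigned sc  = 0x08;  /* status code: command aborted, SQ deletion */

    printf("SCT %02x / SC %02x -> %s\n", sct, sc,
           (sct == 0x00 && sc == 0x08) ?
           "ABORTED - SQ DELETION" : "other status");
    return 0;
}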
00:38:00.104 [2024-10-09 11:18:19.771438] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:00.104 [2024-10-09 11:18:19.771991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.104 [2024-10-09 11:18:19.772031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:00.104 [2024-10-09 11:18:19.772044] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:00.104 [2024-10-09 11:18:19.772283] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:00.104 [2024-10-09 11:18:19.772517] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:00.104 [2024-10-09 11:18:19.772528] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:00.104 [2024-10-09 11:18:19.772536] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:00.104 [2024-10-09 11:18:19.776084] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:00.104 [2024-10-09 11:18:19.785288] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:00.104 [2024-10-09 11:18:19.785969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.104 [2024-10-09 11:18:19.786009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:00.104 [2024-10-09 11:18:19.786021] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:00.104 [2024-10-09 11:18:19.786260] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:00.104 [2024-10-09 11:18:19.786493] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:00.104 [2024-10-09 11:18:19.786505] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:00.104 [2024-10-09 11:18:19.786513] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:00.104 [2024-10-09 11:18:19.790068] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
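Annotation: every retry cycle from here on repeats the same five-step pattern: nvme_ctrlr_disconnect starts a reset, the TCP transport re-dials the target, connect() fails with errno 111 (ECONNREFUSED, i.e. nothing is listening at 10.0.0.2:4420 at that moment), the flush on the dead socket reports Bad file descriptor, and the reset completes as failed before the next attempt is scheduled. A self-contained sketch of just the socket step, using only the address and port taken from the log (everything else is illustrative, not SPDK's posix_sock_create()):

#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    /* Target address and port come from the log lines above. */
    struct sockaddr_in sa = { .sin_family = AF_INET,
                              .sin_port = htons(4420) };
    int fd = socket(AF_INET, SOCK_STREAM, 0);

    if (fd < 0)
        return 1;
    inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);
    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0)
        /* With no NVMe/TCP listener on the port this prints errno 111. */
        printf("connect() failed, errno = %d (%s)\n",
               errno, strerror(errno));
    close(fd);
    return 0;
}

Run against a reachable host with no listener on 4420, this prints the same "connect() failed, errno = 111" the posix.c lines report on each cycle.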
00:38:00.104 [2024-10-09 11:18:19.799063] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:00.104 [2024-10-09 11:18:19.799782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.104 [2024-10-09 11:18:19.799825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:00.104 [2024-10-09 11:18:19.799837] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:00.104 [2024-10-09 11:18:19.800076] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:00.104 [2024-10-09 11:18:19.800301] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:00.104 [2024-10-09 11:18:19.800311] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:00.104 [2024-10-09 11:18:19.800320] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:00.104 [2024-10-09 11:18:19.803892] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:00.104 [2024-10-09 11:18:19.812894] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:00.104 [2024-10-09 11:18:19.813564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.104 [2024-10-09 11:18:19.813604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:00.104 [2024-10-09 11:18:19.813618] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:00.104 [2024-10-09 11:18:19.813860] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:00.104 [2024-10-09 11:18:19.814084] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:00.104 [2024-10-09 11:18:19.814094] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:00.104 [2024-10-09 11:18:19.814102] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:00.104 [2024-10-09 11:18:19.817668] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:00.104 [2024-10-09 11:18:19.826716] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:00.104 [2024-10-09 11:18:19.827327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.104 [2024-10-09 11:18:19.827366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:00.104 [2024-10-09 11:18:19.827379] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:00.104 [2024-10-09 11:18:19.827627] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:00.104 [2024-10-09 11:18:19.827853] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:00.104 [2024-10-09 11:18:19.827863] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:00.104 [2024-10-09 11:18:19.827871] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:00.104 [2024-10-09 11:18:19.831422] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:00.104 [2024-10-09 11:18:19.840630] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:00.104 [2024-10-09 11:18:19.841249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.104 [2024-10-09 11:18:19.841289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:00.104 [2024-10-09 11:18:19.841301] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:00.104 [2024-10-09 11:18:19.841551] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:00.104 [2024-10-09 11:18:19.841782] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:00.104 [2024-10-09 11:18:19.841792] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:00.104 [2024-10-09 11:18:19.841800] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:00.104 [2024-10-09 11:18:19.845355] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:00.104 [2024-10-09 11:18:19.854454] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:00.104 [2024-10-09 11:18:19.855074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.104 [2024-10-09 11:18:19.855114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:00.104 [2024-10-09 11:18:19.855126] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:00.104 [2024-10-09 11:18:19.855365] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:00.104 [2024-10-09 11:18:19.855599] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:00.104 [2024-10-09 11:18:19.855611] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:00.105 [2024-10-09 11:18:19.855619] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:00.105 [2024-10-09 11:18:19.859173] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:00.105 [2024-10-09 11:18:19.868386] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:00.105 [2024-10-09 11:18:19.869064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.105 [2024-10-09 11:18:19.869103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:00.105 [2024-10-09 11:18:19.869115] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:00.105 [2024-10-09 11:18:19.869353] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:00.105 [2024-10-09 11:18:19.869588] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:00.105 [2024-10-09 11:18:19.869599] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:00.105 [2024-10-09 11:18:19.869608] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:00.105 [2024-10-09 11:18:19.873162] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:00.105 [2024-10-09 11:18:19.882167] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:00.105 [2024-10-09 11:18:19.882839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.105 [2024-10-09 11:18:19.882879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:00.105 [2024-10-09 11:18:19.882890] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:00.105 [2024-10-09 11:18:19.883130] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:00.105 [2024-10-09 11:18:19.883355] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:00.105 [2024-10-09 11:18:19.883365] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:00.105 [2024-10-09 11:18:19.883373] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:00.105 [2024-10-09 11:18:19.886931] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:00.105 [2024-10-09 11:18:19.896134] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:00.105 [2024-10-09 11:18:19.896770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.105 [2024-10-09 11:18:19.896809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:00.105 [2024-10-09 11:18:19.896821] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:00.105 [2024-10-09 11:18:19.897060] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:00.105 [2024-10-09 11:18:19.897284] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:00.105 [2024-10-09 11:18:19.897294] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:00.105 [2024-10-09 11:18:19.897302] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:00.105 [2024-10-09 11:18:19.900864] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:00.105 [2024-10-09 11:18:19.910082] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:00.105 [2024-10-09 11:18:19.910802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.105 [2024-10-09 11:18:19.910842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:00.105 [2024-10-09 11:18:19.910854] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:00.105 [2024-10-09 11:18:19.911093] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:00.105 [2024-10-09 11:18:19.911318] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:00.105 [2024-10-09 11:18:19.911328] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:00.105 [2024-10-09 11:18:19.911336] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:00.105 [2024-10-09 11:18:19.914902] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:00.105 [2024-10-09 11:18:19.923921] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:00.105 [2024-10-09 11:18:19.924492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.105 [2024-10-09 11:18:19.924513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:00.105 [2024-10-09 11:18:19.924521] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:00.105 [2024-10-09 11:18:19.924742] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:00.105 [2024-10-09 11:18:19.924963] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:00.105 [2024-10-09 11:18:19.924972] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:00.105 [2024-10-09 11:18:19.924980] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:00.105 [2024-10-09 11:18:19.928525] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:00.105 [2024-10-09 11:18:19.937729] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:00.105 [2024-10-09 11:18:19.938267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.105 [2024-10-09 11:18:19.938306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:00.105 [2024-10-09 11:18:19.938328] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:00.105 [2024-10-09 11:18:19.938577] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:00.105 [2024-10-09 11:18:19.938803] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:00.105 [2024-10-09 11:18:19.938813] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:00.105 [2024-10-09 11:18:19.938821] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:00.105 [2024-10-09 11:18:19.942374] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:00.105 [2024-10-09 11:18:19.951578] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:00.105 [2024-10-09 11:18:19.952230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.105 [2024-10-09 11:18:19.952269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:00.105 [2024-10-09 11:18:19.952281] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:00.105 [2024-10-09 11:18:19.952529] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:00.105 [2024-10-09 11:18:19.952755] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:00.105 [2024-10-09 11:18:19.952766] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:00.105 [2024-10-09 11:18:19.952774] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:00.105 [2024-10-09 11:18:19.956326] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:00.105 [2024-10-09 11:18:19.965551] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:00.105 [2024-10-09 11:18:19.966205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.105 [2024-10-09 11:18:19.966245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:00.105 [2024-10-09 11:18:19.966257] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:00.105 [2024-10-09 11:18:19.966505] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:00.105 [2024-10-09 11:18:19.966731] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:00.105 [2024-10-09 11:18:19.966741] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:00.105 [2024-10-09 11:18:19.966749] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:00.105 [2024-10-09 11:18:19.970303] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:00.105 [2024-10-09 11:18:19.979515] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:00.105 [2024-10-09 11:18:19.980142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.105 [2024-10-09 11:18:19.980181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:00.105 [2024-10-09 11:18:19.980193] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:00.105 [2024-10-09 11:18:19.980432] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:00.105 [2024-10-09 11:18:19.980667] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:00.106 [2024-10-09 11:18:19.980683] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:00.106 [2024-10-09 11:18:19.980691] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:00.106 [2024-10-09 11:18:19.984245] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:00.106 [2024-10-09 11:18:19.993456] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:00.106 [2024-10-09 11:18:19.994107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.106 [2024-10-09 11:18:19.994145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:00.106 [2024-10-09 11:18:19.994157] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:00.106 [2024-10-09 11:18:19.994396] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:00.106 [2024-10-09 11:18:19.994629] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:00.106 [2024-10-09 11:18:19.994640] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:00.106 [2024-10-09 11:18:19.994648] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:00.106 [2024-10-09 11:18:19.998203] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:00.106 [2024-10-09 11:18:20.007352] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:00.106 [2024-10-09 11:18:20.008056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.106 [2024-10-09 11:18:20.008095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:00.106 [2024-10-09 11:18:20.008108] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:00.106 [2024-10-09 11:18:20.008347] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:00.106 [2024-10-09 11:18:20.008578] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:00.106 [2024-10-09 11:18:20.008589] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:00.106 [2024-10-09 11:18:20.008598] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:00.106 [2024-10-09 11:18:20.012153] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:00.106 [2024-10-09 11:18:20.021175] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:00.106 [2024-10-09 11:18:20.021758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.106 [2024-10-09 11:18:20.021779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:00.106 [2024-10-09 11:18:20.021788] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:00.106 [2024-10-09 11:18:20.022009] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:00.106 [2024-10-09 11:18:20.022230] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:00.106 [2024-10-09 11:18:20.022239] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:00.106 [2024-10-09 11:18:20.022247] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:00.106 [2024-10-09 11:18:20.025801] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:00.106 [2024-10-09 11:18:20.035032] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:00.106 [2024-10-09 11:18:20.035599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.106 [2024-10-09 11:18:20.035639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:00.106 [2024-10-09 11:18:20.035652] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:00.106 [2024-10-09 11:18:20.035893] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:00.106 [2024-10-09 11:18:20.036118] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:00.106 [2024-10-09 11:18:20.036129] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:00.106 [2024-10-09 11:18:20.036137] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:00.106 [2024-10-09 11:18:20.039701] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:00.106 [2024-10-09 11:18:20.048915] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:00.106 [2024-10-09 11:18:20.049600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.106 [2024-10-09 11:18:20.049639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:00.106 [2024-10-09 11:18:20.049651] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:00.106 [2024-10-09 11:18:20.049889] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:00.106 [2024-10-09 11:18:20.050113] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:00.106 [2024-10-09 11:18:20.050124] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:00.106 [2024-10-09 11:18:20.050132] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:00.106 [2024-10-09 11:18:20.053689] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:00.106 [2024-10-09 11:18:20.062893] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:00.106 [2024-10-09 11:18:20.063591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.106 [2024-10-09 11:18:20.063630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:00.106 [2024-10-09 11:18:20.063641] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:00.106 [2024-10-09 11:18:20.063880] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:00.106 [2024-10-09 11:18:20.064104] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:00.106 [2024-10-09 11:18:20.064114] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:00.106 [2024-10-09 11:18:20.064122] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:00.106 [2024-10-09 11:18:20.067687] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:00.106 [2024-10-09 11:18:20.076687] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:00.106 [2024-10-09 11:18:20.077305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.106 [2024-10-09 11:18:20.077344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:00.106 [2024-10-09 11:18:20.077355] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:00.106 [2024-10-09 11:18:20.077609] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:00.106 [2024-10-09 11:18:20.077834] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:00.106 [2024-10-09 11:18:20.077844] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:00.106 [2024-10-09 11:18:20.077852] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:00.106 [2024-10-09 11:18:20.081398] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:00.106 [2024-10-09 11:18:20.090612] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:00.106 [2024-10-09 11:18:20.091286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.106 [2024-10-09 11:18:20.091324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:00.106 [2024-10-09 11:18:20.091335] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:00.106 [2024-10-09 11:18:20.091585] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:00.106 [2024-10-09 11:18:20.091810] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:00.106 [2024-10-09 11:18:20.091820] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:00.106 [2024-10-09 11:18:20.091828] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:00.106 [2024-10-09 11:18:20.095374] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:00.368 [2024-10-09 11:18:20.104599] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:00.368 [2024-10-09 11:18:20.105272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.368 [2024-10-09 11:18:20.105311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:00.368 [2024-10-09 11:18:20.105323] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:00.368 [2024-10-09 11:18:20.105569] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:00.368 [2024-10-09 11:18:20.105795] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:00.368 [2024-10-09 11:18:20.105806] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:00.369 [2024-10-09 11:18:20.105814] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:00.369 [2024-10-09 11:18:20.109374] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:00.369 9805.33 IOPS, 38.30 MiB/s [2024-10-09T09:18:20.371Z] [2024-10-09 11:18:20.118581] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:00.369 [2024-10-09 11:18:20.119133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.369 [2024-10-09 11:18:20.119153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:00.369 [2024-10-09 11:18:20.119161] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:00.369 [2024-10-09 11:18:20.119381] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:00.369 [2024-10-09 11:18:20.119616] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:00.369 [2024-10-09 11:18:20.119626] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:00.369 [2024-10-09 11:18:20.119638] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:00.369 [2024-10-09 11:18:20.123187] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
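Annotation: the interleaved "9805.33 IOPS, 38.30 MiB/s" ticker is a periodic rate sample from the test's I/O generator, and it is self-consistent with the 8-block reads logged earlier: assuming 512-byte blocks, 8 blocks = 4096 bytes per I/O, so 9805.33 IOPS × 4096 B ≈ 40.16 MB/s = 38.30 MiB/s.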
00:38:00.369 [2024-10-09 11:18:20.132391] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:00.369 [2024-10-09 11:18:20.132961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.369 [2024-10-09 11:18:20.132979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:00.369 [2024-10-09 11:18:20.132988] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:00.369 [2024-10-09 11:18:20.133206] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:00.369 [2024-10-09 11:18:20.133426] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:00.369 [2024-10-09 11:18:20.133436] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:00.369 [2024-10-09 11:18:20.133443] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:00.369 [2024-10-09 11:18:20.136991] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:00.369 [2024-10-09 11:18:20.146193] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:00.369 [2024-10-09 11:18:20.146743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.369 [2024-10-09 11:18:20.146760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:00.369 [2024-10-09 11:18:20.146768] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:00.369 [2024-10-09 11:18:20.146987] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:00.369 [2024-10-09 11:18:20.147206] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:00.369 [2024-10-09 11:18:20.147215] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:00.369 [2024-10-09 11:18:20.147222] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:00.369 [2024-10-09 11:18:20.150765] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:00.369 [2024-10-09 11:18:20.159958] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:00.369 [2024-10-09 11:18:20.160595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.369 [2024-10-09 11:18:20.160635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:00.369 [2024-10-09 11:18:20.160647] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:00.369 [2024-10-09 11:18:20.160888] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:00.369 [2024-10-09 11:18:20.161112] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:00.369 [2024-10-09 11:18:20.161122] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:00.369 [2024-10-09 11:18:20.161130] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:00.369 [2024-10-09 11:18:20.164685] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:00.369 [2024-10-09 11:18:20.173918] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:00.369 [2024-10-09 11:18:20.174618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.369 [2024-10-09 11:18:20.174658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:00.369 [2024-10-09 11:18:20.174669] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:00.369 [2024-10-09 11:18:20.174908] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:00.369 [2024-10-09 11:18:20.175132] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:00.369 [2024-10-09 11:18:20.175144] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:00.369 [2024-10-09 11:18:20.175152] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:00.369 [2024-10-09 11:18:20.178705] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:00.369 [2024-10-09 11:18:20.187700] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:00.369 [2024-10-09 11:18:20.188240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.369 [2024-10-09 11:18:20.188260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:00.369 [2024-10-09 11:18:20.188268] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:00.369 [2024-10-09 11:18:20.188496] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:00.369 [2024-10-09 11:18:20.188716] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:00.369 [2024-10-09 11:18:20.188724] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:00.369 [2024-10-09 11:18:20.188732] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:00.369 [2024-10-09 11:18:20.192267] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:00.369 [2024-10-09 11:18:20.201476] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:00.369 [2024-10-09 11:18:20.202138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.369 [2024-10-09 11:18:20.202176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:00.369 [2024-10-09 11:18:20.202187] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:00.369 [2024-10-09 11:18:20.202426] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:00.369 [2024-10-09 11:18:20.202658] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:00.369 [2024-10-09 11:18:20.202669] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:00.369 [2024-10-09 11:18:20.202677] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:00.369 [2024-10-09 11:18:20.206243] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:00.369 [2024-10-09 11:18:20.215253] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:00.369 [2024-10-09 11:18:20.215918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.369 [2024-10-09 11:18:20.215957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:00.369 [2024-10-09 11:18:20.215968] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:00.369 [2024-10-09 11:18:20.216207] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:00.369 [2024-10-09 11:18:20.216435] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:00.369 [2024-10-09 11:18:20.216445] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:00.369 [2024-10-09 11:18:20.216453] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:00.369 [2024-10-09 11:18:20.220019] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:00.369 [2024-10-09 11:18:20.229230] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:00.369 [2024-10-09 11:18:20.229911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.369 [2024-10-09 11:18:20.229950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:00.369 [2024-10-09 11:18:20.229962] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:00.369 [2024-10-09 11:18:20.230201] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:00.369 [2024-10-09 11:18:20.230425] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:00.369 [2024-10-09 11:18:20.230435] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:00.369 [2024-10-09 11:18:20.230443] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:00.369 [2024-10-09 11:18:20.234007] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:00.369 [2024-10-09 11:18:20.243041] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:00.369 [2024-10-09 11:18:20.243592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.369 [2024-10-09 11:18:20.243631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:00.369 [2024-10-09 11:18:20.243644] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:00.369 [2024-10-09 11:18:20.243886] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:00.369 [2024-10-09 11:18:20.244110] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:00.369 [2024-10-09 11:18:20.244120] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:00.369 [2024-10-09 11:18:20.244128] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:00.369 [2024-10-09 11:18:20.247687] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:00.369 [2024-10-09 11:18:20.256893] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:00.369 [2024-10-09 11:18:20.257583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.369 [2024-10-09 11:18:20.257621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:00.370 [2024-10-09 11:18:20.257634] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:00.370 [2024-10-09 11:18:20.257875] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:00.370 [2024-10-09 11:18:20.258099] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:00.370 [2024-10-09 11:18:20.258110] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:00.370 [2024-10-09 11:18:20.258119] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:00.370 [2024-10-09 11:18:20.261821] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:00.370 [2024-10-09 11:18:20.270847] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:00.370 [2024-10-09 11:18:20.271427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.370 [2024-10-09 11:18:20.271446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:00.370 [2024-10-09 11:18:20.271455] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:00.370 [2024-10-09 11:18:20.271679] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:00.370 [2024-10-09 11:18:20.271900] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:00.370 [2024-10-09 11:18:20.271909] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:00.370 [2024-10-09 11:18:20.271916] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:00.370 [2024-10-09 11:18:20.275460] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:00.370 [2024-10-09 11:18:20.284664] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:00.370 [2024-10-09 11:18:20.285321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.370 [2024-10-09 11:18:20.285360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:00.370 [2024-10-09 11:18:20.285371] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:00.370 [2024-10-09 11:18:20.285618] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:00.370 [2024-10-09 11:18:20.285843] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:00.370 [2024-10-09 11:18:20.285852] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:00.370 [2024-10-09 11:18:20.285860] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:00.370 [2024-10-09 11:18:20.289411] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:00.370 [2024-10-09 11:18:20.298629] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:00.370 [2024-10-09 11:18:20.299167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.370 [2024-10-09 11:18:20.299187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:00.370 [2024-10-09 11:18:20.299195] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:00.370 [2024-10-09 11:18:20.299415] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:00.370 [2024-10-09 11:18:20.299641] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:00.370 [2024-10-09 11:18:20.299650] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:00.370 [2024-10-09 11:18:20.299657] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:00.370 [2024-10-09 11:18:20.303200] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:00.370 [2024-10-09 11:18:20.312421] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:00.370 [2024-10-09 11:18:20.312841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.370 [2024-10-09 11:18:20.312858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:00.370 [2024-10-09 11:18:20.312871] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:00.370 [2024-10-09 11:18:20.313091] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:00.370 [2024-10-09 11:18:20.313310] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:00.370 [2024-10-09 11:18:20.313319] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:00.370 [2024-10-09 11:18:20.313326] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:00.370 [2024-10-09 11:18:20.316875] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:00.370 [2024-10-09 11:18:20.326302] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:00.370 [2024-10-09 11:18:20.326820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.370 [2024-10-09 11:18:20.326859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:00.370 [2024-10-09 11:18:20.326870] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:00.370 [2024-10-09 11:18:20.327109] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:00.370 [2024-10-09 11:18:20.327333] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:00.370 [2024-10-09 11:18:20.327342] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:00.370 [2024-10-09 11:18:20.327350] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:00.370 [2024-10-09 11:18:20.330912] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:00.370 [2024-10-09 11:18:20.340125] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:00.370 [2024-10-09 11:18:20.340592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.370 [2024-10-09 11:18:20.340631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:00.370 [2024-10-09 11:18:20.340644] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:00.370 [2024-10-09 11:18:20.340884] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:00.370 [2024-10-09 11:18:20.341108] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:00.370 [2024-10-09 11:18:20.341118] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:00.370 [2024-10-09 11:18:20.341126] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:00.370 [2024-10-09 11:18:20.344686] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:00.370 [2024-10-09 11:18:20.354096] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:00.370 [2024-10-09 11:18:20.354795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.370 [2024-10-09 11:18:20.354834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:00.370 [2024-10-09 11:18:20.354846] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:00.370 [2024-10-09 11:18:20.355084] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:00.370 [2024-10-09 11:18:20.355308] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:00.370 [2024-10-09 11:18:20.355322] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:00.370 [2024-10-09 11:18:20.355330] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:00.370 [2024-10-09 11:18:20.358888] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:00.370 [2024-10-09 11:18:20.367892] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:00.370 [2024-10-09 11:18:20.368434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.370 [2024-10-09 11:18:20.368453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:00.370 [2024-10-09 11:18:20.368462] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:00.632 [2024-10-09 11:18:20.368688] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:00.632 [2024-10-09 11:18:20.368909] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:00.632 [2024-10-09 11:18:20.368918] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:00.632 [2024-10-09 11:18:20.368926] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:00.632 [2024-10-09 11:18:20.372474] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:00.632 [2024-10-09 11:18:20.381683] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:00.632 [2024-10-09 11:18:20.382350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.632 [2024-10-09 11:18:20.382388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:00.632 [2024-10-09 11:18:20.382399] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:00.632 [2024-10-09 11:18:20.382645] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:00.632 [2024-10-09 11:18:20.382870] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:00.632 [2024-10-09 11:18:20.382880] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:00.632 [2024-10-09 11:18:20.382889] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:00.632 [2024-10-09 11:18:20.386432] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:00.632 [2024-10-09 11:18:20.395636] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:00.632 [2024-10-09 11:18:20.396200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.632 [2024-10-09 11:18:20.396219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:00.632 [2024-10-09 11:18:20.396228] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:00.632 [2024-10-09 11:18:20.396447] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:00.632 [2024-10-09 11:18:20.396674] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:00.632 [2024-10-09 11:18:20.396684] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:00.632 [2024-10-09 11:18:20.396691] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:00.632 [2024-10-09 11:18:20.400232] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:00.632 [2024-10-09 11:18:20.409446] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:00.632 [2024-10-09 11:18:20.409999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.632 [2024-10-09 11:18:20.410039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:00.632 [2024-10-09 11:18:20.410051] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:00.632 [2024-10-09 11:18:20.410292] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:00.632 [2024-10-09 11:18:20.410524] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:00.632 [2024-10-09 11:18:20.410535] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:00.632 [2024-10-09 11:18:20.410542] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:00.632 [2024-10-09 11:18:20.414095] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:00.632 [2024-10-09 11:18:20.423320] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:00.632 [2024-10-09 11:18:20.423977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.632 [2024-10-09 11:18:20.424016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:00.632 [2024-10-09 11:18:20.424027] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:00.632 [2024-10-09 11:18:20.424266] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:00.632 [2024-10-09 11:18:20.424498] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:00.632 [2024-10-09 11:18:20.424509] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:00.632 [2024-10-09 11:18:20.424517] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:00.632 [2024-10-09 11:18:20.428066] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:00.633 [2024-10-09 11:18:20.437268] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:00.633 [2024-10-09 11:18:20.437836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.633 [2024-10-09 11:18:20.437856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:00.633 [2024-10-09 11:18:20.437864] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:00.633 [2024-10-09 11:18:20.438085] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:00.633 [2024-10-09 11:18:20.438304] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:00.633 [2024-10-09 11:18:20.438314] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:00.633 [2024-10-09 11:18:20.438321] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:00.633 [2024-10-09 11:18:20.441868] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:00.633 [2024-10-09 11:18:20.451102] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:00.633 [2024-10-09 11:18:20.451606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.633 [2024-10-09 11:18:20.451645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:00.633 [2024-10-09 11:18:20.451657] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:00.633 [2024-10-09 11:18:20.451904] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:00.633 [2024-10-09 11:18:20.452128] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:00.633 [2024-10-09 11:18:20.452138] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:00.633 [2024-10-09 11:18:20.452146] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:00.633 [2024-10-09 11:18:20.455703] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:00.633 [2024-10-09 11:18:20.464916] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:00.633 [2024-10-09 11:18:20.465507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.633 [2024-10-09 11:18:20.465527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:00.633 [2024-10-09 11:18:20.465536] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:00.633 [2024-10-09 11:18:20.465756] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:00.633 [2024-10-09 11:18:20.465981] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:00.633 [2024-10-09 11:18:20.465992] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:00.633 [2024-10-09 11:18:20.465999] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:00.633 [2024-10-09 11:18:20.469546] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:00.633 [2024-10-09 11:18:20.478742] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:00.633 [2024-10-09 11:18:20.479268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.633 [2024-10-09 11:18:20.479285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:00.633 [2024-10-09 11:18:20.479293] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:00.633 [2024-10-09 11:18:20.479518] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:00.633 [2024-10-09 11:18:20.479738] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:00.633 [2024-10-09 11:18:20.479747] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:00.633 [2024-10-09 11:18:20.479754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:00.633 [2024-10-09 11:18:20.483290] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:00.633 [2024-10-09 11:18:20.492698] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:00.633 [2024-10-09 11:18:20.493342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.633 [2024-10-09 11:18:20.493381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:00.633 [2024-10-09 11:18:20.493392] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:00.633 [2024-10-09 11:18:20.493639] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:00.633 [2024-10-09 11:18:20.493864] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:00.633 [2024-10-09 11:18:20.493874] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:00.633 [2024-10-09 11:18:20.493886] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:00.633 [2024-10-09 11:18:20.497437] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:00.633 [2024-10-09 11:18:20.506654] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:00.633 [2024-10-09 11:18:20.507331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.633 [2024-10-09 11:18:20.507370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:00.633 [2024-10-09 11:18:20.507381] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:00.633 [2024-10-09 11:18:20.507628] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:00.633 [2024-10-09 11:18:20.507853] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:00.633 [2024-10-09 11:18:20.507863] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:00.633 [2024-10-09 11:18:20.507870] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:00.633 [2024-10-09 11:18:20.511426] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:00.633 [2024-10-09 11:18:20.520460] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:00.633 [2024-10-09 11:18:20.520932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.633 [2024-10-09 11:18:20.520952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:00.633 [2024-10-09 11:18:20.520961] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:00.633 [2024-10-09 11:18:20.521180] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:00.633 [2024-10-09 11:18:20.521400] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:00.633 [2024-10-09 11:18:20.521409] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:00.633 [2024-10-09 11:18:20.521416] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:00.633 [2024-10-09 11:18:20.524974] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:00.633 [2024-10-09 11:18:20.534402] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:00.633 [2024-10-09 11:18:20.535072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.633 [2024-10-09 11:18:20.535111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:00.633 [2024-10-09 11:18:20.535122] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:00.633 [2024-10-09 11:18:20.535361] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:00.633 [2024-10-09 11:18:20.535595] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:00.633 [2024-10-09 11:18:20.535606] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:00.633 [2024-10-09 11:18:20.535614] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:00.633 [2024-10-09 11:18:20.539176] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:00.633 [2024-10-09 11:18:20.548183] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:00.633 [2024-10-09 11:18:20.548706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.633 [2024-10-09 11:18:20.548726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:00.633 [2024-10-09 11:18:20.548735] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:00.633 [2024-10-09 11:18:20.548955] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:00.633 [2024-10-09 11:18:20.549174] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:00.633 [2024-10-09 11:18:20.549183] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:00.633 [2024-10-09 11:18:20.549190] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:00.633 [2024-10-09 11:18:20.552738] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:00.633 [2024-10-09 11:18:20.562157] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:00.633 [2024-10-09 11:18:20.562901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.633 [2024-10-09 11:18:20.562940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:00.633 [2024-10-09 11:18:20.562952] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:00.633 [2024-10-09 11:18:20.563192] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:00.633 [2024-10-09 11:18:20.563416] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:00.633 [2024-10-09 11:18:20.563425] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:00.633 [2024-10-09 11:18:20.563433] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:00.633 [2024-10-09 11:18:20.566993] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:00.633 [2024-10-09 11:18:20.575990] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:00.633 [2024-10-09 11:18:20.576606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.633 [2024-10-09 11:18:20.576645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:00.633 [2024-10-09 11:18:20.576658] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:00.633 [2024-10-09 11:18:20.576901] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:00.633 [2024-10-09 11:18:20.577125] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:00.633 [2024-10-09 11:18:20.577135] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:00.633 [2024-10-09 11:18:20.577143] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:00.634 [2024-10-09 11:18:20.580705] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:00.634 [2024-10-09 11:18:20.589929] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:00.634 [2024-10-09 11:18:20.590560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.634 [2024-10-09 11:18:20.590598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:00.634 [2024-10-09 11:18:20.590609] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:00.634 [2024-10-09 11:18:20.590849] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:00.634 [2024-10-09 11:18:20.591077] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:00.634 [2024-10-09 11:18:20.591087] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:00.634 [2024-10-09 11:18:20.591095] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:00.634 [2024-10-09 11:18:20.594652] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:00.634 [2024-10-09 11:18:20.603860] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:00.634 [2024-10-09 11:18:20.604434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.634 [2024-10-09 11:18:20.604454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:00.634 [2024-10-09 11:18:20.604462] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:00.634 [2024-10-09 11:18:20.604689] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:00.634 [2024-10-09 11:18:20.604909] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:00.634 [2024-10-09 11:18:20.604918] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:00.634 [2024-10-09 11:18:20.604925] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:00.634 [2024-10-09 11:18:20.608476] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:00.634 [2024-10-09 11:18:20.617686] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:00.634 [2024-10-09 11:18:20.618255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.634 [2024-10-09 11:18:20.618273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:00.634 [2024-10-09 11:18:20.618281] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:00.634 [2024-10-09 11:18:20.618506] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:00.634 [2024-10-09 11:18:20.618727] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:00.634 [2024-10-09 11:18:20.618736] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:00.634 [2024-10-09 11:18:20.618743] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:00.634 [2024-10-09 11:18:20.622299] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:00.634 [2024-10-09 11:18:20.631506] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:00.634 [2024-10-09 11:18:20.631956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.634 [2024-10-09 11:18:20.631974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:00.634 [2024-10-09 11:18:20.631982] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:00.634 [2024-10-09 11:18:20.632201] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:00.634 [2024-10-09 11:18:20.632421] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:00.634 [2024-10-09 11:18:20.632430] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:00.634 [2024-10-09 11:18:20.632438] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:00.896 [2024-10-09 11:18:20.635992] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:00.896 [2024-10-09 11:18:20.645407] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:00.896 [2024-10-09 11:18:20.645969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.896 [2024-10-09 11:18:20.645986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:00.896 [2024-10-09 11:18:20.645994] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:00.896 [2024-10-09 11:18:20.646213] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:00.896 [2024-10-09 11:18:20.646432] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:00.896 [2024-10-09 11:18:20.646441] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:00.896 [2024-10-09 11:18:20.646448] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:00.896 [2024-10-09 11:18:20.649996] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:00.896 [2024-10-09 11:18:20.659220] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:00.896 [2024-10-09 11:18:20.659803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.896 [2024-10-09 11:18:20.659819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:00.896 [2024-10-09 11:18:20.659828] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:00.896 [2024-10-09 11:18:20.660047] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:00.896 [2024-10-09 11:18:20.660266] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:00.896 [2024-10-09 11:18:20.660276] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:00.896 [2024-10-09 11:18:20.660283] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:00.896 [2024-10-09 11:18:20.663833] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:00.896 [2024-10-09 11:18:20.673061] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:00.896 [2024-10-09 11:18:20.673595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.896 [2024-10-09 11:18:20.673634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:00.896 [2024-10-09 11:18:20.673648] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:00.896 [2024-10-09 11:18:20.673891] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:00.896 [2024-10-09 11:18:20.674114] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:00.896 [2024-10-09 11:18:20.674125] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:00.896 [2024-10-09 11:18:20.674132] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:00.896 [2024-10-09 11:18:20.677699] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:00.896 [2024-10-09 11:18:20.686913] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:00.896 [2024-10-09 11:18:20.687572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.896 [2024-10-09 11:18:20.687612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:00.896 [2024-10-09 11:18:20.687629] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:00.896 [2024-10-09 11:18:20.687870] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:00.896 [2024-10-09 11:18:20.688094] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:00.896 [2024-10-09 11:18:20.688104] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:00.896 [2024-10-09 11:18:20.688112] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:00.896 [2024-10-09 11:18:20.691671] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:00.896 [2024-10-09 11:18:20.700889] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:00.896 [2024-10-09 11:18:20.701556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.896 [2024-10-09 11:18:20.701596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:00.896 [2024-10-09 11:18:20.701608] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:00.896 [2024-10-09 11:18:20.701848] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:00.896 [2024-10-09 11:18:20.702072] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:00.896 [2024-10-09 11:18:20.702082] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:00.896 [2024-10-09 11:18:20.702090] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:00.896 [2024-10-09 11:18:20.705662] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:00.896 [2024-10-09 11:18:20.714681] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:00.896 [2024-10-09 11:18:20.715260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.896 [2024-10-09 11:18:20.715280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:00.896 [2024-10-09 11:18:20.715289] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:00.896 [2024-10-09 11:18:20.715516] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:00.896 [2024-10-09 11:18:20.715737] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:00.896 [2024-10-09 11:18:20.715746] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:00.896 [2024-10-09 11:18:20.715754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:00.896 [2024-10-09 11:18:20.719306] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:00.896 [2024-10-09 11:18:20.728551] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:00.896 [2024-10-09 11:18:20.728994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.896 [2024-10-09 11:18:20.729012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:00.896 [2024-10-09 11:18:20.729022] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:00.896 [2024-10-09 11:18:20.729243] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:00.896 [2024-10-09 11:18:20.729462] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:00.896 [2024-10-09 11:18:20.729484] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:00.896 [2024-10-09 11:18:20.729491] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:00.896 [2024-10-09 11:18:20.733040] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:00.897 [2024-10-09 11:18:20.742470] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:00.897 [2024-10-09 11:18:20.743050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.897 [2024-10-09 11:18:20.743066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:00.897 [2024-10-09 11:18:20.743074] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:00.897 [2024-10-09 11:18:20.743294] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:00.897 [2024-10-09 11:18:20.743521] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:00.897 [2024-10-09 11:18:20.743530] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:00.897 [2024-10-09 11:18:20.743537] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:00.897 [2024-10-09 11:18:20.747092] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:00.897 [2024-10-09 11:18:20.756304] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:00.897 [2024-10-09 11:18:20.756963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.897 [2024-10-09 11:18:20.757003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:00.897 [2024-10-09 11:18:20.757015] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:00.897 [2024-10-09 11:18:20.757255] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:00.897 [2024-10-09 11:18:20.757490] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:00.897 [2024-10-09 11:18:20.757502] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:00.897 [2024-10-09 11:18:20.757510] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:00.897 [2024-10-09 11:18:20.761062] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:00.897 [2024-10-09 11:18:20.770288] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:00.897 [2024-10-09 11:18:20.770999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.897 [2024-10-09 11:18:20.771039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:00.897 [2024-10-09 11:18:20.771051] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:00.897 [2024-10-09 11:18:20.771292] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:00.897 [2024-10-09 11:18:20.771526] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:00.897 [2024-10-09 11:18:20.771537] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:00.897 [2024-10-09 11:18:20.771544] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:00.897 [2024-10-09 11:18:20.775099] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:00.897 [2024-10-09 11:18:20.784115] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:00.897 [2024-10-09 11:18:20.784590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.897 [2024-10-09 11:18:20.784610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:00.897 [2024-10-09 11:18:20.784619] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:00.897 [2024-10-09 11:18:20.784840] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:00.897 [2024-10-09 11:18:20.785060] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:00.897 [2024-10-09 11:18:20.785069] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:00.897 [2024-10-09 11:18:20.785076] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:00.897 [2024-10-09 11:18:20.788627] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:00.897 [2024-10-09 11:18:20.798051] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:00.897 [2024-10-09 11:18:20.798611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.897 [2024-10-09 11:18:20.798630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:00.897 [2024-10-09 11:18:20.798638] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:00.897 [2024-10-09 11:18:20.798857] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:00.897 [2024-10-09 11:18:20.799076] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:00.897 [2024-10-09 11:18:20.799085] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:00.897 [2024-10-09 11:18:20.799093] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:00.897 [2024-10-09 11:18:20.802642] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:00.897 [2024-10-09 11:18:20.811871] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:00.897 [2024-10-09 11:18:20.812436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.897 [2024-10-09 11:18:20.812453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:00.897 [2024-10-09 11:18:20.812461] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:00.897 [2024-10-09 11:18:20.812687] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:00.897 [2024-10-09 11:18:20.812907] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:00.897 [2024-10-09 11:18:20.812916] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:00.897 [2024-10-09 11:18:20.812923] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:00.897 [2024-10-09 11:18:20.816473] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:00.897 [2024-10-09 11:18:20.825699] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:00.897 [2024-10-09 11:18:20.826359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.897 [2024-10-09 11:18:20.826399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:00.897 [2024-10-09 11:18:20.826415] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:00.897 [2024-10-09 11:18:20.826663] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:00.897 [2024-10-09 11:18:20.826888] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:00.897 [2024-10-09 11:18:20.826898] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:00.897 [2024-10-09 11:18:20.826906] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:00.897 [2024-10-09 11:18:20.830462] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:00.897 [2024-10-09 11:18:20.839523] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:00.897 [2024-10-09 11:18:20.840065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.897 [2024-10-09 11:18:20.840085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:00.897 [2024-10-09 11:18:20.840093] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:00.897 [2024-10-09 11:18:20.840313] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:00.897 [2024-10-09 11:18:20.840540] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:00.897 [2024-10-09 11:18:20.840550] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:00.897 [2024-10-09 11:18:20.840558] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:00.897 [2024-10-09 11:18:20.844108] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:00.897 [2024-10-09 11:18:20.853323] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:00.897 [2024-10-09 11:18:20.853859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.897 [2024-10-09 11:18:20.853876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:00.897 [2024-10-09 11:18:20.853884] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:00.897 [2024-10-09 11:18:20.854103] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:00.897 [2024-10-09 11:18:20.854322] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:00.897 [2024-10-09 11:18:20.854331] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:00.897 [2024-10-09 11:18:20.854338] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:00.897 [2024-10-09 11:18:20.857889] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:00.897 [2024-10-09 11:18:20.867134] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:00.897 [2024-10-09 11:18:20.867681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.897 [2024-10-09 11:18:20.867699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:00.897 [2024-10-09 11:18:20.867707] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:00.897 [2024-10-09 11:18:20.867927] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:00.897 [2024-10-09 11:18:20.868146] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:00.897 [2024-10-09 11:18:20.868155] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:00.897 [2024-10-09 11:18:20.868166] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:00.897 [2024-10-09 11:18:20.871721] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:00.897 [2024-10-09 11:18:20.881045] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:00.897 [2024-10-09 11:18:20.881702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.897 [2024-10-09 11:18:20.881741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:00.897 [2024-10-09 11:18:20.881753] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:00.897 [2024-10-09 11:18:20.881992] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:00.897 [2024-10-09 11:18:20.882215] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:00.897 [2024-10-09 11:18:20.882225] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:00.897 [2024-10-09 11:18:20.882232] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:00.898 [2024-10-09 11:18:20.885793] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:00.898 [2024-10-09 11:18:20.895013] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:00.898 [2024-10-09 11:18:20.895616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.898 [2024-10-09 11:18:20.895656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:00.898 [2024-10-09 11:18:20.895669] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:00.898 [2024-10-09 11:18:20.895911] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:00.898 [2024-10-09 11:18:20.896135] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:00.898 [2024-10-09 11:18:20.896145] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:00.898 [2024-10-09 11:18:20.896153] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:01.159 [2024-10-09 11:18:20.899710] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:01.159 [2024-10-09 11:18:20.908945] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:01.159 [2024-10-09 11:18:20.909517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.159 [2024-10-09 11:18:20.909543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:01.159 [2024-10-09 11:18:20.909551] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:01.159 [2024-10-09 11:18:20.909776] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:01.159 [2024-10-09 11:18:20.909997] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:01.159 [2024-10-09 11:18:20.910006] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:01.159 [2024-10-09 11:18:20.910013] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:01.159 [2024-10-09 11:18:20.913567] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:01.159 [2024-10-09 11:18:20.922795] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:01.159 [2024-10-09 11:18:20.923256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.159 [2024-10-09 11:18:20.923273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:01.159 [2024-10-09 11:18:20.923281] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:01.159 [2024-10-09 11:18:20.923508] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:01.159 [2024-10-09 11:18:20.923729] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:01.159 [2024-10-09 11:18:20.923738] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:01.159 [2024-10-09 11:18:20.923746] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:01.159 [2024-10-09 11:18:20.927287] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:01.159 [2024-10-09 11:18:20.936710] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:01.159 [2024-10-09 11:18:20.937363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:01.159 [2024-10-09 11:18:20.937402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420
00:38:01.159 [2024-10-09 11:18:20.937413] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set
00:38:01.159 [2024-10-09 11:18:20.937661] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor
00:38:01.159 [2024-10-09 11:18:20.937886] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:01.159 [2024-10-09 11:18:20.937895] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:01.159 [2024-10-09 11:18:20.937903] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:01.159 [2024-10-09 11:18:20.941458] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:01.159 [2024-10-09 11:18:20.950679] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:01.159 [2024-10-09 11:18:20.951260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:01.159 [2024-10-09 11:18:20.951279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420
00:38:01.159 [2024-10-09 11:18:20.951288] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set
00:38:01.159 [2024-10-09 11:18:20.951515] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor
00:38:01.159 [2024-10-09 11:18:20.951736] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:01.159 [2024-10-09 11:18:20.951745] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:01.159 [2024-10-09 11:18:20.951752] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:01.159 [2024-10-09 11:18:20.955300] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:01.159 [2024-10-09 11:18:20.964508] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:01.159 [2024-10-09 11:18:20.965067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:01.159 [2024-10-09 11:18:20.965084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420
00:38:01.159 [2024-10-09 11:18:20.965092] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set
00:38:01.159 [2024-10-09 11:18:20.965316] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor
00:38:01.159 [2024-10-09 11:18:20.965543] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:01.159 [2024-10-09 11:18:20.965552] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:01.159 [2024-10-09 11:18:20.965560] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:01.159 [2024-10-09 11:18:20.969105] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:01.159 [2024-10-09 11:18:20.978322] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:01.159 [2024-10-09 11:18:20.978857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:01.159 [2024-10-09 11:18:20.978874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420
00:38:01.159 [2024-10-09 11:18:20.978882] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set
00:38:01.159 [2024-10-09 11:18:20.979102] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor
00:38:01.159 [2024-10-09 11:18:20.979321] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:01.160 [2024-10-09 11:18:20.979330] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:01.160 [2024-10-09 11:18:20.979337] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:01.160 [2024-10-09 11:18:20.982891] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:01.160 [2024-10-09 11:18:20.992306] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:01.160 [2024-10-09 11:18:20.992844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:01.160 [2024-10-09 11:18:20.992860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420
00:38:01.160 [2024-10-09 11:18:20.992868] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set
00:38:01.160 [2024-10-09 11:18:20.993087] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor
00:38:01.160 [2024-10-09 11:18:20.993306] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:01.160 [2024-10-09 11:18:20.993315] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:01.160 [2024-10-09 11:18:20.993322] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:01.160 [2024-10-09 11:18:20.996878] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:01.160 [2024-10-09 11:18:21.006091] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:01.160 [2024-10-09 11:18:21.006699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:01.160 [2024-10-09 11:18:21.006739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420
00:38:01.160 [2024-10-09 11:18:21.006750] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set
00:38:01.160 [2024-10-09 11:18:21.006989] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor
00:38:01.160 [2024-10-09 11:18:21.007213] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:01.160 [2024-10-09 11:18:21.007223] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:01.160 [2024-10-09 11:18:21.007230] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:01.160 [2024-10-09 11:18:21.010798] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:01.160 [2024-10-09 11:18:21.020017] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:01.160 [2024-10-09 11:18:21.020729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:01.160 [2024-10-09 11:18:21.020768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420
00:38:01.160 [2024-10-09 11:18:21.020781] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set
00:38:01.160 [2024-10-09 11:18:21.021021] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor
00:38:01.160 [2024-10-09 11:18:21.021246] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:01.160 [2024-10-09 11:18:21.021255] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:01.160 [2024-10-09 11:18:21.021263] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:01.160 [2024-10-09 11:18:21.024823] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:01.160 [2024-10-09 11:18:21.033975] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:01.160 [2024-10-09 11:18:21.034662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:01.160 [2024-10-09 11:18:21.034701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420
00:38:01.160 [2024-10-09 11:18:21.034713] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set
00:38:01.160 [2024-10-09 11:18:21.034951] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor
00:38:01.160 [2024-10-09 11:18:21.035176] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:01.160 [2024-10-09 11:18:21.035186] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:01.160 [2024-10-09 11:18:21.035194] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:01.160 [2024-10-09 11:18:21.038749] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:01.160 [2024-10-09 11:18:21.047958] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:01.160 [2024-10-09 11:18:21.048573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:01.160 [2024-10-09 11:18:21.048612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420
00:38:01.160 [2024-10-09 11:18:21.048625] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set
00:38:01.160 [2024-10-09 11:18:21.048866] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor
00:38:01.160 [2024-10-09 11:18:21.049090] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:01.160 [2024-10-09 11:18:21.049100] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:01.160 [2024-10-09 11:18:21.049108] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:01.160 [2024-10-09 11:18:21.052669] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:01.160 [2024-10-09 11:18:21.061888] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:01.160 [2024-10-09 11:18:21.062512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:01.160 [2024-10-09 11:18:21.062556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420
00:38:01.160 [2024-10-09 11:18:21.062567] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set
00:38:01.160 [2024-10-09 11:18:21.062805] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor
00:38:01.160 [2024-10-09 11:18:21.063029] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:01.160 [2024-10-09 11:18:21.063039] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:01.160 [2024-10-09 11:18:21.063047] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:01.160 [2024-10-09 11:18:21.066610] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:01.160 [2024-10-09 11:18:21.075859] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:01.160 [2024-10-09 11:18:21.076544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:01.160 [2024-10-09 11:18:21.076584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420
00:38:01.160 [2024-10-09 11:18:21.076595] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set
00:38:01.160 [2024-10-09 11:18:21.076834] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor
00:38:01.160 [2024-10-09 11:18:21.077057] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:01.160 [2024-10-09 11:18:21.077067] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:01.160 [2024-10-09 11:18:21.077075] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:01.160 [2024-10-09 11:18:21.080638] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:01.160 [2024-10-09 11:18:21.089645] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:01.160 [2024-10-09 11:18:21.090279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:01.160 [2024-10-09 11:18:21.090318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420
00:38:01.160 [2024-10-09 11:18:21.090329] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set
00:38:01.160 [2024-10-09 11:18:21.090579] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor
00:38:01.160 [2024-10-09 11:18:21.090805] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:01.160 [2024-10-09 11:18:21.090814] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:01.160 [2024-10-09 11:18:21.090822] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:01.160 [2024-10-09 11:18:21.094377] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:01.160 [2024-10-09 11:18:21.103595] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:01.160 [2024-10-09 11:18:21.104229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:01.160 [2024-10-09 11:18:21.104267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420
00:38:01.160 [2024-10-09 11:18:21.104278] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set
00:38:01.160 [2024-10-09 11:18:21.104528] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor
00:38:01.160 [2024-10-09 11:18:21.104757] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:01.160 [2024-10-09 11:18:21.104767] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:01.160 [2024-10-09 11:18:21.104775] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:01.160 [2024-10-09 11:18:21.108338] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:01.160 7354.00 IOPS, 28.73 MiB/s [2024-10-09T09:18:21.162Z] [2024-10-09 11:18:21.117546] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:01.160 [2024-10-09 11:18:21.118180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:01.160 [2024-10-09 11:18:21.118218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420
00:38:01.160 [2024-10-09 11:18:21.118229] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set
00:38:01.160 [2024-10-09 11:18:21.118479] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor
00:38:01.160 [2024-10-09 11:18:21.118704] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:01.160 [2024-10-09 11:18:21.118713] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:01.160 [2024-10-09 11:18:21.118722] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:01.160 [2024-10-09 11:18:21.122284] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
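The throughput sample interleaved above (7354.00 IOPS, 28.73 MiB/s) is the periodic bdevperf-style progress line, and it is consistent with a 4 KiB I/O size: 28.73 MiB/s divided by 7354 IOPS comes out to roughly 4096 bytes per I/O. A quick sanity check with the figures hard-coded from the log (the block size itself is an inference, not stated in this section):

/* iops_check.c - back out the per-I/O size implied by the sample above. */
#include <stdio.h>

int main(void)
{
    double iops = 7354.00;       /* from the log sample */
    double mib_per_s = 28.73;    /* from the log sample */
    double bytes_per_io = mib_per_s * 1024 * 1024 / iops;
    printf("%.0f bytes per I/O (~4 KiB)\n", bytes_per_io); /* prints 4096 */
    return 0;
}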
00:38:01.160 [2024-10-09 11:18:21.131506] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:01.160 [2024-10-09 11:18:21.132139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.160 [2024-10-09 11:18:21.132178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:01.160 [2024-10-09 11:18:21.132190] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:01.161 [2024-10-09 11:18:21.132429] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:01.161 [2024-10-09 11:18:21.132664] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:01.161 [2024-10-09 11:18:21.132675] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:01.161 [2024-10-09 11:18:21.132683] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:01.161 [2024-10-09 11:18:21.136240] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:01.161 [2024-10-09 11:18:21.145448] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:01.161 [2024-10-09 11:18:21.146119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.161 [2024-10-09 11:18:21.146159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:01.161 [2024-10-09 11:18:21.146170] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:01.161 [2024-10-09 11:18:21.146410] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:01.161 [2024-10-09 11:18:21.146645] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:01.161 [2024-10-09 11:18:21.146655] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:01.161 [2024-10-09 11:18:21.146663] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:01.161 [2024-10-09 11:18:21.150215] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:01.161 [2024-10-09 11:18:21.159233] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:01.422 [2024-10-09 11:18:21.159813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.422 [2024-10-09 11:18:21.159834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:01.422 [2024-10-09 11:18:21.159842] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:01.422 [2024-10-09 11:18:21.160062] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:01.422 [2024-10-09 11:18:21.160282] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:01.422 [2024-10-09 11:18:21.160291] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:01.422 [2024-10-09 11:18:21.160298] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:01.422 [2024-10-09 11:18:21.163852] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:01.422 [2024-10-09 11:18:21.173069] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:01.422 [2024-10-09 11:18:21.173611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.422 [2024-10-09 11:18:21.173629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:01.422 [2024-10-09 11:18:21.173637] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:01.422 [2024-10-09 11:18:21.173857] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:01.422 [2024-10-09 11:18:21.174075] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:01.422 [2024-10-09 11:18:21.174085] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:01.422 [2024-10-09 11:18:21.174092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:01.422 [2024-10-09 11:18:21.177636] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:01.422 [2024-10-09 11:18:21.186846] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:01.422 [2024-10-09 11:18:21.187407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.422 [2024-10-09 11:18:21.187423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:01.422 [2024-10-09 11:18:21.187431] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:01.422 [2024-10-09 11:18:21.187657] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:01.422 [2024-10-09 11:18:21.187877] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:01.422 [2024-10-09 11:18:21.187885] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:01.422 [2024-10-09 11:18:21.187892] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:01.422 [2024-10-09 11:18:21.191432] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:01.422 [2024-10-09 11:18:21.200638] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:01.422 [2024-10-09 11:18:21.201270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.422 [2024-10-09 11:18:21.201309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:01.422 [2024-10-09 11:18:21.201325] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:01.422 [2024-10-09 11:18:21.201574] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:01.422 [2024-10-09 11:18:21.201799] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:01.422 [2024-10-09 11:18:21.201809] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:01.422 [2024-10-09 11:18:21.201816] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:01.422 [2024-10-09 11:18:21.205368] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:01.422 [2024-10-09 11:18:21.214590] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:01.422 [2024-10-09 11:18:21.215247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.422 [2024-10-09 11:18:21.215287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:01.422 [2024-10-09 11:18:21.215298] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:01.422 [2024-10-09 11:18:21.215546] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:01.422 [2024-10-09 11:18:21.215771] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:01.422 [2024-10-09 11:18:21.215781] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:01.422 [2024-10-09 11:18:21.215789] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:01.422 [2024-10-09 11:18:21.219343] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:01.422 [2024-10-09 11:18:21.228566] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:01.422 [2024-10-09 11:18:21.229199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.422 [2024-10-09 11:18:21.229238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:01.422 [2024-10-09 11:18:21.229250] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:01.422 [2024-10-09 11:18:21.229500] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:01.422 [2024-10-09 11:18:21.229725] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:01.422 [2024-10-09 11:18:21.229735] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:01.422 [2024-10-09 11:18:21.229742] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:01.422 [2024-10-09 11:18:21.233300] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:01.422 [2024-10-09 11:18:21.242522] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:01.422 [2024-10-09 11:18:21.243176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.422 [2024-10-09 11:18:21.243215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:01.422 [2024-10-09 11:18:21.243228] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:01.423 [2024-10-09 11:18:21.243479] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:01.423 [2024-10-09 11:18:21.243704] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:01.423 [2024-10-09 11:18:21.243718] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:01.423 [2024-10-09 11:18:21.243726] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:01.423 [2024-10-09 11:18:21.247279] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:01.423 [2024-10-09 11:18:21.256495] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:01.423 [2024-10-09 11:18:21.257182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.423 [2024-10-09 11:18:21.257220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:01.423 [2024-10-09 11:18:21.257232] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:01.423 [2024-10-09 11:18:21.257481] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:01.423 [2024-10-09 11:18:21.257707] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:01.423 [2024-10-09 11:18:21.257716] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:01.423 [2024-10-09 11:18:21.257724] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:01.423 [2024-10-09 11:18:21.261275] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:01.423 [2024-10-09 11:18:21.270283] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:01.423 [2024-10-09 11:18:21.270704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.423 [2024-10-09 11:18:21.270725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:01.423 [2024-10-09 11:18:21.270734] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:01.423 [2024-10-09 11:18:21.270955] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:01.423 [2024-10-09 11:18:21.271175] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:01.423 [2024-10-09 11:18:21.271185] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:01.423 [2024-10-09 11:18:21.271192] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:01.423 [2024-10-09 11:18:21.274747] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:01.423 [2024-10-09 11:18:21.284200] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:01.423 [2024-10-09 11:18:21.284835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.423 [2024-10-09 11:18:21.284874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:01.423 [2024-10-09 11:18:21.284885] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:01.423 [2024-10-09 11:18:21.285124] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:01.423 [2024-10-09 11:18:21.285349] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:01.423 [2024-10-09 11:18:21.285358] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:01.423 [2024-10-09 11:18:21.285366] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:01.423 [2024-10-09 11:18:21.288930] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:01.423 [2024-10-09 11:18:21.298140] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:01.423 [2024-10-09 11:18:21.298701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.423 [2024-10-09 11:18:21.298721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:01.423 [2024-10-09 11:18:21.298730] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:01.423 [2024-10-09 11:18:21.298950] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:01.423 [2024-10-09 11:18:21.299169] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:01.423 [2024-10-09 11:18:21.299178] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:01.423 [2024-10-09 11:18:21.299185] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:01.423 [2024-10-09 11:18:21.302736] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:01.423 [2024-10-09 11:18:21.311947] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:01.423 [2024-10-09 11:18:21.312511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.423 [2024-10-09 11:18:21.312529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:01.423 [2024-10-09 11:18:21.312537] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:01.423 [2024-10-09 11:18:21.312756] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:01.423 [2024-10-09 11:18:21.312976] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:01.423 [2024-10-09 11:18:21.312984] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:01.423 [2024-10-09 11:18:21.312992] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:01.423 [2024-10-09 11:18:21.316542] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:01.423 [2024-10-09 11:18:21.325762] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:01.423 [2024-10-09 11:18:21.326422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.423 [2024-10-09 11:18:21.326461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:01.423 [2024-10-09 11:18:21.326483] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:01.423 [2024-10-09 11:18:21.326722] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:01.423 [2024-10-09 11:18:21.326946] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:01.423 [2024-10-09 11:18:21.326956] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:01.423 [2024-10-09 11:18:21.326963] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:01.423 [2024-10-09 11:18:21.330518] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:01.423 [2024-10-09 11:18:21.339733] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:01.423 [2024-10-09 11:18:21.340402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.423 [2024-10-09 11:18:21.340440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:01.423 [2024-10-09 11:18:21.340452] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:01.423 [2024-10-09 11:18:21.340710] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:01.423 [2024-10-09 11:18:21.340934] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:01.423 [2024-10-09 11:18:21.340944] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:01.423 [2024-10-09 11:18:21.340952] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:01.423 [2024-10-09 11:18:21.344509] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:01.423 [2024-10-09 11:18:21.353530] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:01.423 [2024-10-09 11:18:21.354203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.423 [2024-10-09 11:18:21.354242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:01.423 [2024-10-09 11:18:21.354253] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:01.423 [2024-10-09 11:18:21.354501] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:01.423 [2024-10-09 11:18:21.354726] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:01.423 [2024-10-09 11:18:21.354736] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:01.423 [2024-10-09 11:18:21.354744] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:01.423 [2024-10-09 11:18:21.358290] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:01.423 [2024-10-09 11:18:21.367517] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:01.423 [2024-10-09 11:18:21.368150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.423 [2024-10-09 11:18:21.368188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:01.423 [2024-10-09 11:18:21.368200] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:01.423 [2024-10-09 11:18:21.368438] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:01.423 [2024-10-09 11:18:21.368673] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:01.423 [2024-10-09 11:18:21.368685] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:01.423 [2024-10-09 11:18:21.368692] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:01.423 [2024-10-09 11:18:21.372250] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:01.423 [2024-10-09 11:18:21.381458] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:01.423 [2024-10-09 11:18:21.382130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.423 [2024-10-09 11:18:21.382168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:01.423 [2024-10-09 11:18:21.382179] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:01.423 [2024-10-09 11:18:21.382418] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:01.423 [2024-10-09 11:18:21.382652] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:01.423 [2024-10-09 11:18:21.382663] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:01.423 [2024-10-09 11:18:21.382675] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:01.423 [2024-10-09 11:18:21.386221] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:01.423 [2024-10-09 11:18:21.395434] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:01.423 [2024-10-09 11:18:21.396088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.423 [2024-10-09 11:18:21.396127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:01.424 [2024-10-09 11:18:21.396139] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:01.424 [2024-10-09 11:18:21.396378] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:01.424 [2024-10-09 11:18:21.396612] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:01.424 [2024-10-09 11:18:21.396623] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:01.424 [2024-10-09 11:18:21.396631] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:01.424 [2024-10-09 11:18:21.400183] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:01.424 [2024-10-09 11:18:21.409398] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:01.424 [2024-10-09 11:18:21.409976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.424 [2024-10-09 11:18:21.409997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:01.424 [2024-10-09 11:18:21.410005] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:01.424 [2024-10-09 11:18:21.410225] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:01.424 [2024-10-09 11:18:21.410444] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:01.424 [2024-10-09 11:18:21.410454] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:01.424 [2024-10-09 11:18:21.410461] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:01.424 [2024-10-09 11:18:21.414015] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:01.686 [2024-10-09 11:18:21.423240] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:01.686 [2024-10-09 11:18:21.423829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.686 [2024-10-09 11:18:21.423868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:01.686 [2024-10-09 11:18:21.423879] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:01.686 [2024-10-09 11:18:21.424117] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:01.686 [2024-10-09 11:18:21.424341] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:01.686 [2024-10-09 11:18:21.424350] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:01.686 [2024-10-09 11:18:21.424358] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:01.686 [2024-10-09 11:18:21.427923] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:01.686 [2024-10-09 11:18:21.437143] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:01.686 [2024-10-09 11:18:21.437675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.686 [2024-10-09 11:18:21.437699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:01.686 [2024-10-09 11:18:21.437708] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:01.686 [2024-10-09 11:18:21.437929] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:01.686 [2024-10-09 11:18:21.438148] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:01.686 [2024-10-09 11:18:21.438157] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:01.686 [2024-10-09 11:18:21.438164] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:01.686 [2024-10-09 11:18:21.441709] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:01.686 [2024-10-09 11:18:21.450906] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:01.686 [2024-10-09 11:18:21.451461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.686 [2024-10-09 11:18:21.451483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:01.686 [2024-10-09 11:18:21.451492] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:01.686 [2024-10-09 11:18:21.451711] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:01.686 [2024-10-09 11:18:21.451930] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:01.686 [2024-10-09 11:18:21.451940] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:01.686 [2024-10-09 11:18:21.451947] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:01.686 [2024-10-09 11:18:21.455488] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:01.686 [2024-10-09 11:18:21.464679] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:01.686 [2024-10-09 11:18:21.465357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.686 [2024-10-09 11:18:21.465397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:01.686 [2024-10-09 11:18:21.465408] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:01.686 [2024-10-09 11:18:21.465657] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:01.686 [2024-10-09 11:18:21.465882] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:01.686 [2024-10-09 11:18:21.465891] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:01.686 [2024-10-09 11:18:21.465899] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:01.686 [2024-10-09 11:18:21.469455] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:01.686 [2024-10-09 11:18:21.478450] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:01.686 [2024-10-09 11:18:21.479110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.686 [2024-10-09 11:18:21.479149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:01.686 [2024-10-09 11:18:21.479161] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:01.686 [2024-10-09 11:18:21.479399] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:01.686 [2024-10-09 11:18:21.479639] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:01.686 [2024-10-09 11:18:21.479650] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:01.686 [2024-10-09 11:18:21.479657] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:01.686 [2024-10-09 11:18:21.483207] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:01.686 [2024-10-09 11:18:21.492441] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:01.686 [2024-10-09 11:18:21.493080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.686 [2024-10-09 11:18:21.493119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:01.686 [2024-10-09 11:18:21.493130] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:01.686 [2024-10-09 11:18:21.493368] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:01.686 [2024-10-09 11:18:21.493603] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:01.686 [2024-10-09 11:18:21.493614] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:01.686 [2024-10-09 11:18:21.493622] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:01.686 [2024-10-09 11:18:21.497176] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:01.686 [2024-10-09 11:18:21.506402] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:01.686 [2024-10-09 11:18:21.507078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.686 [2024-10-09 11:18:21.507118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:01.686 [2024-10-09 11:18:21.507129] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:01.686 [2024-10-09 11:18:21.507368] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:01.686 [2024-10-09 11:18:21.507601] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:01.686 [2024-10-09 11:18:21.507613] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:01.686 [2024-10-09 11:18:21.507621] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:01.686 [2024-10-09 11:18:21.511176] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:01.686 [2024-10-09 11:18:21.520196] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:01.686 [2024-10-09 11:18:21.520780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.686 [2024-10-09 11:18:21.520801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:01.686 [2024-10-09 11:18:21.520809] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:01.686 [2024-10-09 11:18:21.521030] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:01.686 [2024-10-09 11:18:21.521250] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:01.686 [2024-10-09 11:18:21.521260] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:01.686 [2024-10-09 11:18:21.521267] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:01.686 [2024-10-09 11:18:21.524822] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:01.686 [2024-10-09 11:18:21.534042] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:01.686 [2024-10-09 11:18:21.534572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.686 [2024-10-09 11:18:21.534590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:01.686 [2024-10-09 11:18:21.534599] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:01.686 [2024-10-09 11:18:21.534819] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:01.686 [2024-10-09 11:18:21.535039] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:01.687 [2024-10-09 11:18:21.535048] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:01.687 [2024-10-09 11:18:21.535055] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:01.687 [2024-10-09 11:18:21.538604] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:01.687 [2024-10-09 11:18:21.547824] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:01.687 [2024-10-09 11:18:21.548353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.687 [2024-10-09 11:18:21.548371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:01.687 [2024-10-09 11:18:21.548378] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:01.687 [2024-10-09 11:18:21.548604] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:01.687 [2024-10-09 11:18:21.548823] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:01.687 [2024-10-09 11:18:21.548832] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:01.687 [2024-10-09 11:18:21.548839] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:01.687 [2024-10-09 11:18:21.552384] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:01.687 [2024-10-09 11:18:21.561602] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:01.687 [2024-10-09 11:18:21.562261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.687 [2024-10-09 11:18:21.562300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:01.687 [2024-10-09 11:18:21.562311] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:01.687 [2024-10-09 11:18:21.562559] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:01.687 [2024-10-09 11:18:21.562784] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:01.687 [2024-10-09 11:18:21.562794] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:01.687 [2024-10-09 11:18:21.562802] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:01.687 [2024-10-09 11:18:21.566350] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:01.687 [2024-10-09 11:18:21.575581] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:01.687 [2024-10-09 11:18:21.576254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:01.687 [2024-10-09 11:18:21.576294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420
00:38:01.687 [2024-10-09 11:18:21.576310] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set
00:38:01.687 [2024-10-09 11:18:21.576558] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor
00:38:01.687 [2024-10-09 11:18:21.576783] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:01.687 [2024-10-09 11:18:21.576794] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:01.687 [2024-10-09 11:18:21.576801] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:01.687 [2024-10-09 11:18:21.580363] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:01.687 [2024-10-09 11:18:21.589368] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:01.687 [2024-10-09 11:18:21.589948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:01.687 [2024-10-09 11:18:21.589968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420
00:38:01.687 [2024-10-09 11:18:21.589976] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set
00:38:01.687 [2024-10-09 11:18:21.590196] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor
00:38:01.687 [2024-10-09 11:18:21.590416] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:01.687 [2024-10-09 11:18:21.590426] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:01.687 [2024-10-09 11:18:21.590433] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:01.687 [2024-10-09 11:18:21.593981] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:01.687 [2024-10-09 11:18:21.603251] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:01.687 [2024-10-09 11:18:21.603917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:01.687 [2024-10-09 11:18:21.603957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420
00:38:01.687 [2024-10-09 11:18:21.603969] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set
00:38:01.687 [2024-10-09 11:18:21.604208] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor
00:38:01.687 [2024-10-09 11:18:21.604433] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:01.687 [2024-10-09 11:18:21.604442] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:01.687 [2024-10-09 11:18:21.604450] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:01.687 [2024-10-09 11:18:21.608023] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:01.687 [2024-10-09 11:18:21.617023] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:01.687 [2024-10-09 11:18:21.617697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:01.687 [2024-10-09 11:18:21.617736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420
00:38:01.687 [2024-10-09 11:18:21.617748] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set
00:38:01.687 [2024-10-09 11:18:21.617988] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor
00:38:01.687 [2024-10-09 11:18:21.618212] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:01.687 [2024-10-09 11:18:21.618227] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:01.687 [2024-10-09 11:18:21.618235] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:01.687 [2024-10-09 11:18:21.621809] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:01.687 [2024-10-09 11:18:21.630802] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:01.687 [2024-10-09 11:18:21.631444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:01.687 [2024-10-09 11:18:21.631490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420
00:38:01.687 [2024-10-09 11:18:21.631503] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set
00:38:01.687 [2024-10-09 11:18:21.631742] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor
00:38:01.687 [2024-10-09 11:18:21.631967] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:01.687 [2024-10-09 11:18:21.631977] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:01.687 [2024-10-09 11:18:21.631985] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:01.687 [2024-10-09 11:18:21.635545] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:01.687 [2024-10-09 11:18:21.644753] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:01.687 [2024-10-09 11:18:21.645309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:01.687 [2024-10-09 11:18:21.645346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420
00:38:01.687 [2024-10-09 11:18:21.645358] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set
00:38:01.687 [2024-10-09 11:18:21.645608] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor
00:38:01.687 [2024-10-09 11:18:21.645834] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:01.687 [2024-10-09 11:18:21.645844] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:01.687 [2024-10-09 11:18:21.645853] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:01.687 [2024-10-09 11:18:21.649402] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:01.687 [2024-10-09 11:18:21.658604] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:01.687 [2024-10-09 11:18:21.659132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:01.687 [2024-10-09 11:18:21.659151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420
00:38:01.687 [2024-10-09 11:18:21.659159] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set
00:38:01.687 [2024-10-09 11:18:21.659379] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor
00:38:01.687 [2024-10-09 11:18:21.659607] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:01.687 [2024-10-09 11:18:21.659616] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:01.687 [2024-10-09 11:18:21.659624] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:01.687 [2024-10-09 11:18:21.663165] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:01.687 [2024-10-09 11:18:21.672581] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:01.687 [2024-10-09 11:18:21.673036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:01.687 [2024-10-09 11:18:21.673053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420
00:38:01.687 [2024-10-09 11:18:21.673062] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set
00:38:01.687 [2024-10-09 11:18:21.673281] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor
00:38:01.687 [2024-10-09 11:18:21.673508] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:01.687 [2024-10-09 11:18:21.673517] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:01.687 [2024-10-09 11:18:21.673525] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:01.688 [2024-10-09 11:18:21.677067] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:01.949 [2024-10-09 11:18:21.686480] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:01.949 [2024-10-09 11:18:21.687034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:01.949 [2024-10-09 11:18:21.687051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420
00:38:01.949 [2024-10-09 11:18:21.687059] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set
00:38:01.949 [2024-10-09 11:18:21.687279] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor
00:38:01.949 [2024-10-09 11:18:21.687505] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:01.949 [2024-10-09 11:18:21.687515] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:01.949 [2024-10-09 11:18:21.687522] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:01.949 [2024-10-09 11:18:21.691063] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:01.949 [2024-10-09 11:18:21.700305] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:01.949 [2024-10-09 11:18:21.700974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:01.949 [2024-10-09 11:18:21.701014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420
00:38:01.949 [2024-10-09 11:18:21.701026] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set
00:38:01.949 [2024-10-09 11:18:21.701265] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor
00:38:01.949 [2024-10-09 11:18:21.701499] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:01.949 [2024-10-09 11:18:21.701510] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:01.949 [2024-10-09 11:18:21.701518] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:01.949 [2024-10-09 11:18:21.705068] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:01.949 [2024-10-09 11:18:21.714080] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:01.949 [2024-10-09 11:18:21.714747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:01.949 [2024-10-09 11:18:21.714786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420
00:38:01.949 [2024-10-09 11:18:21.714798] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set
00:38:01.949 [2024-10-09 11:18:21.715043] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor
00:38:01.949 [2024-10-09 11:18:21.715268] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:01.949 [2024-10-09 11:18:21.715278] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:01.949 [2024-10-09 11:18:21.715286] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:01.949 [2024-10-09 11:18:21.718850] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:01.949 [2024-10-09 11:18:21.727868] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:01.949 [2024-10-09 11:18:21.728549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:01.949 [2024-10-09 11:18:21.728588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420
00:38:01.949 [2024-10-09 11:18:21.728602] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set
00:38:01.949 [2024-10-09 11:18:21.728841] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor
00:38:01.949 [2024-10-09 11:18:21.729066] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:01.949 [2024-10-09 11:18:21.729077] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:01.949 [2024-10-09 11:18:21.729086] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:01.949 [2024-10-09 11:18:21.732642] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:01.949 [2024-10-09 11:18:21.741641] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:01.949 [2024-10-09 11:18:21.742207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:01.949 [2024-10-09 11:18:21.742227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420
00:38:01.949 [2024-10-09 11:18:21.742237] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set
00:38:01.949 [2024-10-09 11:18:21.742457] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor
00:38:01.949 [2024-10-09 11:18:21.742684] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:01.949 [2024-10-09 11:18:21.742693] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:01.949 [2024-10-09 11:18:21.742701] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:01.949 [2024-10-09 11:18:21.746243] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:01.949 [2024-10-09 11:18:21.755449] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:01.949 [2024-10-09 11:18:21.756074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:01.949 [2024-10-09 11:18:21.756114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420
00:38:01.949 [2024-10-09 11:18:21.756126] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set
00:38:01.949 [2024-10-09 11:18:21.756365] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor
00:38:01.949 [2024-10-09 11:18:21.756597] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:01.949 [2024-10-09 11:18:21.756608] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:01.949 [2024-10-09 11:18:21.756621] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:01.949 [2024-10-09 11:18:21.760178] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:01.949 [2024-10-09 11:18:21.769403] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:01.949 [2024-10-09 11:18:21.770040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:01.950 [2024-10-09 11:18:21.770080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420
00:38:01.950 [2024-10-09 11:18:21.770092] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set
00:38:01.950 [2024-10-09 11:18:21.770331] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor
00:38:01.950 [2024-10-09 11:18:21.770563] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:01.950 [2024-10-09 11:18:21.770574] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:01.950 [2024-10-09 11:18:21.770582] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:01.950 [2024-10-09 11:18:21.774133] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:01.950 [2024-10-09 11:18:21.783335] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:01.950 [2024-10-09 11:18:21.783874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:01.950 [2024-10-09 11:18:21.783894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420
00:38:01.950 [2024-10-09 11:18:21.783904] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set
00:38:01.950 [2024-10-09 11:18:21.784124] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor
00:38:01.950 [2024-10-09 11:18:21.784344] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:01.950 [2024-10-09 11:18:21.784354] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:01.950 [2024-10-09 11:18:21.784361] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:01.950 [2024-10-09 11:18:21.787923] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:01.950 [2024-10-09 11:18:21.797124] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:01.950 [2024-10-09 11:18:21.797744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:01.950 [2024-10-09 11:18:21.797784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420
00:38:01.950 [2024-10-09 11:18:21.797796] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set
00:38:01.950 [2024-10-09 11:18:21.798036] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor
00:38:01.950 [2024-10-09 11:18:21.798260] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:01.950 [2024-10-09 11:18:21.798270] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:01.950 [2024-10-09 11:18:21.798278] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:01.950 [2024-10-09 11:18:21.801843] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:01.950 [2024-10-09 11:18:21.811073] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:01.950 [2024-10-09 11:18:21.811768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:01.950 [2024-10-09 11:18:21.811812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420
00:38:01.950 [2024-10-09 11:18:21.811824] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set
00:38:01.950 [2024-10-09 11:18:21.812063] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor
00:38:01.950 [2024-10-09 11:18:21.812288] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:01.950 [2024-10-09 11:18:21.812297] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:01.950 [2024-10-09 11:18:21.812306] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:01.950 [2024-10-09 11:18:21.815870] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:01.950 [2024-10-09 11:18:21.824888] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:01.950 [2024-10-09 11:18:21.825523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:01.950 [2024-10-09 11:18:21.825563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420
00:38:01.950 [2024-10-09 11:18:21.825577] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set
00:38:01.950 [2024-10-09 11:18:21.825816] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor
00:38:01.950 [2024-10-09 11:18:21.826041] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:01.950 [2024-10-09 11:18:21.826051] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:01.950 [2024-10-09 11:18:21.826059] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:01.950 [2024-10-09 11:18:21.829617] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:01.950 [2024-10-09 11:18:21.838825] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:01.950 [2024-10-09 11:18:21.839403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:01.950 [2024-10-09 11:18:21.839422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420
00:38:01.950 [2024-10-09 11:18:21.839431] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set
00:38:01.950 [2024-10-09 11:18:21.839659] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor
00:38:01.950 [2024-10-09 11:18:21.839880] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:01.950 [2024-10-09 11:18:21.839889] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:01.950 [2024-10-09 11:18:21.839897] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:01.950 [2024-10-09 11:18:21.843443] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:01.950 [2024-10-09 11:18:21.852646] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:01.950 [2024-10-09 11:18:21.853182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:01.950 [2024-10-09 11:18:21.853200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420
00:38:01.950 [2024-10-09 11:18:21.853208] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set
00:38:01.950 [2024-10-09 11:18:21.853427] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor
00:38:01.950 [2024-10-09 11:18:21.853658] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:01.950 [2024-10-09 11:18:21.853667] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:01.950 [2024-10-09 11:18:21.853675] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:01.950 [2024-10-09 11:18:21.857213] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:01.950 [2024-10-09 11:18:21.866410] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:01.950 [2024-10-09 11:18:21.866973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:01.950 [2024-10-09 11:18:21.866989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420
00:38:01.950 [2024-10-09 11:18:21.866998] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set
00:38:01.950 [2024-10-09 11:18:21.867217] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor
00:38:01.950 [2024-10-09 11:18:21.867437] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:01.950 [2024-10-09 11:18:21.867446] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:01.950 [2024-10-09 11:18:21.867454] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:01.950 [2024-10-09 11:18:21.871004] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:01.950 [2024-10-09 11:18:21.880203] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:01.950 [2024-10-09 11:18:21.880821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:01.950 [2024-10-09 11:18:21.880861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420
00:38:01.950 [2024-10-09 11:18:21.880873] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set
00:38:01.950 [2024-10-09 11:18:21.881112] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor
00:38:01.950 [2024-10-09 11:18:21.881337] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:01.950 [2024-10-09 11:18:21.881347] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:01.950 [2024-10-09 11:18:21.881355] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:01.950 [2024-10-09 11:18:21.884912] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:01.950 [2024-10-09 11:18:21.894117] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:01.950 [2024-10-09 11:18:21.894779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:01.950 [2024-10-09 11:18:21.894819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420
00:38:01.950 [2024-10-09 11:18:21.894831] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set
00:38:01.950 [2024-10-09 11:18:21.895070] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor
00:38:01.950 [2024-10-09 11:18:21.895294] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:01.950 [2024-10-09 11:18:21.895305] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:01.950 [2024-10-09 11:18:21.895313] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:01.950 [2024-10-09 11:18:21.898874] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:01.950 [2024-10-09 11:18:21.908124] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:01.950 [2024-10-09 11:18:21.908765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:01.950 [2024-10-09 11:18:21.908804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420
00:38:01.950 [2024-10-09 11:18:21.908816] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set
00:38:01.950 [2024-10-09 11:18:21.909055] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor
00:38:01.950 [2024-10-09 11:18:21.909280] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:01.950 [2024-10-09 11:18:21.909289] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:01.950 [2024-10-09 11:18:21.909298] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:01.950 [2024-10-09 11:18:21.912936] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:01.951 [2024-10-09 11:18:21.921957] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:01.951 [2024-10-09 11:18:21.922566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:01.951 [2024-10-09 11:18:21.922605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420
00:38:01.951 [2024-10-09 11:18:21.922619] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set
00:38:01.951 [2024-10-09 11:18:21.922860] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor
00:38:01.951 [2024-10-09 11:18:21.923085] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:01.951 [2024-10-09 11:18:21.923094] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:01.951 [2024-10-09 11:18:21.923102] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:01.951 [2024-10-09 11:18:21.926665] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:01.951 [2024-10-09 11:18:21.935874] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:01.951 [2024-10-09 11:18:21.936562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:01.951 [2024-10-09 11:18:21.936602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420
00:38:01.951 [2024-10-09 11:18:21.936615] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set
00:38:01.951 [2024-10-09 11:18:21.936856] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor
00:38:01.951 [2024-10-09 11:18:21.937080] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:01.951 [2024-10-09 11:18:21.937090] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:01.951 [2024-10-09 11:18:21.937098] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:01.951 [2024-10-09 11:18:21.940663] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:02.212 [2024-10-09 11:18:21.949670] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:02.212 [2024-10-09 11:18:21.950294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:02.212 [2024-10-09 11:18:21.950333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420
00:38:02.212 [2024-10-09 11:18:21.950349] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set
00:38:02.212 [2024-10-09 11:18:21.950598] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor
00:38:02.212 [2024-10-09 11:18:21.950823] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:02.212 [2024-10-09 11:18:21.950833] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:02.212 [2024-10-09 11:18:21.950841] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:02.212 [2024-10-09 11:18:21.954385] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:02.212 [2024-10-09 11:18:21.963591] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:02.212 [2024-10-09 11:18:21.964269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:02.212 [2024-10-09 11:18:21.964308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420
00:38:02.212 [2024-10-09 11:18:21.964320] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set
00:38:02.212 [2024-10-09 11:18:21.964567] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor
00:38:02.212 [2024-10-09 11:18:21.964792] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:02.212 [2024-10-09 11:18:21.964803] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:02.212 [2024-10-09 11:18:21.964812] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:02.212 [2024-10-09 11:18:21.968370] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:02.212 [2024-10-09 11:18:21.977369] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:02.212 [2024-10-09 11:18:21.978009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:02.212 [2024-10-09 11:18:21.978048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420
00:38:02.212 [2024-10-09 11:18:21.978060] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set
00:38:02.212 [2024-10-09 11:18:21.978299] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor
00:38:02.212 [2024-10-09 11:18:21.978532] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:02.212 [2024-10-09 11:18:21.978543] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:02.212 [2024-10-09 11:18:21.978552] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:02.212 [2024-10-09 11:18:21.982104] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:02.212 [2024-10-09 11:18:21.991303] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:02.212 [2024-10-09 11:18:21.991949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:02.212 [2024-10-09 11:18:21.991988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420
00:38:02.212 [2024-10-09 11:18:21.992002] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set
00:38:02.212 [2024-10-09 11:18:21.992242] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor
00:38:02.212 [2024-10-09 11:18:21.992475] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:02.212 [2024-10-09 11:18:21.992490] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:02.212 [2024-10-09 11:18:21.992498] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:02.212 [2024-10-09 11:18:21.996055] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:02.212 [2024-10-09 11:18:22.005270] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:02.212 [2024-10-09 11:18:22.005931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:02.212 [2024-10-09 11:18:22.005970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420
00:38:02.212 [2024-10-09 11:18:22.005982] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set
00:38:02.212 [2024-10-09 11:18:22.006221] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor
00:38:02.212 [2024-10-09 11:18:22.006445] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:02.212 [2024-10-09 11:18:22.006455] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:02.212 [2024-10-09 11:18:22.006463] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:02.212 [2024-10-09 11:18:22.010037] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:02.212 [2024-10-09 11:18:22.019255] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:02.212 [2024-10-09 11:18:22.019851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:02.212 [2024-10-09 11:18:22.019890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420
00:38:02.212 [2024-10-09 11:18:22.019902] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set
00:38:02.212 [2024-10-09 11:18:22.020142] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor
00:38:02.212 [2024-10-09 11:18:22.020367] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:02.212 [2024-10-09 11:18:22.020377] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:02.212 [2024-10-09 11:18:22.020385] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:02.212 [2024-10-09 11:18:22.023947] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:02.212 [2024-10-09 11:18:22.033179] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:02.212 [2024-10-09 11:18:22.033720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:02.212 [2024-10-09 11:18:22.033740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420
00:38:02.212 [2024-10-09 11:18:22.033748] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set
00:38:02.212 [2024-10-09 11:18:22.033969] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor
00:38:02.212 [2024-10-09 11:18:22.034189] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:02.212 [2024-10-09 11:18:22.034198] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:02.212 [2024-10-09 11:18:22.034206] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:02.212 [2024-10-09 11:18:22.037753] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:02.212 [2024-10-09 11:18:22.046985] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:02.212 [2024-10-09 11:18:22.047526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:02.213 [2024-10-09 11:18:22.047544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420
00:38:02.213 [2024-10-09 11:18:22.047552] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set
00:38:02.213 [2024-10-09 11:18:22.047772] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor
00:38:02.213 [2024-10-09 11:18:22.047992] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:02.213 [2024-10-09 11:18:22.048001] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:02.213 [2024-10-09 11:18:22.048009] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:02.213 [2024-10-09 11:18:22.051684] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:02.213 [2024-10-09 11:18:22.060895] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:02.213 [2024-10-09 11:18:22.061571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:02.213 [2024-10-09 11:18:22.061610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420
00:38:02.213 [2024-10-09 11:18:22.061624] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set
00:38:02.213 [2024-10-09 11:18:22.061866] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor
00:38:02.213 [2024-10-09 11:18:22.062091] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:02.213 [2024-10-09 11:18:22.062101] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:02.213 [2024-10-09 11:18:22.062109] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:02.213 [2024-10-09 11:18:22.065673] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:02.213 [2024-10-09 11:18:22.074689] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:02.213 [2024-10-09 11:18:22.075317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:02.213 [2024-10-09 11:18:22.075357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420
00:38:02.213 [2024-10-09 11:18:22.075369] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set
00:38:02.213 [2024-10-09 11:18:22.075616] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor
00:38:02.213 [2024-10-09 11:18:22.075842] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:02.213 [2024-10-09 11:18:22.075852] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:02.213 [2024-10-09 11:18:22.075860] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:02.213 [2024-10-09 11:18:22.079410] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:02.213 [2024-10-09 11:18:22.088624] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:02.213 [2024-10-09 11:18:22.089326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:02.213 [2024-10-09 11:18:22.089366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420
00:38:02.213 [2024-10-09 11:18:22.089377] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set
00:38:02.213 [2024-10-09 11:18:22.089630] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor
00:38:02.213 [2024-10-09 11:18:22.089856] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:02.213 [2024-10-09 11:18:22.089865] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:02.213 [2024-10-09 11:18:22.089874] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:02.213 [2024-10-09 11:18:22.093424] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:02.213 [2024-10-09 11:18:22.102423] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:02.213 [2024-10-09 11:18:22.103053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:02.213 [2024-10-09 11:18:22.103093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420
00:38:02.213 [2024-10-09 11:18:22.103106] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set
00:38:02.213 [2024-10-09 11:18:22.103347] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor
00:38:02.213 [2024-10-09 11:18:22.103579] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:02.213 [2024-10-09 11:18:22.103590] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:02.213 [2024-10-09 11:18:22.103598] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:02.213 [2024-10-09 11:18:22.107160] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:02.213 5883.20 IOPS, 22.98 MiB/s [2024-10-09T09:18:22.215Z] [2024-10-09 11:18:22.116812] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:02.213 [2024-10-09 11:18:22.117524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:02.213 [2024-10-09 11:18:22.117563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420
00:38:02.213 [2024-10-09 11:18:22.117576] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set
00:38:02.213 [2024-10-09 11:18:22.117817] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor
00:38:02.213 [2024-10-09 11:18:22.118042] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:02.213 [2024-10-09 11:18:22.118051] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:02.213 [2024-10-09 11:18:22.118060] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:02.213 [2024-10-09 11:18:22.121632] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:02.213 [2024-10-09 11:18:22.130639] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:02.213 [2024-10-09 11:18:22.131270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:02.213 [2024-10-09 11:18:22.131309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420
00:38:02.213 [2024-10-09 11:18:22.131323] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set
00:38:02.213 [2024-10-09 11:18:22.131571] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor
00:38:02.213 [2024-10-09 11:18:22.131796] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:02.213 [2024-10-09 11:18:22.131807] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:02.213 [2024-10-09 11:18:22.131820] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:02.213 [2024-10-09 11:18:22.135369] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:02.213 [2024-10-09 11:18:22.144584] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:02.213 [2024-10-09 11:18:22.145279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:02.213 [2024-10-09 11:18:22.145319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420
00:38:02.213 [2024-10-09 11:18:22.145331] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set
00:38:02.213 [2024-10-09 11:18:22.145578] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor
00:38:02.213 [2024-10-09 11:18:22.145803] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:02.213 [2024-10-09 11:18:22.145813] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:02.213 [2024-10-09 11:18:22.145821] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:02.213 [2024-10-09 11:18:22.149371] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:02.213 [2024-10-09 11:18:22.158367] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:02.213 [2024-10-09 11:18:22.159055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:02.213 [2024-10-09 11:18:22.159095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420
00:38:02.213 [2024-10-09 11:18:22.159107] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set
00:38:02.213 [2024-10-09 11:18:22.159347] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor
00:38:02.213 [2024-10-09 11:18:22.159580] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:02.213 [2024-10-09 11:18:22.159591] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:02.213 [2024-10-09 11:18:22.159599] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:02.213 [2024-10-09 11:18:22.163152] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:02.213 [2024-10-09 11:18:22.172162] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:38:02.213 [2024-10-09 11:18:22.172743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:02.213 [2024-10-09 11:18:22.172782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420
00:38:02.213 [2024-10-09 11:18:22.172794] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set
00:38:02.213 [2024-10-09 11:18:22.173033] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor
00:38:02.213 [2024-10-09 11:18:22.173257] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:38:02.213 [2024-10-09 11:18:22.173268] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:38:02.213 [2024-10-09 11:18:22.173277] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:38:02.213 [2024-10-09 11:18:22.176830] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:02.213 [2024-10-09 11:18:22.186037] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:02.213 [2024-10-09 11:18:22.186763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.213 [2024-10-09 11:18:22.186802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:02.213 [2024-10-09 11:18:22.186815] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:02.214 [2024-10-09 11:18:22.187054] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:02.214 [2024-10-09 11:18:22.187279] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:02.214 [2024-10-09 11:18:22.187288] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:02.214 [2024-10-09 11:18:22.187296] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:02.214 [2024-10-09 11:18:22.190858] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:02.214 [2024-10-09 11:18:22.199853] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:02.214 [2024-10-09 11:18:22.200513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.214 [2024-10-09 11:18:22.200552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:02.214 [2024-10-09 11:18:22.200565] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:02.214 [2024-10-09 11:18:22.200804] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:02.214 [2024-10-09 11:18:22.201028] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:02.214 [2024-10-09 11:18:22.201038] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:02.214 [2024-10-09 11:18:22.201046] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:02.214 [2024-10-09 11:18:22.204612] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:02.475 [2024-10-09 11:18:22.213632] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:02.475 [2024-10-09 11:18:22.214284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.475 [2024-10-09 11:18:22.214324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:02.475 [2024-10-09 11:18:22.214336] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:02.475 [2024-10-09 11:18:22.214584] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:02.475 [2024-10-09 11:18:22.214809] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:02.475 [2024-10-09 11:18:22.214819] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:02.475 [2024-10-09 11:18:22.214828] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:02.475 [2024-10-09 11:18:22.218387] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:02.475 [2024-10-09 11:18:22.227410] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:02.475 [2024-10-09 11:18:22.227965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.475 [2024-10-09 11:18:22.227986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:02.475 [2024-10-09 11:18:22.227995] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:02.475 [2024-10-09 11:18:22.228214] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:02.475 [2024-10-09 11:18:22.228440] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:02.475 [2024-10-09 11:18:22.228449] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:02.475 [2024-10-09 11:18:22.228457] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:02.475 [2024-10-09 11:18:22.232018] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:02.475 [2024-10-09 11:18:22.241233] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:02.475 [2024-10-09 11:18:22.241874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.475 [2024-10-09 11:18:22.241913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:02.475 [2024-10-09 11:18:22.241925] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:02.475 [2024-10-09 11:18:22.242164] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:02.475 [2024-10-09 11:18:22.242388] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:02.475 [2024-10-09 11:18:22.242398] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:02.475 [2024-10-09 11:18:22.242407] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:02.475 [2024-10-09 11:18:22.245971] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:02.475 [2024-10-09 11:18:22.255186] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:02.475 [2024-10-09 11:18:22.255823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.475 [2024-10-09 11:18:22.255863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:02.475 [2024-10-09 11:18:22.255875] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:02.475 [2024-10-09 11:18:22.256114] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:02.475 [2024-10-09 11:18:22.256339] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:02.475 [2024-10-09 11:18:22.256349] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:02.475 [2024-10-09 11:18:22.256358] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:02.475 [2024-10-09 11:18:22.259921] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:02.475 [2024-10-09 11:18:22.269130] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:02.476 [2024-10-09 11:18:22.269838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.476 [2024-10-09 11:18:22.269878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:02.476 [2024-10-09 11:18:22.269891] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:02.476 [2024-10-09 11:18:22.270130] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:02.476 [2024-10-09 11:18:22.270355] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:02.476 [2024-10-09 11:18:22.270365] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:02.476 [2024-10-09 11:18:22.270374] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:02.476 [2024-10-09 11:18:22.273940] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:02.476 [2024-10-09 11:18:22.282943] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:02.476 [2024-10-09 11:18:22.283521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.476 [2024-10-09 11:18:22.283541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:02.476 [2024-10-09 11:18:22.283550] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:02.476 [2024-10-09 11:18:22.283771] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:02.476 [2024-10-09 11:18:22.283992] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:02.476 [2024-10-09 11:18:22.284001] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:02.476 [2024-10-09 11:18:22.284010] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:02.476 [2024-10-09 11:18:22.287556] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:02.476 [2024-10-09 11:18:22.296758] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:02.476 [2024-10-09 11:18:22.297412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.476 [2024-10-09 11:18:22.297451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:02.476 [2024-10-09 11:18:22.297472] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:02.476 [2024-10-09 11:18:22.297713] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:02.476 [2024-10-09 11:18:22.297938] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:02.476 [2024-10-09 11:18:22.297949] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:02.476 [2024-10-09 11:18:22.297957] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:02.476 [2024-10-09 11:18:22.301514] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:02.476 [2024-10-09 11:18:22.310741] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:02.476 [2024-10-09 11:18:22.311320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.476 [2024-10-09 11:18:22.311339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:02.476 [2024-10-09 11:18:22.311348] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:02.476 [2024-10-09 11:18:22.311574] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:02.476 [2024-10-09 11:18:22.311795] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:02.476 [2024-10-09 11:18:22.311805] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:02.476 [2024-10-09 11:18:22.311813] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:02.476 [2024-10-09 11:18:22.315360] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:02.476 [2024-10-09 11:18:22.324628] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:02.476 [2024-10-09 11:18:22.325181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.476 [2024-10-09 11:18:22.325199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:02.476 [2024-10-09 11:18:22.325214] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:02.476 [2024-10-09 11:18:22.325434] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:02.476 [2024-10-09 11:18:22.325660] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:02.476 [2024-10-09 11:18:22.325670] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:02.476 [2024-10-09 11:18:22.325678] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:02.476 [2024-10-09 11:18:22.329221] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:02.476 [2024-10-09 11:18:22.338619] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:02.476 [2024-10-09 11:18:22.339061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.476 [2024-10-09 11:18:22.339079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:02.476 [2024-10-09 11:18:22.339087] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:02.476 [2024-10-09 11:18:22.339308] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:02.476 [2024-10-09 11:18:22.339534] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:02.476 [2024-10-09 11:18:22.339543] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:02.476 [2024-10-09 11:18:22.339551] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:02.476 [2024-10-09 11:18:22.343100] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:02.476 [2024-10-09 11:18:22.352514] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:02.476 [2024-10-09 11:18:22.353132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.476 [2024-10-09 11:18:22.353171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:02.476 [2024-10-09 11:18:22.353183] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:02.476 [2024-10-09 11:18:22.353422] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:02.476 [2024-10-09 11:18:22.353655] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:02.476 [2024-10-09 11:18:22.353667] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:02.476 [2024-10-09 11:18:22.353675] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:02.476 [2024-10-09 11:18:22.357230] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:02.476 [2024-10-09 11:18:22.366439] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:02.476 [2024-10-09 11:18:22.367074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.476 [2024-10-09 11:18:22.367113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:02.476 [2024-10-09 11:18:22.367125] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:02.476 [2024-10-09 11:18:22.367364] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:02.476 [2024-10-09 11:18:22.367597] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:02.476 [2024-10-09 11:18:22.367613] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:02.476 [2024-10-09 11:18:22.367622] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:02.476 [2024-10-09 11:18:22.371206] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:02.476 [2024-10-09 11:18:22.380215] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:02.476 [2024-10-09 11:18:22.380745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.476 [2024-10-09 11:18:22.380765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:02.476 [2024-10-09 11:18:22.380775] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:02.476 [2024-10-09 11:18:22.380995] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:02.476 [2024-10-09 11:18:22.381216] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:02.476 [2024-10-09 11:18:22.381225] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:02.476 [2024-10-09 11:18:22.381233] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:02.476 [2024-10-09 11:18:22.384785] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:02.476 [2024-10-09 11:18:22.393992] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:02.476 [2024-10-09 11:18:22.394556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.476 [2024-10-09 11:18:22.394574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:02.476 [2024-10-09 11:18:22.394582] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:02.476 [2024-10-09 11:18:22.394801] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:02.476 [2024-10-09 11:18:22.395022] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:02.476 [2024-10-09 11:18:22.395031] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:02.476 [2024-10-09 11:18:22.395039] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:02.476 [2024-10-09 11:18:22.398583] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:02.476 [2024-10-09 11:18:22.407802] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:02.476 [2024-10-09 11:18:22.408494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.476 [2024-10-09 11:18:22.408533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:02.476 [2024-10-09 11:18:22.408547] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:02.476 [2024-10-09 11:18:22.408788] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:02.477 [2024-10-09 11:18:22.409012] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:02.477 [2024-10-09 11:18:22.409022] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:02.477 [2024-10-09 11:18:22.409030] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:02.477 [2024-10-09 11:18:22.412593] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:02.477 [2024-10-09 11:18:22.421617] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:02.477 [2024-10-09 11:18:22.422199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.477 [2024-10-09 11:18:22.422218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:02.477 [2024-10-09 11:18:22.422228] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:02.477 [2024-10-09 11:18:22.422448] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:02.477 [2024-10-09 11:18:22.422674] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:02.477 [2024-10-09 11:18:22.422684] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:02.477 [2024-10-09 11:18:22.422692] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:02.477 [2024-10-09 11:18:22.426233] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:02.477 [2024-10-09 11:18:22.435452] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:02.477 [2024-10-09 11:18:22.436066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.477 [2024-10-09 11:18:22.436105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:02.477 [2024-10-09 11:18:22.436118] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:02.477 [2024-10-09 11:18:22.436357] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:02.477 [2024-10-09 11:18:22.436589] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:02.477 [2024-10-09 11:18:22.436601] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:02.477 [2024-10-09 11:18:22.436609] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:02.477 [2024-10-09 11:18:22.440165] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:02.477 [2024-10-09 11:18:22.449378] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:02.477 [2024-10-09 11:18:22.449924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.477 [2024-10-09 11:18:22.449944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:02.477 [2024-10-09 11:18:22.449953] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:02.477 [2024-10-09 11:18:22.450173] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:02.477 [2024-10-09 11:18:22.450393] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:02.477 [2024-10-09 11:18:22.450402] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:02.477 [2024-10-09 11:18:22.450410] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:02.477 [2024-10-09 11:18:22.453953] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:02.477 [2024-10-09 11:18:22.463155] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:02.477 [2024-10-09 11:18:22.463717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.477 [2024-10-09 11:18:22.463735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:02.477 [2024-10-09 11:18:22.463744] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:02.477 [2024-10-09 11:18:22.463969] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:02.477 [2024-10-09 11:18:22.464189] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:02.477 [2024-10-09 11:18:22.464198] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:02.477 [2024-10-09 11:18:22.464206] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:02.477 [2024-10-09 11:18:22.467748] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:02.738 [2024-10-09 11:18:22.476957] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:02.738 [2024-10-09 11:18:22.477601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.738 [2024-10-09 11:18:22.477640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:02.738 [2024-10-09 11:18:22.477653] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:02.738 [2024-10-09 11:18:22.477892] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:02.738 [2024-10-09 11:18:22.478117] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:02.738 [2024-10-09 11:18:22.478127] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:02.738 [2024-10-09 11:18:22.478136] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:02.738 [2024-10-09 11:18:22.481701] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:02.738 [2024-10-09 11:18:22.490911] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:02.738 [2024-10-09 11:18:22.491452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.738 [2024-10-09 11:18:22.491478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:02.738 [2024-10-09 11:18:22.491487] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:02.739 [2024-10-09 11:18:22.491708] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:02.739 [2024-10-09 11:18:22.491929] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:02.739 [2024-10-09 11:18:22.491938] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:02.739 [2024-10-09 11:18:22.491946] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:02.739 [2024-10-09 11:18:22.495496] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:02.739 [2024-10-09 11:18:22.504702] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:02.739 [2024-10-09 11:18:22.505220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.739 [2024-10-09 11:18:22.505237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:02.739 [2024-10-09 11:18:22.505246] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:02.739 [2024-10-09 11:18:22.505472] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:02.739 [2024-10-09 11:18:22.505692] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:02.739 [2024-10-09 11:18:22.505701] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:02.739 [2024-10-09 11:18:22.505714] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:02.739 [2024-10-09 11:18:22.509270] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:02.739 [2024-10-09 11:18:22.518484] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:02.739 [2024-10-09 11:18:22.519045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.739 [2024-10-09 11:18:22.519064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:02.739 [2024-10-09 11:18:22.519072] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:02.739 [2024-10-09 11:18:22.519292] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:02.739 [2024-10-09 11:18:22.519526] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:02.739 [2024-10-09 11:18:22.519537] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:02.739 [2024-10-09 11:18:22.519544] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:02.739 [2024-10-09 11:18:22.523093] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:02.739 [2024-10-09 11:18:22.532334] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:02.739 [2024-10-09 11:18:22.532880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.739 [2024-10-09 11:18:22.532898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:02.739 [2024-10-09 11:18:22.532906] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:02.739 [2024-10-09 11:18:22.533126] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:02.739 [2024-10-09 11:18:22.533347] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:02.739 [2024-10-09 11:18:22.533356] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:02.739 [2024-10-09 11:18:22.533364] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:02.739 [2024-10-09 11:18:22.536910] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:02.739 [2024-10-09 11:18:22.546115] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:02.739 [2024-10-09 11:18:22.546768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.739 [2024-10-09 11:18:22.546808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:02.739 [2024-10-09 11:18:22.546820] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:02.739 [2024-10-09 11:18:22.547059] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:02.739 [2024-10-09 11:18:22.547284] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:02.739 [2024-10-09 11:18:22.547294] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:02.739 [2024-10-09 11:18:22.547302] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:02.739 [2024-10-09 11:18:22.550861] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:02.739 [2024-10-09 11:18:22.560069] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:02.739 [2024-10-09 11:18:22.560769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.739 [2024-10-09 11:18:22.560808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:02.739 [2024-10-09 11:18:22.560820] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:02.739 [2024-10-09 11:18:22.561059] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:02.739 [2024-10-09 11:18:22.561284] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:02.739 [2024-10-09 11:18:22.561294] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:02.739 [2024-10-09 11:18:22.561302] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:02.739 [2024-10-09 11:18:22.564862] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:02.739 [2024-10-09 11:18:22.573869] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:02.739 [2024-10-09 11:18:22.574547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.739 [2024-10-09 11:18:22.574586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:02.739 [2024-10-09 11:18:22.574600] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:02.739 [2024-10-09 11:18:22.574840] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:02.739 [2024-10-09 11:18:22.575065] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:02.739 [2024-10-09 11:18:22.575076] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:02.739 [2024-10-09 11:18:22.575084] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:02.739 [2024-10-09 11:18:22.578640] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:02.739 [2024-10-09 11:18:22.587844] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:02.739 [2024-10-09 11:18:22.588384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.739 [2024-10-09 11:18:22.588404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:02.739 [2024-10-09 11:18:22.588412] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:02.739 [2024-10-09 11:18:22.588639] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:02.739 [2024-10-09 11:18:22.588859] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:02.739 [2024-10-09 11:18:22.588868] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:02.739 [2024-10-09 11:18:22.588876] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:02.739 [2024-10-09 11:18:22.592417] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:02.739 [2024-10-09 11:18:22.601625] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:02.739 [2024-10-09 11:18:22.602315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.739 [2024-10-09 11:18:22.602354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:02.739 [2024-10-09 11:18:22.602366] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:02.739 [2024-10-09 11:18:22.602614] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:02.739 [2024-10-09 11:18:22.602844] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:02.739 [2024-10-09 11:18:22.602854] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:02.739 [2024-10-09 11:18:22.602862] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:02.739 [2024-10-09 11:18:22.606412] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:02.739 [2024-10-09 11:18:22.615430] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:02.739 [2024-10-09 11:18:22.616016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.739 [2024-10-09 11:18:22.616036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:02.739 [2024-10-09 11:18:22.616044] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:02.739 [2024-10-09 11:18:22.616265] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:02.739 [2024-10-09 11:18:22.616491] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:02.739 [2024-10-09 11:18:22.616501] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:02.739 [2024-10-09 11:18:22.616508] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:02.739 [2024-10-09 11:18:22.620073] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:02.739 [2024-10-09 11:18:22.629289] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:02.739 [2024-10-09 11:18:22.629904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.739 [2024-10-09 11:18:22.629922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:02.739 [2024-10-09 11:18:22.629930] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:02.739 [2024-10-09 11:18:22.630150] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:02.739 [2024-10-09 11:18:22.630370] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:02.739 [2024-10-09 11:18:22.630380] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:02.739 [2024-10-09 11:18:22.630387] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:02.739 [2024-10-09 11:18:22.633934] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:02.740 [2024-10-09 11:18:22.643146] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:02.740 [2024-10-09 11:18:22.644211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.740 [2024-10-09 11:18:22.644236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:02.740 [2024-10-09 11:18:22.644246] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:02.740 [2024-10-09 11:18:22.644480] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:02.740 [2024-10-09 11:18:22.644703] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:02.740 [2024-10-09 11:18:22.644713] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:02.740 [2024-10-09 11:18:22.644721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:02.740 [2024-10-09 11:18:22.648270] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:02.740 [2024-10-09 11:18:22.657078] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:02.740 [2024-10-09 11:18:22.657761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.740 [2024-10-09 11:18:22.657799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:02.740 [2024-10-09 11:18:22.657811] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:02.740 [2024-10-09 11:18:22.658050] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:02.740 [2024-10-09 11:18:22.658275] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:02.740 [2024-10-09 11:18:22.658285] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:02.740 [2024-10-09 11:18:22.658294] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:02.740 [2024-10-09 11:18:22.661855] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:02.740 [2024-10-09 11:18:22.670866] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:02.740 [2024-10-09 11:18:22.671505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.740 [2024-10-09 11:18:22.671544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:02.740 [2024-10-09 11:18:22.671558] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:02.740 [2024-10-09 11:18:22.671800] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:02.740 [2024-10-09 11:18:22.672025] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:02.740 [2024-10-09 11:18:22.672034] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:02.740 [2024-10-09 11:18:22.672043] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:02.740 [2024-10-09 11:18:22.675602] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:02.740 [2024-10-09 11:18:22.684813] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:02.740 [2024-10-09 11:18:22.685254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.740 [2024-10-09 11:18:22.685275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:02.740 [2024-10-09 11:18:22.685284] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:02.740 [2024-10-09 11:18:22.685516] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:02.740 [2024-10-09 11:18:22.685739] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:02.740 [2024-10-09 11:18:22.685749] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:02.740 [2024-10-09 11:18:22.685757] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:02.740 [2024-10-09 11:18:22.689300] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:02.740 [2024-10-09 11:18:22.698717] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:02.740 [2024-10-09 11:18:22.699279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.740 [2024-10-09 11:18:22.699298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:02.740 [2024-10-09 11:18:22.699311] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:02.740 [2024-10-09 11:18:22.699536] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:02.740 [2024-10-09 11:18:22.699757] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:02.740 [2024-10-09 11:18:22.699767] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:02.740 [2024-10-09 11:18:22.699775] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:02.740 [2024-10-09 11:18:22.703317] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:02.740 [2024-10-09 11:18:22.712536] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:02.740 [2024-10-09 11:18:22.713051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.740 [2024-10-09 11:18:22.713068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:02.740 [2024-10-09 11:18:22.713077] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:02.740 [2024-10-09 11:18:22.713296] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:02.740 [2024-10-09 11:18:22.713522] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:02.740 [2024-10-09 11:18:22.713532] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:02.740 [2024-10-09 11:18:22.713540] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:02.740 [2024-10-09 11:18:22.717075] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:02.740 [2024-10-09 11:18:22.726509] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:02.740 [2024-10-09 11:18:22.727064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.740 [2024-10-09 11:18:22.727081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:02.740 [2024-10-09 11:18:22.727090] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:02.740 [2024-10-09 11:18:22.727309] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:02.740 [2024-10-09 11:18:22.727535] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:02.740 [2024-10-09 11:18:22.727544] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:02.740 [2024-10-09 11:18:22.727552] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:02.740 [2024-10-09 11:18:22.731089] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
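errno = 111 is ECONNREFUSED on Linux: the bdevperf host keeps polling spdk_nvme_ctrlr_reconnect_poll_async, but every TCP connect() to 10.0.0.2:4420 is refused because the target process has been killed (see the bdevperf.sh "Killed" message just below) and not yet restarted. A minimal standalone bash probe (hypothetical helper name probe_tgt; not part of the test scripts) reproduces the same refuse-and-retry behavior:

    #!/usr/bin/env bash
    # Mimic the host's reconnect attempts: while nothing listens on
    # 10.0.0.2:4420, every TCP connect() is refused (errno 111, ECONNREFUSED),
    # which is exactly the posix_sock_create error repeated above.
    probe_tgt() {
        local ip=$1 port=$2
        # bash's /dev/tcp pseudo-device performs a real TCP connect(); the
        # subshell closes fd 3 again as soon as it exits.
        (exec 3<>"/dev/tcp/${ip}/${port}") 2>/dev/null
    }

    for attempt in 1 2 3 4 5; do
        if probe_tgt 10.0.0.2 4420; then
            echo "attempt ${attempt}: target is listening again"
            break
        fi
        echo "attempt ${attempt}: connection refused, retrying"
        sleep 0.014   # ~14 ms between retries, matching the cadence in the log
    done

The probe succeeds only once the target is brought back up; until then it fails on every attempt, just as the host's reset loop does.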
00:38:02.740 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2124911 Killed "${NVMF_APP[@]}" "$@" 00:38:03.002 11:18:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:38:03.002 11:18:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:38:03.002 11:18:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:38:03.002 [2024-10-09 11:18:22.740310] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.002 11:18:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:03.002 11:18:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:03.002 [2024-10-09 11:18:22.740944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.002 [2024-10-09 11:18:22.740984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:03.002 [2024-10-09 11:18:22.740996] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:03.002 [2024-10-09 11:18:22.741236] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:03.002 [2024-10-09 11:18:22.741460] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.002 [2024-10-09 11:18:22.741478] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.002 [2024-10-09 11:18:22.741487] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.002 [2024-10-09 11:18:22.745043] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:03.002 11:18:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # nvmfpid=2126616 00:38:03.002 11:18:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # waitforlisten 2126616 00:38:03.002 11:18:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:38:03.002 11:18:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 2126616 ']' 00:38:03.002 11:18:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:03.002 11:18:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:03.002 11:18:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:03.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:38:03.002 11:18:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:03.002 11:18:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:03.002 [2024-10-09 11:18:22.754254] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.002 [2024-10-09 11:18:22.754812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.002 [2024-10-09 11:18:22.754832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:03.002 [2024-10-09 11:18:22.754841] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:03.002 [2024-10-09 11:18:22.755061] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:03.002 [2024-10-09 11:18:22.755282] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.002 [2024-10-09 11:18:22.755291] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.002 [2024-10-09 11:18:22.755298] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.002 [2024-10-09 11:18:22.758843] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:03.002 [2024-10-09 11:18:22.768034] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.002 [2024-10-09 11:18:22.768765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.002 [2024-10-09 11:18:22.768805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:03.002 [2024-10-09 11:18:22.768817] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:03.002 [2024-10-09 11:18:22.769062] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:03.002 [2024-10-09 11:18:22.769293] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.002 [2024-10-09 11:18:22.769304] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.002 [2024-10-09 11:18:22.769313] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.002 [2024-10-09 11:18:22.772874] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
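The shell trace interleaved above is bdevperf.sh's tgt_init restarting the target: nvmfappstart launches nvmf_tgt inside the cvl_0_0_ns_spdk network namespace with core mask 0xE, records nvmfpid=2126616, and waitforlisten blocks until the app answers on /var/tmp/spdk.sock. A simplified sketch of that restart-and-wait pattern (the socket-existence poll is a stand-in for SPDK's real waitforlisten helper, and the netns wrapper is dropped for brevity):

    # Launch the target with the same flags as the trace, then wait for its
    # UNIX RPC socket to appear before driving any further reconnects.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!

    rpc_sock=/var/tmp/spdk.sock
    while ! [ -S "${rpc_sock}" ]; do      # RPC socket not created yet
        kill -0 "${nvmfpid}" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.1
    done
    echo "nvmf_tgt (pid ${nvmfpid}) is up and listening on ${rpc_sock}"

Once the target is listening on 10.0.0.2:4420 again, the host's next reconnect attempt succeeds and the reset loop above ends.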
00:38:03.002 [2024-10-09 11:18:22.781871] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.002 [2024-10-09 11:18:22.782409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.002 [2024-10-09 11:18:22.782428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:03.002 [2024-10-09 11:18:22.782438] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:03.002 [2024-10-09 11:18:22.782664] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:03.002 [2024-10-09 11:18:22.782886] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.002 [2024-10-09 11:18:22.782895] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.002 [2024-10-09 11:18:22.782903] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.002 [2024-10-09 11:18:22.786446] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:03.002 [2024-10-09 11:18:22.795658] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.002 [2024-10-09 11:18:22.796300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.002 [2024-10-09 11:18:22.796318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:03.003 [2024-10-09 11:18:22.796327] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:03.003 [2024-10-09 11:18:22.796552] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:03.003 [2024-10-09 11:18:22.796773] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.003 [2024-10-09 11:18:22.796782] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.003 [2024-10-09 11:18:22.796789] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.003 [2024-10-09 11:18:22.800336] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:03.003 [2024-10-09 11:18:22.800810] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 
00:38:03.003 [2024-10-09 11:18:22.800857] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:03.003 [2024-10-09 11:18:22.809547] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.003 [2024-10-09 11:18:22.810104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.003 [2024-10-09 11:18:22.810121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:03.003 [2024-10-09 11:18:22.810130] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:03.003 [2024-10-09 11:18:22.810349] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:03.003 [2024-10-09 11:18:22.810574] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.003 [2024-10-09 11:18:22.810589] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.003 [2024-10-09 11:18:22.810597] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.003 [2024-10-09 11:18:22.814137] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:03.003 [2024-10-09 11:18:22.823355] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.003 [2024-10-09 11:18:22.823843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.003 [2024-10-09 11:18:22.823883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:03.003 [2024-10-09 11:18:22.823896] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:03.003 [2024-10-09 11:18:22.824137] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:03.003 [2024-10-09 11:18:22.824362] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.003 [2024-10-09 11:18:22.824373] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.003 [2024-10-09 11:18:22.824381] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.003 [2024-10-09 11:18:22.827946] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:03.003 [2024-10-09 11:18:22.837164] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.003 [2024-10-09 11:18:22.837882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.003 [2024-10-09 11:18:22.837921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:03.003 [2024-10-09 11:18:22.837934] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:03.003 [2024-10-09 11:18:22.838175] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:03.003 [2024-10-09 11:18:22.838399] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.003 [2024-10-09 11:18:22.838409] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.003 [2024-10-09 11:18:22.838418] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.003 [2024-10-09 11:18:22.841976] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:03.003 [2024-10-09 11:18:22.850979] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.003 [2024-10-09 11:18:22.851577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.003 [2024-10-09 11:18:22.851616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:03.003 [2024-10-09 11:18:22.851629] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:03.003 [2024-10-09 11:18:22.851871] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:03.003 [2024-10-09 11:18:22.852096] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.003 [2024-10-09 11:18:22.852107] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.003 [2024-10-09 11:18:22.852115] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.003 [2024-10-09 11:18:22.855679] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:03.003 [2024-10-09 11:18:22.864892] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.003 [2024-10-09 11:18:22.865544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.003 [2024-10-09 11:18:22.865583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:03.003 [2024-10-09 11:18:22.865595] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:03.003 [2024-10-09 11:18:22.865834] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:03.003 [2024-10-09 11:18:22.866060] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.003 [2024-10-09 11:18:22.866070] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.003 [2024-10-09 11:18:22.866079] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.003 [2024-10-09 11:18:22.869643] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:03.003 [2024-10-09 11:18:22.878857] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.003 [2024-10-09 11:18:22.879400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.003 [2024-10-09 11:18:22.879420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:03.003 [2024-10-09 11:18:22.879429] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:03.003 [2024-10-09 11:18:22.879656] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:03.003 [2024-10-09 11:18:22.879878] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.003 [2024-10-09 11:18:22.879887] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.003 [2024-10-09 11:18:22.879895] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.003 [2024-10-09 11:18:22.883435] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:03.003 [2024-10-09 11:18:22.892637] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.003 [2024-10-09 11:18:22.893165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.003 [2024-10-09 11:18:22.893182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:03.003 [2024-10-09 11:18:22.893190] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:03.003 [2024-10-09 11:18:22.893410] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:03.003 [2024-10-09 11:18:22.893635] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.003 [2024-10-09 11:18:22.893645] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.003 [2024-10-09 11:18:22.893653] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.003 [2024-10-09 11:18:22.897193] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:03.003 [2024-10-09 11:18:22.906599] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.003 [2024-10-09 11:18:22.907163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.003 [2024-10-09 11:18:22.907180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:03.003 [2024-10-09 11:18:22.907189] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:03.003 [2024-10-09 11:18:22.907412] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:03.003 [2024-10-09 11:18:22.907639] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.003 [2024-10-09 11:18:22.907649] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.003 [2024-10-09 11:18:22.907656] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.003 [2024-10-09 11:18:22.911206] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:03.003 [2024-10-09 11:18:22.920415] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.003 [2024-10-09 11:18:22.920987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.003 [2024-10-09 11:18:22.921004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:03.003 [2024-10-09 11:18:22.921013] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:03.003 [2024-10-09 11:18:22.921234] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:03.003 [2024-10-09 11:18:22.921453] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.003 [2024-10-09 11:18:22.921462] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.003 [2024-10-09 11:18:22.921474] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.003 [2024-10-09 11:18:22.925014] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:03.003 [2024-10-09 11:18:22.934213] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.003 [2024-10-09 11:18:22.934848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.003 [2024-10-09 11:18:22.934887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:03.003 [2024-10-09 11:18:22.934899] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:03.003 [2024-10-09 11:18:22.935139] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:03.003 [2024-10-09 11:18:22.935364] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.003 [2024-10-09 11:18:22.935373] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.003 [2024-10-09 11:18:22.935381] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.004 [2024-10-09 11:18:22.938713] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:38:03.004 [2024-10-09 11:18:22.938944] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:03.004 [2024-10-09 11:18:22.948053] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.004 [2024-10-09 11:18:22.948757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.004 [2024-10-09 11:18:22.948796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:03.004 [2024-10-09 11:18:22.948810] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:03.004 [2024-10-09 11:18:22.949050] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:03.004 [2024-10-09 11:18:22.949275] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.004 [2024-10-09 11:18:22.949289] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.004 [2024-10-09 11:18:22.949298] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.004 [2024-10-09 11:18:22.952859] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:03.004 [2024-10-09 11:18:22.961859] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.004 [2024-10-09 11:18:22.962518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.004 [2024-10-09 11:18:22.962558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:03.004 [2024-10-09 11:18:22.962571] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:03.004 [2024-10-09 11:18:22.962810] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:03.004 [2024-10-09 11:18:22.963035] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.004 [2024-10-09 11:18:22.963045] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.004 [2024-10-09 11:18:22.963054] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.004 [2024-10-09 11:18:22.966615] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:03.004 [2024-10-09 11:18:22.975829] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.004 [2024-10-09 11:18:22.976491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.004 [2024-10-09 11:18:22.976530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:03.004 [2024-10-09 11:18:22.976543] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:03.004 [2024-10-09 11:18:22.976785] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:03.004 [2024-10-09 11:18:22.977009] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.004 [2024-10-09 11:18:22.977019] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.004 [2024-10-09 11:18:22.977027] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.004 [2024-10-09 11:18:22.980586] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:03.004 [2024-10-09 11:18:22.984650] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:38:03.004 [2024-10-09 11:18:22.989794] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.004 [2024-10-09 11:18:22.990502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.004 [2024-10-09 11:18:22.990542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:03.004 [2024-10-09 11:18:22.990553] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:03.004 [2024-10-09 11:18:22.990793] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:03.004 [2024-10-09 11:18:22.991018] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.004 [2024-10-09 11:18:22.991028] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.004 [2024-10-09 11:18:22.991036] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.004 [2024-10-09 11:18:22.994598] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:03.004 [2024-10-09 11:18:23.000323] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:03.004 [2024-10-09 11:18:23.000344] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:03.004 [2024-10-09 11:18:23.000350] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:03.004 [2024-10-09 11:18:23.000356] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:03.004 [2024-10-09 11:18:23.000361] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:38:03.004 [2024-10-09 11:18:23.001439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:38:03.004 [2024-10-09 11:18:23.001600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:38:03.004 [2024-10-09 11:18:23.001696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:03.265 [2024-10-09 11:18:23.003612] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.265 [2024-10-09 11:18:23.004101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.265 [2024-10-09 11:18:23.004121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:03.265 [2024-10-09 11:18:23.004131] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:03.265 [2024-10-09 11:18:23.004352] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:03.265 [2024-10-09 11:18:23.004578] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.265 [2024-10-09 11:18:23.004588] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.265 [2024-10-09 11:18:23.004595] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.265 [2024-10-09 11:18:23.008154] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:03.265 [2024-10-09 11:18:23.017415] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.265 [2024-10-09 11:18:23.018123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.265 [2024-10-09 11:18:23.018166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:03.265 [2024-10-09 11:18:23.018177] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:03.265 [2024-10-09 11:18:23.018418] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:03.265 [2024-10-09 11:18:23.018650] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.265 [2024-10-09 11:18:23.018661] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.265 [2024-10-09 11:18:23.018669] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.265 [2024-10-09 11:18:23.022240] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
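[editor's note] The three `Reactor started on core N` records line up with the `-m 0xE` mask passed to nvmfappstart earlier (and the `-c 0xE` in the DPDK EAL parameters): the mask is a per-core bitmap, 0xE = 0b1110, so cores 1, 2 and 3 get reactors and core 0 is left out, matching the `Total cores available: 3` notice. A quick check of that decoding:

    /* Decode an SPDK/DPDK core mask into core IDs; 0xE comes from the log. */
    #include <stdio.h>

    int main(void)
    {
        unsigned long mask = 0xE;     /* nvmfappstart -m 0xE / EAL -c 0xE */
        for (unsigned core = 0; core < 64; core++)
            if (mask & (1UL << core))
                printf("reactor on core %u\n", core);  /* prints 1, 2, 3 */
        return 0;
    }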
00:38:03.265 [2024-10-09 11:18:23.031239] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.265 [2024-10-09 11:18:23.031899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.265 [2024-10-09 11:18:23.031939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:03.265 [2024-10-09 11:18:23.031951] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:03.265 [2024-10-09 11:18:23.032191] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:03.265 [2024-10-09 11:18:23.032421] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.265 [2024-10-09 11:18:23.032431] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.265 [2024-10-09 11:18:23.032439] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.265 [2024-10-09 11:18:23.036005] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:03.265 [2024-10-09 11:18:23.045214] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.265 [2024-10-09 11:18:23.045881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.265 [2024-10-09 11:18:23.045921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:03.265 [2024-10-09 11:18:23.045933] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:03.265 [2024-10-09 11:18:23.046172] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:03.265 [2024-10-09 11:18:23.046396] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.265 [2024-10-09 11:18:23.046406] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.265 [2024-10-09 11:18:23.046415] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.266 [2024-10-09 11:18:23.049967] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:03.266 [2024-10-09 11:18:23.059176] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.266 [2024-10-09 11:18:23.059738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.266 [2024-10-09 11:18:23.059758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:03.266 [2024-10-09 11:18:23.059766] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:03.266 [2024-10-09 11:18:23.059985] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:03.266 [2024-10-09 11:18:23.060205] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.266 [2024-10-09 11:18:23.060214] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.266 [2024-10-09 11:18:23.060222] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.266 [2024-10-09 11:18:23.063765] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:03.266 [2024-10-09 11:18:23.072965] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.266 [2024-10-09 11:18:23.073520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.266 [2024-10-09 11:18:23.073559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:03.266 [2024-10-09 11:18:23.073571] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:03.266 [2024-10-09 11:18:23.073810] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:03.266 [2024-10-09 11:18:23.074035] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.266 [2024-10-09 11:18:23.074044] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.266 [2024-10-09 11:18:23.074052] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.266 [2024-10-09 11:18:23.077616] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:03.266 [2024-10-09 11:18:23.086817] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.266 [2024-10-09 11:18:23.087463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.266 [2024-10-09 11:18:23.087510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:03.266 [2024-10-09 11:18:23.087523] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:03.266 [2024-10-09 11:18:23.087763] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:03.266 [2024-10-09 11:18:23.087987] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.266 [2024-10-09 11:18:23.087997] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.266 [2024-10-09 11:18:23.088005] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.266 [2024-10-09 11:18:23.091559] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:03.266 [2024-10-09 11:18:23.100758] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.266 [2024-10-09 11:18:23.101423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.266 [2024-10-09 11:18:23.101462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:03.266 [2024-10-09 11:18:23.101481] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:03.266 [2024-10-09 11:18:23.101720] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:03.266 [2024-10-09 11:18:23.101943] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.266 [2024-10-09 11:18:23.101954] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.266 [2024-10-09 11:18:23.101961] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.266 [2024-10-09 11:18:23.105516] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:03.266 4902.67 IOPS, 19.15 MiB/s [2024-10-09T09:18:23.268Z] [2024-10-09 11:18:23.116188] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.266 [2024-10-09 11:18:23.116736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.266 [2024-10-09 11:18:23.116774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:03.266 [2024-10-09 11:18:23.116787] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:03.266 [2024-10-09 11:18:23.117028] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:03.266 [2024-10-09 11:18:23.117252] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.266 [2024-10-09 11:18:23.117262] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.266 [2024-10-09 11:18:23.117269] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.266 [2024-10-09 11:18:23.120842] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:03.266 [2024-10-09 11:18:23.130054] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.266 [2024-10-09 11:18:23.130509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.266 [2024-10-09 11:18:23.130536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:03.266 [2024-10-09 11:18:23.130549] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:03.266 [2024-10-09 11:18:23.130775] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:03.266 [2024-10-09 11:18:23.130996] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.266 [2024-10-09 11:18:23.131005] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.266 [2024-10-09 11:18:23.131012] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.266 [2024-10-09 11:18:23.134564] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
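[editor's note] The interleaved `4902.67 IOPS, 19.15 MiB/s` sample is bdevperf's periodic throughput line. The two numbers are consistent with a 4 KiB I/O size, since 4902.67 x 4096 B is about 20.08 MB/s, or 19.15 MiB/s; the block size itself is not shown in this excerpt, so that is an inference from the arithmetic, not a logged value.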
00:38:03.266 [2024-10-09 11:18:23.143979] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.266 [2024-10-09 11:18:23.144572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.266 [2024-10-09 11:18:23.144611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:03.266 [2024-10-09 11:18:23.144624] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:03.266 [2024-10-09 11:18:23.144867] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:03.266 [2024-10-09 11:18:23.145091] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.266 [2024-10-09 11:18:23.145102] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.266 [2024-10-09 11:18:23.145110] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.266 [2024-10-09 11:18:23.148668] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:03.266 [2024-10-09 11:18:23.157923] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.266 [2024-10-09 11:18:23.158473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.266 [2024-10-09 11:18:23.158513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:03.266 [2024-10-09 11:18:23.158525] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:03.266 [2024-10-09 11:18:23.158763] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:03.266 [2024-10-09 11:18:23.158987] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.266 [2024-10-09 11:18:23.158998] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.266 [2024-10-09 11:18:23.159005] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.266 [2024-10-09 11:18:23.162568] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:03.266 [2024-10-09 11:18:23.171778] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.266 [2024-10-09 11:18:23.172428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.266 [2024-10-09 11:18:23.172473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:03.266 [2024-10-09 11:18:23.172487] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:03.266 [2024-10-09 11:18:23.172726] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:03.266 [2024-10-09 11:18:23.172950] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.266 [2024-10-09 11:18:23.172965] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.266 [2024-10-09 11:18:23.172973] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.266 [2024-10-09 11:18:23.176528] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:03.266 [2024-10-09 11:18:23.185734] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.266 [2024-10-09 11:18:23.186417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.266 [2024-10-09 11:18:23.186456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:03.266 [2024-10-09 11:18:23.186477] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:03.266 [2024-10-09 11:18:23.186717] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:03.266 [2024-10-09 11:18:23.186941] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.266 [2024-10-09 11:18:23.186951] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.266 [2024-10-09 11:18:23.186959] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.266 [2024-10-09 11:18:23.190512] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:03.266 [2024-10-09 11:18:23.199511] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.266 [2024-10-09 11:18:23.200170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.266 [2024-10-09 11:18:23.200209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:03.266 [2024-10-09 11:18:23.200220] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:03.266 [2024-10-09 11:18:23.200459] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:03.266 [2024-10-09 11:18:23.200691] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.267 [2024-10-09 11:18:23.200700] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.267 [2024-10-09 11:18:23.200708] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.267 [2024-10-09 11:18:23.204259] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:03.267 [2024-10-09 11:18:23.213485] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.267 [2024-10-09 11:18:23.214011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.267 [2024-10-09 11:18:23.214050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:03.267 [2024-10-09 11:18:23.214061] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:03.267 [2024-10-09 11:18:23.214299] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:03.267 [2024-10-09 11:18:23.214532] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.267 [2024-10-09 11:18:23.214543] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.267 [2024-10-09 11:18:23.214551] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.267 [2024-10-09 11:18:23.218100] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:03.267 [2024-10-09 11:18:23.227325] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.267 [2024-10-09 11:18:23.228019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.267 [2024-10-09 11:18:23.228058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:03.267 [2024-10-09 11:18:23.228070] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:03.267 [2024-10-09 11:18:23.228308] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:03.267 [2024-10-09 11:18:23.228539] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.267 [2024-10-09 11:18:23.228551] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.267 [2024-10-09 11:18:23.228558] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.267 [2024-10-09 11:18:23.232103] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:03.267 [2024-10-09 11:18:23.241102] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.267 [2024-10-09 11:18:23.241776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.267 [2024-10-09 11:18:23.241815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:03.267 [2024-10-09 11:18:23.241826] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:03.267 [2024-10-09 11:18:23.242065] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:03.267 [2024-10-09 11:18:23.242289] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.267 [2024-10-09 11:18:23.242298] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.267 [2024-10-09 11:18:23.242306] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.267 [2024-10-09 11:18:23.245869] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:03.267 [2024-10-09 11:18:23.255074] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.267 [2024-10-09 11:18:23.255763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.267 [2024-10-09 11:18:23.255802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:03.267 [2024-10-09 11:18:23.255814] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:03.267 [2024-10-09 11:18:23.256053] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:03.267 [2024-10-09 11:18:23.256276] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.267 [2024-10-09 11:18:23.256286] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.267 [2024-10-09 11:18:23.256294] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.267 [2024-10-09 11:18:23.259846] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:03.528 [2024-10-09 11:18:23.268845] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.528 [2024-10-09 11:18:23.269433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.528 [2024-10-09 11:18:23.269453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:03.528 [2024-10-09 11:18:23.269461] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:03.528 [2024-10-09 11:18:23.269693] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:03.528 [2024-10-09 11:18:23.269919] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.528 [2024-10-09 11:18:23.269930] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.528 [2024-10-09 11:18:23.269938] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.529 [2024-10-09 11:18:23.273480] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:03.529 [2024-10-09 11:18:23.282677] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.529 [2024-10-09 11:18:23.283322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.529 [2024-10-09 11:18:23.283361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:03.529 [2024-10-09 11:18:23.283372] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:03.529 [2024-10-09 11:18:23.283619] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:03.529 [2024-10-09 11:18:23.283844] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.529 [2024-10-09 11:18:23.283854] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.529 [2024-10-09 11:18:23.283862] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.529 [2024-10-09 11:18:23.287410] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:03.529 [2024-10-09 11:18:23.296616] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.529 [2024-10-09 11:18:23.297311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.529 [2024-10-09 11:18:23.297350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:03.529 [2024-10-09 11:18:23.297362] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:03.529 [2024-10-09 11:18:23.297609] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:03.529 [2024-10-09 11:18:23.297834] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.529 [2024-10-09 11:18:23.297844] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.529 [2024-10-09 11:18:23.297852] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.529 [2024-10-09 11:18:23.301400] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:03.529 [2024-10-09 11:18:23.310400] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.529 [2024-10-09 11:18:23.310938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.529 [2024-10-09 11:18:23.310977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:03.529 [2024-10-09 11:18:23.310988] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:03.529 [2024-10-09 11:18:23.311227] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:03.529 [2024-10-09 11:18:23.311450] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.529 [2024-10-09 11:18:23.311460] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.529 [2024-10-09 11:18:23.311485] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.529 [2024-10-09 11:18:23.315031] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:03.529 [2024-10-09 11:18:23.324246] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.529 [2024-10-09 11:18:23.324906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.529 [2024-10-09 11:18:23.324946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:03.529 [2024-10-09 11:18:23.324957] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:03.529 [2024-10-09 11:18:23.325197] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:03.529 [2024-10-09 11:18:23.325420] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.529 [2024-10-09 11:18:23.325430] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.529 [2024-10-09 11:18:23.325437] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.529 [2024-10-09 11:18:23.328991] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:03.529 [2024-10-09 11:18:23.338402] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.529 [2024-10-09 11:18:23.339116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.529 [2024-10-09 11:18:23.339155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:03.529 [2024-10-09 11:18:23.339167] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:03.529 [2024-10-09 11:18:23.339406] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:03.529 [2024-10-09 11:18:23.339638] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.529 [2024-10-09 11:18:23.339649] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.529 [2024-10-09 11:18:23.339656] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.529 [2024-10-09 11:18:23.343200] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:03.529 [2024-10-09 11:18:23.352198] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.529 [2024-10-09 11:18:23.352790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.529 [2024-10-09 11:18:23.352829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:03.529 [2024-10-09 11:18:23.352842] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:03.529 [2024-10-09 11:18:23.353082] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:03.529 [2024-10-09 11:18:23.353307] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.529 [2024-10-09 11:18:23.353317] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.529 [2024-10-09 11:18:23.353324] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.529 [2024-10-09 11:18:23.356876] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:03.529 [2024-10-09 11:18:23.366104] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.529 [2024-10-09 11:18:23.366577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.529 [2024-10-09 11:18:23.366620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:03.529 [2024-10-09 11:18:23.366633] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:03.529 [2024-10-09 11:18:23.366874] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:03.529 [2024-10-09 11:18:23.367098] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.529 [2024-10-09 11:18:23.367108] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.529 [2024-10-09 11:18:23.367116] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.529 [2024-10-09 11:18:23.370680] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:03.529 [2024-10-09 11:18:23.379888] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.529 [2024-10-09 11:18:23.380472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.529 [2024-10-09 11:18:23.380492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:03.529 [2024-10-09 11:18:23.380500] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:03.529 [2024-10-09 11:18:23.380721] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:03.529 [2024-10-09 11:18:23.380940] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.529 [2024-10-09 11:18:23.380949] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.529 [2024-10-09 11:18:23.380956] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.529 [2024-10-09 11:18:23.384503] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:03.529 [2024-10-09 11:18:23.393700] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.529 [2024-10-09 11:18:23.394228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.529 [2024-10-09 11:18:23.394245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:03.529 [2024-10-09 11:18:23.394253] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:03.529 [2024-10-09 11:18:23.394525] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:03.529 [2024-10-09 11:18:23.394747] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.529 [2024-10-09 11:18:23.394756] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.529 [2024-10-09 11:18:23.394763] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.529 [2024-10-09 11:18:23.398306] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:03.529 [2024-10-09 11:18:23.407509] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.529 [2024-10-09 11:18:23.408047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.529 [2024-10-09 11:18:23.408086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:03.529 [2024-10-09 11:18:23.408099] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:03.529 [2024-10-09 11:18:23.408339] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:03.529 [2024-10-09 11:18:23.408586] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.529 [2024-10-09 11:18:23.408598] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.529 [2024-10-09 11:18:23.408606] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.529 [2024-10-09 11:18:23.412154] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:03.529 [2024-10-09 11:18:23.421372] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.529 [2024-10-09 11:18:23.421966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.529 [2024-10-09 11:18:23.421984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:03.529 [2024-10-09 11:18:23.421993] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:03.529 [2024-10-09 11:18:23.422213] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:03.530 [2024-10-09 11:18:23.422432] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.530 [2024-10-09 11:18:23.422439] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.530 [2024-10-09 11:18:23.422447] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.530 [2024-10-09 11:18:23.425989] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:03.530 [2024-10-09 11:18:23.435191] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.530 [2024-10-09 11:18:23.435717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.530 [2024-10-09 11:18:23.435755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:03.530 [2024-10-09 11:18:23.435767] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:03.530 [2024-10-09 11:18:23.436005] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:03.530 [2024-10-09 11:18:23.436228] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.530 [2024-10-09 11:18:23.436237] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.530 [2024-10-09 11:18:23.436246] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.530 [2024-10-09 11:18:23.439809] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:03.530 [2024-10-09 11:18:23.449017] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.530 [2024-10-09 11:18:23.449751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.530 [2024-10-09 11:18:23.449789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:03.530 [2024-10-09 11:18:23.449801] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:03.530 [2024-10-09 11:18:23.450040] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:03.530 [2024-10-09 11:18:23.450263] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.530 [2024-10-09 11:18:23.450272] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.530 [2024-10-09 11:18:23.450279] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.530 [2024-10-09 11:18:23.453841] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:03.530 [2024-10-09 11:18:23.462904] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.530 [2024-10-09 11:18:23.463543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.530 [2024-10-09 11:18:23.463582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:03.530 [2024-10-09 11:18:23.463594] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:03.530 [2024-10-09 11:18:23.463837] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:03.530 [2024-10-09 11:18:23.464061] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.530 [2024-10-09 11:18:23.464070] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.530 [2024-10-09 11:18:23.464077] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.530 [2024-10-09 11:18:23.467639] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:03.530 [2024-10-09 11:18:23.476855] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.530 [2024-10-09 11:18:23.477555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.530 [2024-10-09 11:18:23.477594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:03.530 [2024-10-09 11:18:23.477605] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:03.530 [2024-10-09 11:18:23.477844] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:03.530 [2024-10-09 11:18:23.478068] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.530 [2024-10-09 11:18:23.478077] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.530 [2024-10-09 11:18:23.478084] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.530 [2024-10-09 11:18:23.481645] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:03.530 [2024-10-09 11:18:23.490642] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.530 [2024-10-09 11:18:23.491232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.530 [2024-10-09 11:18:23.491251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:03.530 [2024-10-09 11:18:23.491259] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:03.530 [2024-10-09 11:18:23.491484] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:03.530 [2024-10-09 11:18:23.491704] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.530 [2024-10-09 11:18:23.491711] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.530 [2024-10-09 11:18:23.491719] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.530 [2024-10-09 11:18:23.495256] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:03.530 [2024-10-09 11:18:23.504453] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.530 [2024-10-09 11:18:23.504953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.530 [2024-10-09 11:18:23.504991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:03.530 [2024-10-09 11:18:23.505007] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:03.530 [2024-10-09 11:18:23.505247] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:03.530 [2024-10-09 11:18:23.505479] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.530 [2024-10-09 11:18:23.505489] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.530 [2024-10-09 11:18:23.505497] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.530 [2024-10-09 11:18:23.509063] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:03.530 [2024-10-09 11:18:23.518277] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.530 [2024-10-09 11:18:23.518935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.530 [2024-10-09 11:18:23.518974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:03.530 [2024-10-09 11:18:23.518986] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:03.530 [2024-10-09 11:18:23.519225] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:03.530 [2024-10-09 11:18:23.519449] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.530 [2024-10-09 11:18:23.519458] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.530 [2024-10-09 11:18:23.519482] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.530 [2024-10-09 11:18:23.523032] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:03.791 [2024-10-09 11:18:23.532238] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.791 [2024-10-09 11:18:23.532547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.791 [2024-10-09 11:18:23.532572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:03.791 [2024-10-09 11:18:23.532582] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:03.791 [2024-10-09 11:18:23.532806] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:03.791 [2024-10-09 11:18:23.533027] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.791 [2024-10-09 11:18:23.533036] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.791 [2024-10-09 11:18:23.533044] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.791 [2024-10-09 11:18:23.536596] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:03.791 [2024-10-09 11:18:23.546210] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.791 [2024-10-09 11:18:23.546770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.791 [2024-10-09 11:18:23.546788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:03.791 [2024-10-09 11:18:23.546797] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:03.791 [2024-10-09 11:18:23.547017] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:03.791 [2024-10-09 11:18:23.547236] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.791 [2024-10-09 11:18:23.547249] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.791 [2024-10-09 11:18:23.547258] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.792 [2024-10-09 11:18:23.550806] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:03.792 [2024-10-09 11:18:23.560008] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.792 [2024-10-09 11:18:23.560603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.792 [2024-10-09 11:18:23.560641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:03.792 [2024-10-09 11:18:23.560655] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:03.792 [2024-10-09 11:18:23.560896] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:03.792 [2024-10-09 11:18:23.561120] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.792 [2024-10-09 11:18:23.561129] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.792 [2024-10-09 11:18:23.561137] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.792 [2024-10-09 11:18:23.564692] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:03.792 [2024-10-09 11:18:23.573941] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.792 [2024-10-09 11:18:23.574567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.792 [2024-10-09 11:18:23.574605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:03.792 [2024-10-09 11:18:23.574618] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:03.792 [2024-10-09 11:18:23.574860] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:03.792 [2024-10-09 11:18:23.575084] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.792 [2024-10-09 11:18:23.575092] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.792 [2024-10-09 11:18:23.575100] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.792 [2024-10-09 11:18:23.578659] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
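Every attempt in this stretch is refused for the same reason: the target side has not yet created its TCP listener; the nvmf_tcp_listen notice for 10.0.0.2 port 4420 only appears further down, after the nvmf_subsystem_add_listener call. Two quick checks that would confirm this while the loop is still spinning (a sketch; rpc.py is SPDK's stock RPC client on its default socket, ss comes from iproute2):

    # Ask the target which listeners the subsystem currently has.
    ./scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1
    # Ask the kernel whether anything is bound to port 4420 at all.
    ss -ltn 'sport = :4420'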
00:38:03.792 [2024-10-09 11:18:23.587861] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.792 [2024-10-09 11:18:23.588449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.792 [2024-10-09 11:18:23.588494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:03.792 [2024-10-09 11:18:23.588505] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:03.792 [2024-10-09 11:18:23.588744] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:03.792 [2024-10-09 11:18:23.588967] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.792 [2024-10-09 11:18:23.588976] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.792 [2024-10-09 11:18:23.588984] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.792 [2024-10-09 11:18:23.592534] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:03.792 11:18:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:03.792 11:18:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:38:03.792 11:18:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:38:03.792 11:18:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:03.792 11:18:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:03.792 [2024-10-09 11:18:23.601749] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.792 [2024-10-09 11:18:23.602334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.792 [2024-10-09 11:18:23.602353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:03.792 [2024-10-09 11:18:23.602362] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:03.792 [2024-10-09 11:18:23.602587] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:03.792 [2024-10-09 11:18:23.602808] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.792 [2024-10-09 11:18:23.602815] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.792 [2024-10-09 11:18:23.602822] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.792 [2024-10-09 11:18:23.606364] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
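The "(( i == 0 ))" and "return 0" trace lines above are the tail of a countdown wait loop in autotest_common.sh: the script polls the freshly started nvmf target until it answers, and i reaching 0 would mean the retries were exhausted. A hypothetical reconstruction of that pattern, with only the visible tail taken from the log (the loop bound, poll interval, and variable names are assumptions):

    # Sketch of the countdown-wait idiom whose tail ("(( i == 0 ))",
    # then "return 0") shows in the trace above.
    wait_for_target() {
        local i
        for ((i = 50; i > 0; i--)); do
            kill -0 "$target_pid" 2>/dev/null || return 1     # app died early
            "$rootdir/scripts/rpc.py" -t 1 rpc_get_methods &>/dev/null && break
            sleep 0.1
        done
        (( i == 0 )) && return 1                              # retries exhausted
        return 0
    }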
00:38:03.792 [2024-10-09 11:18:23.615587] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.792 [2024-10-09 11:18:23.616127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.792 [2024-10-09 11:18:23.616143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:03.792 [2024-10-09 11:18:23.616151] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:03.792 [2024-10-09 11:18:23.616370] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:03.792 [2024-10-09 11:18:23.616594] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.792 [2024-10-09 11:18:23.616602] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.792 [2024-10-09 11:18:23.616609] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.792 [2024-10-09 11:18:23.620158] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:03.792 [2024-10-09 11:18:23.629369] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.792 [2024-10-09 11:18:23.629924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.792 [2024-10-09 11:18:23.629941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:03.792 [2024-10-09 11:18:23.629948] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:03.792 [2024-10-09 11:18:23.630167] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:03.792 [2024-10-09 11:18:23.630385] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.792 [2024-10-09 11:18:23.630394] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.792 [2024-10-09 11:18:23.630401] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.792 [2024-10-09 11:18:23.633943] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:03.792 11:18:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:03.792 11:18:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:03.792 11:18:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:03.792 11:18:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:03.792 [2024-10-09 11:18:23.642473] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:03.792 [2024-10-09 11:18:23.643143] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.792 [2024-10-09 11:18:23.643782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.792 [2024-10-09 11:18:23.643821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:03.792 [2024-10-09 11:18:23.643832] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:03.792 [2024-10-09 11:18:23.644071] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:03.792 [2024-10-09 11:18:23.644294] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.792 [2024-10-09 11:18:23.644303] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.792 [2024-10-09 11:18:23.644310] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.792 [2024-10-09 11:18:23.647869] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
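Two details in the trace above are worth noting. The trap line installs the standard teardown hook: process_shm runs first to capture diagnostics, and the "|| :" guarantees that a failing dump cannot change the exit status before nvmftestfini runs. And the nvmf_create_transport RPC is what finally brings the TCP transport up, confirmed by the *** TCP Transport Init *** notice; the connect() refusals stop once the listener is added a few steps later. The cleanup idiom in isolation (a generic sketch; both function names are placeholders, not the ones from nvmf/common.sh):

    # Teardown always runs on exit or signal; "|| :" keeps a failing
    # diagnostic dump from masking the script's real exit code.
    dump_diagnostics() { echo "dumping state"; false; }   # placeholder
    teardown()         { echo "tearing down"; }           # placeholder
    trap 'dump_diagnostics || :; teardown' SIGINT SIGTERM EXIT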
00:38:03.792 11:18:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:03.792 11:18:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:03.792 11:18:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:03.792 11:18:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:03.792 [2024-10-09 11:18:23.656955] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.792 [2024-10-09 11:18:23.657571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.792 [2024-10-09 11:18:23.657609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:03.792 [2024-10-09 11:18:23.657622] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:03.792 [2024-10-09 11:18:23.657864] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:03.792 [2024-10-09 11:18:23.658087] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.792 [2024-10-09 11:18:23.658096] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.792 [2024-10-09 11:18:23.658104] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.792 [2024-10-09 11:18:23.661664] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:03.792 [2024-10-09 11:18:23.670869] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.792 [2024-10-09 11:18:23.671475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.792 [2024-10-09 11:18:23.671493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:03.792 [2024-10-09 11:18:23.671501] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:03.792 [2024-10-09 11:18:23.671721] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:03.793 [2024-10-09 11:18:23.671940] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.793 [2024-10-09 11:18:23.671953] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.793 [2024-10-09 11:18:23.671960] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.793 [2024-10-09 11:18:23.675507] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:03.793 Malloc0 00:38:03.793 11:18:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:03.793 11:18:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:03.793 11:18:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:03.793 11:18:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:03.793 [2024-10-09 11:18:23.684707] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.793 [2024-10-09 11:18:23.685271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.793 [2024-10-09 11:18:23.685309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:03.793 [2024-10-09 11:18:23.685321] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:03.793 [2024-10-09 11:18:23.685568] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:03.793 [2024-10-09 11:18:23.685792] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.793 [2024-10-09 11:18:23.685800] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.793 [2024-10-09 11:18:23.685808] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:38:03.793 [2024-10-09 11:18:23.689357] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:38:03.793 11:18:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:03.793 11:18:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:03.793 11:18:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:03.793 11:18:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:03.793 [2024-10-09 11:18:23.698558] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:38:03.793 [2024-10-09 11:18:23.699241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.793 [2024-10-09 11:18:23.699279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75cc40 with addr=10.0.0.2, port=4420 00:38:03.793 [2024-10-09 11:18:23.699290] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75cc40 is same with the state(6) to be set 00:38:03.793 [2024-10-09 11:18:23.699537] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75cc40 (9): Bad file descriptor 00:38:03.793 [2024-10-09 11:18:23.699761] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:38:03.793 [2024-10-09 11:18:23.699770] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:38:03.793 [2024-10-09 11:18:23.699777] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
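Taken together with the nvmf_subsystem_add_listener call just below, the rpc_cmd invocations in this stretch perform the whole target-side setup: create the TCP transport, back it with a 64 MiB RAM disk of 512-byte blocks named Malloc0, create subsystem nqn.2016-06.io.spdk:cnode1 (-a allows any host, -s sets its serial number), attach the bdev as a namespace, and finally listen on 10.0.0.2:4420. Re-issued by hand against a running target, the sequence would look like this (a sketch assuming SPDK's stock scripts/rpc.py and its default RPC socket; the extra -o flag from the logged transport call is omitted here):

    ./scripts/rpc.py nvmf_create_transport -t tcp -u 8192        # -u: 8 KiB I/O unit size
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MiB, 512 B blocks
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420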
00:38:03.793 11:18:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
11:18:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
11:18:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
11:18:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:38:03.793 [2024-10-09 11:18:23.703325] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:38:03.793 [2024-10-09 11:18:23.709205] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
[2024-10-09 11:18:23.712541] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
11:18:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
11:18:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2125351
[2024-10-09 11:18:23.748917] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:38:05.301 4823.29 IOPS, 18.84 MiB/s
[2024-10-09T09:18:26.243Z] 5666.38 IOPS, 22.13 MiB/s
[2024-10-09T09:18:27.183Z] 6275.78 IOPS, 24.51 MiB/s
[2024-10-09T09:18:28.123Z] 6770.40 IOPS, 26.45 MiB/s
[2024-10-09T09:18:29.504Z] 7165.55 IOPS, 27.99 MiB/s
[2024-10-09T09:18:30.446Z] 7500.08 IOPS, 29.30 MiB/s
[2024-10-09T09:18:31.387Z] 7777.00 IOPS, 30.38 MiB/s
[2024-10-09T09:18:32.329Z] 8015.50 IOPS, 31.31 MiB/s
[2024-10-09T09:18:32.329Z] 8229.67 IOPS, 32.15 MiB/s
00:38:12.327 Latency(us)
[2024-10-09T09:18:32.329Z] Device Information          : runtime(s)    IOPS     MiB/s    Fail/s    TO/s    Average      min      max
00:38:12.327 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:38:12.327 Verification LBA range: start 0x0 length 0x4000
00:38:12.327 Nvme1n1                     :      15.01 8232.05   32.16   9719.81    0.00   7105.01   790.32 23429.17
00:38:12.327 [2024-10-09T09:18:32.329Z] ===================================================================================================================
00:38:12.327 [2024-10-09T09:18:32.329Z] Total                       :            8232.05   32.16   9719.81    0.00   7105.01   790.32 23429.17
00:38:12.327 11:18:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
11:18:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
11:18:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
11:18:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:38:12.327 11:18:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
11:18:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
11:18:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
11:18:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@514 -- # nvmfcleanup
11:18:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync
11:18:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
11:18:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e
11:18:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20}
00:38:12.327 11:18:32 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:12.327 rmmod nvme_tcp 00:38:12.327 rmmod nvme_fabrics 00:38:12.327 rmmod nvme_keyring 00:38:12.327 11:18:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:12.327 11:18:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:38:12.327 11:18:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:38:12.327 11:18:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@515 -- # '[' -n 2126616 ']' 00:38:12.327 11:18:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # killprocess 2126616 00:38:12.327 11:18:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 2126616 ']' 00:38:12.327 11:18:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # kill -0 2126616 00:38:12.327 11:18:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # uname 00:38:12.327 11:18:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:12.327 11:18:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2126616 00:38:12.587 11:18:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:38:12.587 11:18:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:38:12.587 11:18:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2126616' 00:38:12.587 killing process with pid 2126616 00:38:12.587 11:18:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@969 -- # kill 2126616 00:38:12.587 11:18:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@974 -- # wait 2126616 00:38:12.587 11:18:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:38:12.587 11:18:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:38:12.587 11:18:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:38:12.587 11:18:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:38:12.587 11:18:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:38:12.587 11:18:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@789 -- # iptables-save 00:38:12.587 11:18:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@789 -- # iptables-restore 00:38:12.587 11:18:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:12.587 11:18:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:12.587 11:18:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:12.587 11:18:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:12.587 11:18:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:15.134 11:18:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:15.134 00:38:15.134 real 0m28.133s 00:38:15.134 user 1m2.948s 00:38:15.134 sys 0m7.315s 00:38:15.134 11:18:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:15.134 11:18:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:15.134 
************************************ 00:38:15.134 END TEST nvmf_bdevperf 00:38:15.134 ************************************ 00:38:15.134 11:18:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:38:15.134 11:18:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:38:15.134 11:18:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:15.134 11:18:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:38:15.134 ************************************ 00:38:15.134 START TEST nvmf_target_disconnect 00:38:15.134 ************************************ 00:38:15.134 11:18:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:38:15.134 * Looking for test storage... 00:38:15.134 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:38:15.134 11:18:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:38:15.134 11:18:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:38:15.134 11:18:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:38:15.134 11:18:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:38:15.134 11:18:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:15.134 11:18:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:15.134 11:18:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:15.134 11:18:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:38:15.134 11:18:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:38:15.134 11:18:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:38:15.134 11:18:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:38:15.134 11:18:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:38:15.134 11:18:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:38:15.134 11:18:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:38:15.134 11:18:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:15.134 11:18:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:38:15.134 11:18:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:38:15.134 11:18:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:15.134 11:18:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:15.134 11:18:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:38:15.134 11:18:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:38:15.134 11:18:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:15.134 11:18:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:38:15.134 11:18:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:38:15.134 11:18:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:38:15.134 11:18:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:38:15.134 11:18:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:15.134 11:18:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:38:15.134 11:18:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:38:15.134 11:18:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:15.134 11:18:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:15.134 11:18:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:38:15.134 11:18:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:15.134 11:18:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:38:15.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:15.134 --rc genhtml_branch_coverage=1 00:38:15.134 --rc genhtml_function_coverage=1 00:38:15.134 --rc genhtml_legend=1 00:38:15.134 --rc geninfo_all_blocks=1 00:38:15.134 --rc geninfo_unexecuted_blocks=1 00:38:15.134 00:38:15.134 ' 00:38:15.134 11:18:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:38:15.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:15.134 --rc genhtml_branch_coverage=1 00:38:15.134 --rc genhtml_function_coverage=1 00:38:15.134 --rc genhtml_legend=1 00:38:15.134 --rc geninfo_all_blocks=1 00:38:15.134 --rc geninfo_unexecuted_blocks=1 00:38:15.134 00:38:15.134 ' 00:38:15.134 11:18:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:38:15.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:15.134 --rc genhtml_branch_coverage=1 00:38:15.134 --rc genhtml_function_coverage=1 00:38:15.134 --rc genhtml_legend=1 00:38:15.134 --rc geninfo_all_blocks=1 00:38:15.134 --rc geninfo_unexecuted_blocks=1 00:38:15.134 00:38:15.134 ' 00:38:15.134 11:18:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:38:15.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:15.134 --rc genhtml_branch_coverage=1 00:38:15.134 --rc genhtml_function_coverage=1 00:38:15.134 --rc genhtml_legend=1 00:38:15.134 --rc geninfo_all_blocks=1 00:38:15.134 --rc geninfo_unexecuted_blocks=1 00:38:15.134 00:38:15.134 ' 00:38:15.134 11:18:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:15.134 11:18:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:38:15.134 11:18:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:15.134 11:18:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:15.134 11:18:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:15.134 11:18:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:15.134 11:18:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:15.134 11:18:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:15.134 11:18:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:15.134 11:18:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:15.134 11:18:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:15.134 11:18:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:15.134 11:18:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:38:15.134 11:18:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:38:15.134 11:18:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:15.134 11:18:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:15.134 11:18:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:15.134 11:18:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:15.134 11:18:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:15.134 11:18:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:38:15.134 11:18:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:15.134 11:18:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:15.134 11:18:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:15.134 11:18:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:15.134 11:18:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:15.134 11:18:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:15.134 11:18:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:38:15.134 11:18:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:15.134 11:18:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:38:15.134 11:18:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:15.135 11:18:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:15.135 11:18:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:15.135 11:18:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:15.135 11:18:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:15.135 11:18:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:15.135 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:15.135 11:18:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:15.135 11:18:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:15.135 11:18:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:15.135 11:18:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:38:15.135 11:18:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:38:15.135 11:18:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:38:15.135 11:18:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:38:15.135 11:18:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:38:15.135 11:18:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:15.135 11:18:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # prepare_net_devs 00:38:15.135 11:18:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@436 -- # local -g is_hw=no 00:38:15.135 11:18:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # remove_spdk_ns 00:38:15.135 11:18:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:15.135 11:18:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:15.135 11:18:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:15.135 11:18:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:38:15.135 11:18:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:38:15.135 11:18:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:38:15.135 11:18:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:38:23.269 11:18:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:23.269 11:18:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:38:23.269 11:18:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:23.269 11:18:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:23.269 11:18:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:23.269 11:18:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:23.269 11:18:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:23.269 11:18:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:38:23.269 11:18:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:23.269 11:18:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:38:23.269 11:18:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:38:23.269 11:18:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:38:23.269 11:18:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:38:23.269 11:18:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:38:23.269 11:18:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:38:23.269 11:18:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:23.269 11:18:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:23.269 11:18:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:23.269 11:18:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:23.269 11:18:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:23.269 11:18:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:23.269 11:18:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:23.269 11:18:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:23.269 11:18:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:23.269 11:18:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:23.269 11:18:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:23.269 11:18:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:23.269 11:18:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:23.269 11:18:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:23.269 11:18:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:23.269 11:18:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:23.269 11:18:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:23.269 11:18:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:23.269 11:18:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:23.269 11:18:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:38:23.269 Found 0000:31:00.0 (0x8086 - 0x159b) 00:38:23.269 11:18:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:23.269 11:18:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:23.269 11:18:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:23.269 11:18:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:23.269 11:18:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:23.269 11:18:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:23.269 11:18:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:38:23.269 Found 0000:31:00.1 (0x8086 - 0x159b) 00:38:23.269 11:18:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:23.269 11:18:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:23.269 11:18:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:38:23.269 11:18:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:23.269 11:18:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:23.269 11:18:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:23.269 11:18:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:23.269 11:18:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:23.269 11:18:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:38:23.269 11:18:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:23.269 11:18:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:38:23.269 11:18:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:23.269 11:18:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:38:23.269 11:18:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:38:23.269 11:18:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:23.269 11:18:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:38:23.269 Found net devices under 0000:31:00.0: cvl_0_0 00:38:23.269 11:18:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:38:23.269 11:18:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:38:23.269 11:18:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:23.269 11:18:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:38:23.269 11:18:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:23.269 11:18:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:38:23.269 11:18:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:38:23.270 11:18:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:23.270 11:18:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:38:23.270 Found net devices under 0000:31:00.1: cvl_0_1 00:38:23.270 11:18:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:38:23.270 11:18:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:38:23.270 11:18:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # is_hw=yes 00:38:23.270 11:18:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:38:23.270 11:18:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:38:23.270 11:18:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:38:23.270 11:18:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
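The gather_supported_nvmf_pci_devs trace above is plain sysfs matching: the helper keeps arrays of supported vendor:device IDs (Intel E810 0x1592/0x159b, X722 0x37d2, and several Mellanox parts), walks the PCI bus, and resolves each hit to its kernel net interface via /sys/bus/pci/devices/$pci/net/. A minimal stand-alone sketch of the same idea, assuming only sysfs (an illustration, not SPDK's actual helper):

#!/usr/bin/env bash
# Sketch: find Intel E810 ports (vendor 0x8086, device 0x159b) and the
# net interfaces bound to them, mirroring the discovery trace above.
for pci in /sys/bus/pci/devices/*; do
    [[ $(cat "$pci/vendor") == 0x8086 ]] || continue
    [[ $(cat "$pci/device") == 0x159b ]] || continue
    echo "Found ${pci##*/} (0x8086 - 0x159b)"
    for net in "$pci"/net/*; do
        [[ -e $net ]] && echo "  net device: ${net##*/}"   # e.g. cvl_0_0
    done
done

On this runner that yields the two 0000:31:00.x ports and their cvl_0_0/cvl_0_1 interfaces, matching the "Found net devices under ..." lines in the trace.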
00:38:23.270 11:18:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:23.270 11:18:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:23.270 11:18:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:23.270 11:18:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:23.270 11:18:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:23.270 11:18:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:23.270 11:18:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:23.270 11:18:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:23.270 11:18:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:23.270 11:18:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:23.270 11:18:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:23.270 11:18:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:23.270 11:18:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:23.270 11:18:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:23.270 11:18:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:23.270 11:18:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:23.270 11:18:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:23.270 11:18:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:23.270 11:18:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:23.270 11:18:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:23.270 11:18:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:23.270 11:18:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:23.270 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:23.270 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.610 ms 00:38:23.270 00:38:23.270 --- 10.0.0.2 ping statistics --- 00:38:23.270 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:23.270 rtt min/avg/max/mdev = 0.610/0.610/0.610/0.000 ms 00:38:23.270 11:18:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:23.270 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:23.270 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:38:23.270 00:38:23.270 --- 10.0.0.1 ping statistics --- 00:38:23.270 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:23.270 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:38:23.270 11:18:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:23.270 11:18:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # return 0 00:38:23.270 11:18:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:38:23.270 11:18:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:23.270 11:18:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:38:23.270 11:18:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:38:23.270 11:18:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:23.270 11:18:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:38:23.270 11:18:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:38:23.270 11:18:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:38:23.270 11:18:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:38:23.270 11:18:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:23.270 11:18:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:38:23.270 ************************************ 00:38:23.270 START TEST nvmf_target_disconnect_tc1 00:38:23.270 ************************************ 00:38:23.270 11:18:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc1 00:38:23.270 11:18:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:38:23.270 11:18:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:38:23.270 11:18:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:38:23.270 11:18:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:38:23.270 11:18:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:23.270 11:18:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:38:23.270 11:18:42 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:23.270 11:18:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:38:23.270 11:18:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:23.270 11:18:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:38:23.270 11:18:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:38:23.270 11:18:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:38:23.270 [2024-10-09 11:18:42.351691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:23.270 [2024-10-09 11:18:42.351754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16ed240 with addr=10.0.0.2, port=4420 00:38:23.270 [2024-10-09 11:18:42.351779] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:38:23.270 [2024-10-09 11:18:42.351791] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:38:23.270 [2024-10-09 11:18:42.351800] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:38:23.270 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:38:23.270 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:38:23.270 Initializing NVMe Controllers 00:38:23.270 11:18:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:38:23.270 11:18:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:38:23.270 11:18:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:38:23.270 11:18:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:38:23.270 00:38:23.270 real 0m0.218s 00:38:23.270 user 0m0.049s 00:38:23.270 sys 0m0.068s 00:38:23.270 11:18:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:23.270 11:18:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:38:23.270 ************************************ 00:38:23.270 END TEST nvmf_target_disconnect_tc1 00:38:23.270 ************************************ 00:38:23.270 11:18:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:38:23.270 11:18:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:38:23.270 11:18:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:38:23.270 11:18:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:38:23.270 ************************************ 00:38:23.270 START TEST nvmf_target_disconnect_tc2 00:38:23.270 ************************************ 00:38:23.270 11:18:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc2 00:38:23.270 11:18:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:38:23.270 11:18:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:38:23.270 11:18:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:38:23.270 11:18:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:23.270 11:18:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:23.270 11:18:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # nvmfpid=2132718 00:38:23.270 11:18:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # waitforlisten 2132718 00:38:23.270 11:18:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:38:23.270 11:18:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 2132718 ']' 00:38:23.270 11:18:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:23.270 11:18:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:23.270 11:18:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:23.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:23.271 11:18:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:23.271 11:18:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:23.271 [2024-10-09 11:18:42.523033] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:38:23.271 [2024-10-09 11:18:42.523084] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:23.271 [2024-10-09 11:18:42.660282] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:38:23.271 [2024-10-09 11:18:42.710052] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:23.271 [2024-10-09 11:18:42.730083] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:38:23.271 [2024-10-09 11:18:42.730119] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:23.271 [2024-10-09 11:18:42.730127] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:23.271 [2024-10-09 11:18:42.730134] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:23.271 [2024-10-09 11:18:42.730140] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:23.271 [2024-10-09 11:18:42.731817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:38:23.271 [2024-10-09 11:18:42.731965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:38:23.271 [2024-10-09 11:18:42.732118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:38:23.271 [2024-10-09 11:18:42.732120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:38:23.532 11:18:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:23.532 11:18:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:38:23.532 11:18:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:38:23.532 11:18:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:23.532 11:18:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:23.532 11:18:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:23.532 11:18:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:23.532 11:18:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:23.532 11:18:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:23.532 Malloc0 00:38:23.532 11:18:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:23.532 11:18:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:38:23.532 11:18:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:23.532 11:18:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:23.532 [2024-10-09 11:18:43.425585] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:23.532 11:18:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:23.532 11:18:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:23.532 11:18:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 
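The tc2 bring-up that starts here launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace created earlier, so the target owns cvl_0_0 (10.0.0.2) while the initiator side keeps cvl_0_1 (10.0.0.1) in the root namespace. The core mask -m 0xF0 is binary 11110000, i.e. cores 4 through 7, which is why exactly four reactors start in the notices above. Condensing the rpc_cmd calls from this trace, including the add-ns/add-listener calls that follow just below, the provisioning sequence run by hand from the repo root would look roughly like this (a sketch; paths, flags, and the NQN are taken from the trace):

# Start the target in the test namespace. RPC goes over /var/tmp/spdk.sock,
# a Unix socket, so rpc.py can be driven from the root namespace.
sudo ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
nvmfpid=$!
sleep 2   # crude stand-in for the harness's waitforlisten

./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0   # MALLOC_BDEV_SIZE x MALLOC_BLOCK_SIZE
./scripts/rpc.py nvmf_create_transport -t tcp -o
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420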
00:38:23.532 11:18:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:23.532 11:18:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:23.532 11:18:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:23.532 11:18:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:23.532 11:18:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:23.532 11:18:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:23.532 11:18:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:23.532 11:18:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:23.532 11:18:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:23.532 [2024-10-09 11:18:43.465904] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:23.532 11:18:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:23.532 11:18:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:23.532 11:18:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:23.532 11:18:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:23.532 11:18:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:23.532 11:18:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2132766 00:38:23.532 11:18:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:38:23.532 11:18:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:38:26.125 11:18:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 2132718 00:38:26.125 11:18:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:38:26.125 Read completed with error (sct=0, sc=8) 00:38:26.125 starting I/O failed 00:38:26.125 Read completed with error (sct=0, sc=8) 00:38:26.125 starting I/O failed 00:38:26.125 Read completed with error (sct=0, sc=8) 00:38:26.125 starting I/O failed 00:38:26.125 Read completed with error (sct=0, sc=8) 00:38:26.125 starting I/O failed 00:38:26.125 Read completed with error (sct=0, sc=8) 
00:38:26.125 starting I/O failed 00:38:26.125 Read completed with error (sct=0, sc=8) 00:38:26.125 starting I/O failed 00:38:26.125 Read completed with error (sct=0, sc=8) 00:38:26.125 starting I/O failed 00:38:26.125 Read completed with error (sct=0, sc=8) 00:38:26.125 starting I/O failed 00:38:26.125 Read completed with error (sct=0, sc=8) 00:38:26.125 starting I/O failed 00:38:26.125 Write completed with error (sct=0, sc=8) 00:38:26.125 starting I/O failed 00:38:26.125 Read completed with error (sct=0, sc=8) 00:38:26.125 starting I/O failed 00:38:26.125 Read completed with error (sct=0, sc=8) 00:38:26.125 starting I/O failed 00:38:26.125 Write completed with error (sct=0, sc=8) 00:38:26.125 starting I/O failed 00:38:26.125 Write completed with error (sct=0, sc=8) 00:38:26.125 starting I/O failed 00:38:26.125 Read completed with error (sct=0, sc=8) 00:38:26.125 starting I/O failed 00:38:26.125 Write completed with error (sct=0, sc=8) 00:38:26.125 starting I/O failed 00:38:26.125 Write completed with error (sct=0, sc=8) 00:38:26.125 starting I/O failed 00:38:26.125 Write completed with error (sct=0, sc=8) 00:38:26.125 starting I/O failed 00:38:26.125 Read completed with error (sct=0, sc=8) 00:38:26.125 starting I/O failed 00:38:26.125 Read completed with error (sct=0, sc=8) 00:38:26.125 starting I/O failed 00:38:26.125 Read completed with error (sct=0, sc=8) 00:38:26.125 starting I/O failed 00:38:26.125 Read completed with error (sct=0, sc=8) 00:38:26.125 starting I/O failed 00:38:26.125 Read completed with error (sct=0, sc=8) 00:38:26.125 starting I/O failed 00:38:26.125 Write completed with error (sct=0, sc=8) 00:38:26.125 starting I/O failed 00:38:26.125 Write completed with error (sct=0, sc=8) 00:38:26.125 starting I/O failed 00:38:26.125 Read completed with error (sct=0, sc=8) 00:38:26.125 starting I/O failed 00:38:26.125 Read completed with error (sct=0, sc=8) 00:38:26.125 starting I/O failed 00:38:26.125 Write completed with error (sct=0, sc=8) 00:38:26.125 starting I/O failed 00:38:26.125 Write completed with error (sct=0, sc=8) 00:38:26.125 starting I/O failed 00:38:26.125 Write completed with error (sct=0, sc=8) 00:38:26.125 starting I/O failed 00:38:26.125 Read completed with error (sct=0, sc=8) 00:38:26.125 starting I/O failed 00:38:26.125 Write completed with error (sct=0, sc=8) 00:38:26.125 starting I/O failed 00:38:26.125 [2024-10-09 11:18:45.499914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:26.125 [2024-10-09 11:18:45.500300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.125 [2024-10-09 11:18:45.500320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.125 qpair failed and we were unable to recover it. 00:38:26.125 [2024-10-09 11:18:45.500505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.125 [2024-10-09 11:18:45.500522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.125 qpair failed and we were unable to recover it. 
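Reading the failure burst above: reconnect was started with -q 32, so when the target is hard-killed mid-run (kill -9 2132718) each of the 32 outstanding commands on the qpair completes locally with sct=0, sc=8 (status code type 0 is the generic set; code 0x08 there is Command Aborted due to SQ Deletion in the NVMe base spec's generic command status table). The CQ poll then reports transport error -6 (ENXIO, "No such device or address" as the log prints), and every subsequent reconnect attempt to 10.0.0.2:4420 fails with errno 111 (ECONNREFUSED) because nothing is listening any more. A quick way to confirm the listener is gone (a hypothetical check, not part of the test script):

# The listener lived inside the target's namespace; after kill -9 the
# port should no longer be bound there.
ip netns exec cvl_0_0_ns_spdk ss -ltn | grep 4420 || echo "no listener on 4420"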
00:38:26.125 [2024-10-09 11:18:45.500872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.125 [2024-10-09 11:18:45.500881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.125 qpair failed and we were unable to recover it. 00:38:26.125 [2024-10-09 11:18:45.501235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.125 [2024-10-09 11:18:45.501246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.125 qpair failed and we were unable to recover it. 00:38:26.125 [2024-10-09 11:18:45.501702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.125 [2024-10-09 11:18:45.501730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.125 qpair failed and we were unable to recover it. 00:38:26.125 [2024-10-09 11:18:45.501821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.125 [2024-10-09 11:18:45.501830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.125 qpair failed and we were unable to recover it. 00:38:26.125 Read completed with error (sct=0, sc=8) 00:38:26.125 starting I/O failed 00:38:26.125 Read completed with error (sct=0, sc=8) 00:38:26.125 starting I/O failed 00:38:26.125 Read completed with error (sct=0, sc=8) 00:38:26.125 starting I/O failed 00:38:26.125 Read completed with error (sct=0, sc=8) 00:38:26.125 starting I/O failed 00:38:26.125 Read completed with error (sct=0, sc=8) 00:38:26.125 starting I/O failed 00:38:26.125 Read completed with error (sct=0, sc=8) 00:38:26.125 starting I/O failed 00:38:26.125 Read completed with error (sct=0, sc=8) 00:38:26.125 starting I/O failed 00:38:26.125 Write completed with error (sct=0, sc=8) 00:38:26.125 starting I/O failed 00:38:26.125 Write completed with error (sct=0, sc=8) 00:38:26.125 starting I/O failed 00:38:26.125 Read completed with error (sct=0, sc=8) 00:38:26.125 starting I/O failed 00:38:26.125 Read completed with error (sct=0, sc=8) 00:38:26.125 starting I/O failed 00:38:26.125 Read completed with error (sct=0, sc=8) 00:38:26.125 starting I/O failed 00:38:26.125 Read completed with error (sct=0, sc=8) 00:38:26.125 starting I/O failed 00:38:26.125 Write completed with error (sct=0, sc=8) 00:38:26.125 starting I/O failed 00:38:26.125 Read completed with error (sct=0, sc=8) 00:38:26.126 starting I/O failed 00:38:26.126 Read completed with error (sct=0, sc=8) 00:38:26.126 starting I/O failed 00:38:26.126 Write completed with error (sct=0, sc=8) 00:38:26.126 starting I/O failed 00:38:26.126 Write completed with error (sct=0, sc=8) 00:38:26.126 starting I/O failed 00:38:26.126 Write completed with error (sct=0, sc=8) 00:38:26.126 starting I/O failed 00:38:26.126 Read completed with error (sct=0, sc=8) 00:38:26.126 starting I/O failed 00:38:26.126 Read completed with error (sct=0, sc=8) 00:38:26.126 starting I/O failed 00:38:26.126 Read completed with error (sct=0, sc=8) 00:38:26.126 starting I/O failed 00:38:26.126 Write completed with error (sct=0, sc=8) 00:38:26.126 starting I/O failed 00:38:26.126 Write completed with error (sct=0, sc=8) 00:38:26.126 starting I/O failed 00:38:26.126 Write completed with error (sct=0, sc=8) 00:38:26.126 starting I/O failed 
00:38:26.126 Write completed with error (sct=0, sc=8) 00:38:26.126 starting I/O failed 00:38:26.126 Read completed with error (sct=0, sc=8) 00:38:26.126 starting I/O failed 00:38:26.126 Read completed with error (sct=0, sc=8) 00:38:26.126 starting I/O failed 00:38:26.126 Read completed with error (sct=0, sc=8) 00:38:26.126 starting I/O failed 00:38:26.126 Write completed with error (sct=0, sc=8) 00:38:26.126 starting I/O failed 00:38:26.126 Read completed with error (sct=0, sc=8) 00:38:26.126 starting I/O failed 00:38:26.126 Write completed with error (sct=0, sc=8) 00:38:26.126 starting I/O failed 00:38:26.126 [2024-10-09 11:18:45.502087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:26.126 [2024-10-09 11:18:45.502479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.126 [2024-10-09 11:18:45.502503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f18000b90 with addr=10.0.0.2, port=4420 00:38:26.126 qpair failed and we were unable to recover it. 00:38:26.126 [2024-10-09 11:18:45.502865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.126 [2024-10-09 11:18:45.502904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f18000b90 with addr=10.0.0.2, port=4420 00:38:26.126 qpair failed and we were unable to recover it. 00:38:26.126 [2024-10-09 11:18:45.503245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.126 [2024-10-09 11:18:45.503258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f18000b90 with addr=10.0.0.2, port=4420 00:38:26.126 qpair failed and we were unable to recover it. 00:38:26.126 [2024-10-09 11:18:45.503776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.126 [2024-10-09 11:18:45.503815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f18000b90 with addr=10.0.0.2, port=4420 00:38:26.126 qpair failed and we were unable to recover it. 00:38:26.126 [2024-10-09 11:18:45.503996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.126 [2024-10-09 11:18:45.504008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.126 qpair failed and we were unable to recover it. 00:38:26.126 [2024-10-09 11:18:45.504170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.126 [2024-10-09 11:18:45.504179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.126 qpair failed and we were unable to recover it. 00:38:26.126 [2024-10-09 11:18:45.504504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.126 [2024-10-09 11:18:45.504513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.126 qpair failed and we were unable to recover it. 00:38:26.126 [2024-10-09 11:18:45.504875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.126 [2024-10-09 11:18:45.504883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.126 qpair failed and we were unable to recover it. 
00:38:26.126 [2024-10-09 11:18:45.505051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.126 [2024-10-09 11:18:45.505060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.126 qpair failed and we were unable to recover it. 00:38:26.126 [2024-10-09 11:18:45.505355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.126 [2024-10-09 11:18:45.505363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.126 qpair failed and we were unable to recover it. 00:38:26.126 [2024-10-09 11:18:45.505577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.126 [2024-10-09 11:18:45.505586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.126 qpair failed and we were unable to recover it. 00:38:26.126 [2024-10-09 11:18:45.505961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.126 [2024-10-09 11:18:45.505970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.126 qpair failed and we were unable to recover it. 00:38:26.126 [2024-10-09 11:18:45.506151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.126 [2024-10-09 11:18:45.506160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.126 qpair failed and we were unable to recover it. 00:38:26.126 [2024-10-09 11:18:45.506497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.126 [2024-10-09 11:18:45.506507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.126 qpair failed and we were unable to recover it. 00:38:26.126 [2024-10-09 11:18:45.506891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.126 [2024-10-09 11:18:45.506900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.126 qpair failed and we were unable to recover it. 00:38:26.126 [2024-10-09 11:18:45.507076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.126 [2024-10-09 11:18:45.507085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.126 qpair failed and we were unable to recover it. 00:38:26.126 [2024-10-09 11:18:45.507259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.126 [2024-10-09 11:18:45.507269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.126 qpair failed and we were unable to recover it. 00:38:26.126 [2024-10-09 11:18:45.507473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.126 [2024-10-09 11:18:45.507482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.126 qpair failed and we were unable to recover it. 
00:38:26.126 [2024-10-09 11:18:45.507787 through 11:18:45.527584] the same three-line sequence repeats for every further reconnect attempt in this window: posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.
00:38:26.127 [2024-10-09 11:18:45.527863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.127 [2024-10-09 11:18:45.527872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.127 qpair failed and we were unable to recover it. 00:38:26.127 [2024-10-09 11:18:45.528173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.127 [2024-10-09 11:18:45.528183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.127 qpair failed and we were unable to recover it. 00:38:26.127 [2024-10-09 11:18:45.528492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.127 [2024-10-09 11:18:45.528500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.127 qpair failed and we were unable to recover it. 00:38:26.127 [2024-10-09 11:18:45.528823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.127 [2024-10-09 11:18:45.528832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.127 qpair failed and we were unable to recover it. 00:38:26.127 [2024-10-09 11:18:45.528875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.127 [2024-10-09 11:18:45.528882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.127 qpair failed and we were unable to recover it. 00:38:26.127 [2024-10-09 11:18:45.529339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.127 [2024-10-09 11:18:45.529354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.127 qpair failed and we were unable to recover it. 00:38:26.127 [2024-10-09 11:18:45.529539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.127 [2024-10-09 11:18:45.529547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.127 qpair failed and we were unable to recover it. 00:38:26.127 [2024-10-09 11:18:45.529777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.127 [2024-10-09 11:18:45.529786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.127 qpair failed and we were unable to recover it. 00:38:26.127 [2024-10-09 11:18:45.530071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.127 [2024-10-09 11:18:45.530080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.127 qpair failed and we were unable to recover it. 00:38:26.127 [2024-10-09 11:18:45.530405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.127 [2024-10-09 11:18:45.530418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.127 qpair failed and we were unable to recover it. 
00:38:26.127 [2024-10-09 11:18:45.530732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.127 [2024-10-09 11:18:45.530741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.127 qpair failed and we were unable to recover it. 00:38:26.127 [2024-10-09 11:18:45.531040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.127 [2024-10-09 11:18:45.531050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.127 qpair failed and we were unable to recover it. 00:38:26.127 [2024-10-09 11:18:45.531505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.127 [2024-10-09 11:18:45.531520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.127 qpair failed and we were unable to recover it. 00:38:26.127 [2024-10-09 11:18:45.531691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.127 [2024-10-09 11:18:45.531698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.127 qpair failed and we were unable to recover it. 00:38:26.127 [2024-10-09 11:18:45.532049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.127 [2024-10-09 11:18:45.532056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.127 qpair failed and we were unable to recover it. 00:38:26.127 [2024-10-09 11:18:45.532215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.127 [2024-10-09 11:18:45.532221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.127 qpair failed and we were unable to recover it. 00:38:26.127 [2024-10-09 11:18:45.532569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.127 [2024-10-09 11:18:45.532577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.127 qpair failed and we were unable to recover it. 00:38:26.127 [2024-10-09 11:18:45.532915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.127 [2024-10-09 11:18:45.532922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.127 qpair failed and we were unable to recover it. 00:38:26.127 [2024-10-09 11:18:45.533210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.127 [2024-10-09 11:18:45.533217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.127 qpair failed and we were unable to recover it. 00:38:26.127 [2024-10-09 11:18:45.533532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.127 [2024-10-09 11:18:45.533540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.127 qpair failed and we were unable to recover it. 
00:38:26.127 [2024-10-09 11:18:45.533858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.127 [2024-10-09 11:18:45.533865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.127 qpair failed and we were unable to recover it. 00:38:26.127 [2024-10-09 11:18:45.534162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.127 [2024-10-09 11:18:45.534169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.127 qpair failed and we were unable to recover it. 00:38:26.127 [2024-10-09 11:18:45.534484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.127 [2024-10-09 11:18:45.534491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.127 qpair failed and we were unable to recover it. 00:38:26.127 [2024-10-09 11:18:45.534682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.127 [2024-10-09 11:18:45.534690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.127 qpair failed and we were unable to recover it. 00:38:26.127 [2024-10-09 11:18:45.535003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.127 [2024-10-09 11:18:45.535011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.127 qpair failed and we were unable to recover it. 00:38:26.127 [2024-10-09 11:18:45.535318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.127 [2024-10-09 11:18:45.535325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.127 qpair failed and we were unable to recover it. 00:38:26.127 [2024-10-09 11:18:45.535643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.127 [2024-10-09 11:18:45.535651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.127 qpair failed and we were unable to recover it. 00:38:26.127 [2024-10-09 11:18:45.535944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.127 [2024-10-09 11:18:45.535950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.127 qpair failed and we were unable to recover it. 00:38:26.127 [2024-10-09 11:18:45.536264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.127 [2024-10-09 11:18:45.536271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.127 qpair failed and we were unable to recover it. 00:38:26.127 [2024-10-09 11:18:45.536589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.127 [2024-10-09 11:18:45.536597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.127 qpair failed and we were unable to recover it. 
00:38:26.127 [2024-10-09 11:18:45.536962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.127 [2024-10-09 11:18:45.536969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.127 qpair failed and we were unable to recover it. 00:38:26.127 [2024-10-09 11:18:45.537249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.128 [2024-10-09 11:18:45.537258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.128 qpair failed and we were unable to recover it. 00:38:26.128 [2024-10-09 11:18:45.537662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.128 [2024-10-09 11:18:45.537669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.128 qpair failed and we were unable to recover it. 00:38:26.128 [2024-10-09 11:18:45.537982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.128 [2024-10-09 11:18:45.537989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.128 qpair failed and we were unable to recover it. 00:38:26.128 [2024-10-09 11:18:45.538308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.128 [2024-10-09 11:18:45.538316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.128 qpair failed and we were unable to recover it. 00:38:26.128 [2024-10-09 11:18:45.538568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.128 [2024-10-09 11:18:45.538575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.128 qpair failed and we were unable to recover it. 00:38:26.128 [2024-10-09 11:18:45.538884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.128 [2024-10-09 11:18:45.538890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.128 qpair failed and we were unable to recover it. 00:38:26.128 [2024-10-09 11:18:45.539178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.128 [2024-10-09 11:18:45.539185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.128 qpair failed and we were unable to recover it. 00:38:26.128 [2024-10-09 11:18:45.539490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.128 [2024-10-09 11:18:45.539496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.128 qpair failed and we were unable to recover it. 00:38:26.128 [2024-10-09 11:18:45.539706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.128 [2024-10-09 11:18:45.539714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.128 qpair failed and we were unable to recover it. 
00:38:26.128 [2024-10-09 11:18:45.540011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.128 [2024-10-09 11:18:45.540018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.128 qpair failed and we were unable to recover it. 00:38:26.128 [2024-10-09 11:18:45.540276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.128 [2024-10-09 11:18:45.540283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.128 qpair failed and we were unable to recover it. 00:38:26.128 [2024-10-09 11:18:45.540548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.128 [2024-10-09 11:18:45.540555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.128 qpair failed and we were unable to recover it. 00:38:26.128 [2024-10-09 11:18:45.540735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.128 [2024-10-09 11:18:45.540742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.128 qpair failed and we were unable to recover it. 00:38:26.128 [2024-10-09 11:18:45.541053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.128 [2024-10-09 11:18:45.541060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.128 qpair failed and we were unable to recover it. 00:38:26.128 [2024-10-09 11:18:45.541367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.128 [2024-10-09 11:18:45.541373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.128 qpair failed and we were unable to recover it. 00:38:26.128 [2024-10-09 11:18:45.541695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.128 [2024-10-09 11:18:45.541702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.128 qpair failed and we were unable to recover it. 00:38:26.128 [2024-10-09 11:18:45.542002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.128 [2024-10-09 11:18:45.542009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.128 qpair failed and we were unable to recover it. 00:38:26.128 [2024-10-09 11:18:45.542319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.128 [2024-10-09 11:18:45.542325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.128 qpair failed and we were unable to recover it. 00:38:26.128 [2024-10-09 11:18:45.542715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.128 [2024-10-09 11:18:45.542722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.128 qpair failed and we were unable to recover it. 
00:38:26.128 [2024-10-09 11:18:45.542892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.128 [2024-10-09 11:18:45.542899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.128 qpair failed and we were unable to recover it. 00:38:26.128 [2024-10-09 11:18:45.543197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.128 [2024-10-09 11:18:45.543204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.128 qpair failed and we were unable to recover it. 00:38:26.128 [2024-10-09 11:18:45.543504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.128 [2024-10-09 11:18:45.543511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.128 qpair failed and we were unable to recover it. 00:38:26.128 [2024-10-09 11:18:45.543879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.128 [2024-10-09 11:18:45.543886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.128 qpair failed and we were unable to recover it. 00:38:26.128 [2024-10-09 11:18:45.544221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.128 [2024-10-09 11:18:45.544228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.128 qpair failed and we were unable to recover it. 00:38:26.128 [2024-10-09 11:18:45.544476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.128 [2024-10-09 11:18:45.544483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.128 qpair failed and we were unable to recover it. 00:38:26.128 [2024-10-09 11:18:45.544764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.128 [2024-10-09 11:18:45.544771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.128 qpair failed and we were unable to recover it. 00:38:26.128 [2024-10-09 11:18:45.544933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.128 [2024-10-09 11:18:45.544940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.128 qpair failed and we were unable to recover it. 00:38:26.128 [2024-10-09 11:18:45.545203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.128 [2024-10-09 11:18:45.545210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.128 qpair failed and we were unable to recover it. 00:38:26.128 [2024-10-09 11:18:45.545554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.128 [2024-10-09 11:18:45.545562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.128 qpair failed and we were unable to recover it. 
00:38:26.128 [2024-10-09 11:18:45.545850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.128 [2024-10-09 11:18:45.545857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.128 qpair failed and we were unable to recover it. 00:38:26.128 [2024-10-09 11:18:45.546030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.128 [2024-10-09 11:18:45.546038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.128 qpair failed and we were unable to recover it. 00:38:26.128 [2024-10-09 11:18:45.546220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.128 [2024-10-09 11:18:45.546227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.128 qpair failed and we were unable to recover it. 00:38:26.128 [2024-10-09 11:18:45.546500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.128 [2024-10-09 11:18:45.546508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.128 qpair failed and we were unable to recover it. 00:38:26.128 [2024-10-09 11:18:45.546719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.128 [2024-10-09 11:18:45.546726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.128 qpair failed and we were unable to recover it. 00:38:26.128 [2024-10-09 11:18:45.547039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.128 [2024-10-09 11:18:45.547046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.128 qpair failed and we were unable to recover it. 00:38:26.128 [2024-10-09 11:18:45.547355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.128 [2024-10-09 11:18:45.547361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.128 qpair failed and we were unable to recover it. 00:38:26.128 [2024-10-09 11:18:45.547658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.128 [2024-10-09 11:18:45.547665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.128 qpair failed and we were unable to recover it. 00:38:26.128 [2024-10-09 11:18:45.547947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.128 [2024-10-09 11:18:45.547954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.128 qpair failed and we were unable to recover it. 00:38:26.128 [2024-10-09 11:18:45.548253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.128 [2024-10-09 11:18:45.548260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.128 qpair failed and we were unable to recover it. 
00:38:26.128 [2024-10-09 11:18:45.548474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.128 [2024-10-09 11:18:45.548481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.128 qpair failed and we were unable to recover it. 00:38:26.128 [2024-10-09 11:18:45.548758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.128 [2024-10-09 11:18:45.548767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.128 qpair failed and we were unable to recover it. 00:38:26.128 [2024-10-09 11:18:45.549061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.128 [2024-10-09 11:18:45.549067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.128 qpair failed and we were unable to recover it. 00:38:26.128 [2024-10-09 11:18:45.549409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.128 [2024-10-09 11:18:45.549416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.128 qpair failed and we were unable to recover it. 00:38:26.128 [2024-10-09 11:18:45.549710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.128 [2024-10-09 11:18:45.549718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.128 qpair failed and we were unable to recover it. 00:38:26.128 [2024-10-09 11:18:45.550038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.128 [2024-10-09 11:18:45.550044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.128 qpair failed and we were unable to recover it. 00:38:26.128 [2024-10-09 11:18:45.550313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.128 [2024-10-09 11:18:45.550321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.128 qpair failed and we were unable to recover it. 00:38:26.128 [2024-10-09 11:18:45.550479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.128 [2024-10-09 11:18:45.550486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.128 qpair failed and we were unable to recover it. 00:38:26.128 [2024-10-09 11:18:45.550809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.128 [2024-10-09 11:18:45.550816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.128 qpair failed and we were unable to recover it. 00:38:26.128 [2024-10-09 11:18:45.551114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.129 [2024-10-09 11:18:45.551121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.129 qpair failed and we were unable to recover it. 
00:38:26.129 [2024-10-09 11:18:45.551343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.129 [2024-10-09 11:18:45.551351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.129 qpair failed and we were unable to recover it. 00:38:26.129 [2024-10-09 11:18:45.551554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.129 [2024-10-09 11:18:45.551561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.129 qpair failed and we were unable to recover it. 00:38:26.129 [2024-10-09 11:18:45.551866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.129 [2024-10-09 11:18:45.551873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.129 qpair failed and we were unable to recover it. 00:38:26.129 [2024-10-09 11:18:45.552177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.129 [2024-10-09 11:18:45.552183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.129 qpair failed and we were unable to recover it. 00:38:26.129 [2024-10-09 11:18:45.552490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.129 [2024-10-09 11:18:45.552497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.129 qpair failed and we were unable to recover it. 00:38:26.129 [2024-10-09 11:18:45.552808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.129 [2024-10-09 11:18:45.552816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.129 qpair failed and we were unable to recover it. 00:38:26.129 [2024-10-09 11:18:45.553083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.129 [2024-10-09 11:18:45.553090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.129 qpair failed and we were unable to recover it. 00:38:26.129 [2024-10-09 11:18:45.553281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.129 [2024-10-09 11:18:45.553288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.129 qpair failed and we were unable to recover it. 00:38:26.129 [2024-10-09 11:18:45.553492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.129 [2024-10-09 11:18:45.553499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.129 qpair failed and we were unable to recover it. 00:38:26.129 [2024-10-09 11:18:45.553780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.129 [2024-10-09 11:18:45.553786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.129 qpair failed and we were unable to recover it. 
00:38:26.129 [2024-10-09 11:18:45.553938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.129 [2024-10-09 11:18:45.553945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.129 qpair failed and we were unable to recover it. 00:38:26.129 [2024-10-09 11:18:45.554222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.129 [2024-10-09 11:18:45.554229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.129 qpair failed and we were unable to recover it. 00:38:26.129 [2024-10-09 11:18:45.554551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.129 [2024-10-09 11:18:45.554558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.129 qpair failed and we were unable to recover it. 00:38:26.129 [2024-10-09 11:18:45.554859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.129 [2024-10-09 11:18:45.554866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.129 qpair failed and we were unable to recover it. 00:38:26.129 [2024-10-09 11:18:45.555160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.129 [2024-10-09 11:18:45.555167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.129 qpair failed and we were unable to recover it. 00:38:26.129 [2024-10-09 11:18:45.555472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.129 [2024-10-09 11:18:45.555479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.129 qpair failed and we were unable to recover it. 00:38:26.129 [2024-10-09 11:18:45.555776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.129 [2024-10-09 11:18:45.555783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.129 qpair failed and we were unable to recover it. 00:38:26.129 [2024-10-09 11:18:45.555955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.129 [2024-10-09 11:18:45.555962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.129 qpair failed and we were unable to recover it. 00:38:26.129 [2024-10-09 11:18:45.556311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.129 [2024-10-09 11:18:45.556318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.129 qpair failed and we were unable to recover it. 00:38:26.129 [2024-10-09 11:18:45.556628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.129 [2024-10-09 11:18:45.556635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.129 qpair failed and we were unable to recover it. 
00:38:26.129 [2024-10-09 11:18:45.556965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.129 [2024-10-09 11:18:45.556972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.129 qpair failed and we were unable to recover it. 00:38:26.129 [2024-10-09 11:18:45.557096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.129 [2024-10-09 11:18:45.557104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.129 qpair failed and we were unable to recover it. 00:38:26.129 [2024-10-09 11:18:45.557387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.129 [2024-10-09 11:18:45.557394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.129 qpair failed and we were unable to recover it. 00:38:26.129 [2024-10-09 11:18:45.557683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.129 [2024-10-09 11:18:45.557690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.129 qpair failed and we were unable to recover it. 00:38:26.129 [2024-10-09 11:18:45.557986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.129 [2024-10-09 11:18:45.557992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.129 qpair failed and we were unable to recover it. 00:38:26.129 [2024-10-09 11:18:45.558287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.129 [2024-10-09 11:18:45.558294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.129 qpair failed and we were unable to recover it. 00:38:26.129 [2024-10-09 11:18:45.558588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.129 [2024-10-09 11:18:45.558595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.129 qpair failed and we were unable to recover it. 00:38:26.129 [2024-10-09 11:18:45.558877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.129 [2024-10-09 11:18:45.558884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.129 qpair failed and we were unable to recover it. 00:38:26.129 [2024-10-09 11:18:45.559185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.129 [2024-10-09 11:18:45.559197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.129 qpair failed and we were unable to recover it. 00:38:26.129 [2024-10-09 11:18:45.559481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.129 [2024-10-09 11:18:45.559488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.129 qpair failed and we were unable to recover it. 
00:38:26.129 [2024-10-09 11:18:45.559803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.129 [2024-10-09 11:18:45.559810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.129 qpair failed and we were unable to recover it. 00:38:26.129 [2024-10-09 11:18:45.560134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.129 [2024-10-09 11:18:45.560143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.129 qpair failed and we were unable to recover it. 00:38:26.129 [2024-10-09 11:18:45.560469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.129 [2024-10-09 11:18:45.560476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.129 qpair failed and we were unable to recover it. 00:38:26.129 [2024-10-09 11:18:45.560824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.129 [2024-10-09 11:18:45.560831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.129 qpair failed and we were unable to recover it. 00:38:26.129 [2024-10-09 11:18:45.561040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.129 [2024-10-09 11:18:45.561047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.129 qpair failed and we were unable to recover it. 00:38:26.129 [2024-10-09 11:18:45.561229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.129 [2024-10-09 11:18:45.561235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.129 qpair failed and we were unable to recover it. 00:38:26.129 [2024-10-09 11:18:45.561510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.129 [2024-10-09 11:18:45.561517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.129 qpair failed and we were unable to recover it. 00:38:26.129 [2024-10-09 11:18:45.561774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.129 [2024-10-09 11:18:45.561781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.129 qpair failed and we were unable to recover it. 00:38:26.129 [2024-10-09 11:18:45.562154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.129 [2024-10-09 11:18:45.562161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.129 qpair failed and we were unable to recover it. 00:38:26.129 [2024-10-09 11:18:45.562438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.129 [2024-10-09 11:18:45.562445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.129 qpair failed and we were unable to recover it. 
00:38:26.129 [2024-10-09 11:18:45.562734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.129 [2024-10-09 11:18:45.562741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.129 qpair failed and we were unable to recover it. 00:38:26.129 [2024-10-09 11:18:45.562903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.129 [2024-10-09 11:18:45.562910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.129 qpair failed and we were unable to recover it. 00:38:26.129 [2024-10-09 11:18:45.563104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.129 [2024-10-09 11:18:45.563111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.129 qpair failed and we were unable to recover it. 00:38:26.129 [2024-10-09 11:18:45.563380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.129 [2024-10-09 11:18:45.563386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.129 qpair failed and we were unable to recover it. 00:38:26.129 [2024-10-09 11:18:45.563608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.129 [2024-10-09 11:18:45.563615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.129 qpair failed and we were unable to recover it. 00:38:26.129 [2024-10-09 11:18:45.563918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.129 [2024-10-09 11:18:45.563925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.129 qpair failed and we were unable to recover it. 00:38:26.129 [2024-10-09 11:18:45.564213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.129 [2024-10-09 11:18:45.564221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.129 qpair failed and we were unable to recover it. 00:38:26.129 [2024-10-09 11:18:45.564537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.129 [2024-10-09 11:18:45.564544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.129 qpair failed and we were unable to recover it. 00:38:26.129 [2024-10-09 11:18:45.564821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.129 [2024-10-09 11:18:45.564828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.129 qpair failed and we were unable to recover it. 00:38:26.129 [2024-10-09 11:18:45.565128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.129 [2024-10-09 11:18:45.565134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.129 qpair failed and we were unable to recover it. 
00:38:26.129 [2024-10-09 11:18:45.565421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.129 [2024-10-09 11:18:45.565429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.129 qpair failed and we were unable to recover it. 00:38:26.129 [2024-10-09 11:18:45.565759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.129 [2024-10-09 11:18:45.565766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.129 qpair failed and we were unable to recover it. 00:38:26.129 [2024-10-09 11:18:45.566071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.129 [2024-10-09 11:18:45.566079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.129 qpair failed and we were unable to recover it. 00:38:26.129 [2024-10-09 11:18:45.566447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.129 [2024-10-09 11:18:45.566454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.129 qpair failed and we were unable to recover it. 00:38:26.129 [2024-10-09 11:18:45.567253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.129 [2024-10-09 11:18:45.567269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.129 qpair failed and we were unable to recover it. 00:38:26.129 [2024-10-09 11:18:45.567583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.129 [2024-10-09 11:18:45.567591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.129 qpair failed and we were unable to recover it. 00:38:26.129 [2024-10-09 11:18:45.567911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.129 [2024-10-09 11:18:45.567918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.129 qpair failed and we were unable to recover it. 00:38:26.129 [2024-10-09 11:18:45.568111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.129 [2024-10-09 11:18:45.568119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.129 qpair failed and we were unable to recover it. 00:38:26.129 [2024-10-09 11:18:45.568473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.129 [2024-10-09 11:18:45.568482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.129 qpair failed and we were unable to recover it. 00:38:26.129 [2024-10-09 11:18:45.568862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.129 [2024-10-09 11:18:45.568869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.129 qpair failed and we were unable to recover it. 
00:38:26.129 [2024-10-09 11:18:45.569177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.129 [2024-10-09 11:18:45.569184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.129 qpair failed and we were unable to recover it.
00:38:26.129 [... the same three-message sequence repeats roughly 200 more times between 2024-10-09 11:18:45.569 and 11:18:45.630, every attempt failing identically with errno = 111 against tqpair=0x7f9f10000b90, addr=10.0.0.2, port=4420 ...]
00:38:26.133 [2024-10-09 11:18:45.629943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.133 [2024-10-09 11:18:45.629950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.133 qpair failed and we were unable to recover it.
00:38:26.133 [2024-10-09 11:18:45.630257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.133 [2024-10-09 11:18:45.630264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.133 qpair failed and we were unable to recover it. 00:38:26.133 [2024-10-09 11:18:45.630575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.133 [2024-10-09 11:18:45.630582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.133 qpair failed and we were unable to recover it. 00:38:26.133 [2024-10-09 11:18:45.630901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.133 [2024-10-09 11:18:45.630908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.133 qpair failed and we were unable to recover it. 00:38:26.133 [2024-10-09 11:18:45.631191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.133 [2024-10-09 11:18:45.631198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.133 qpair failed and we were unable to recover it. 00:38:26.133 [2024-10-09 11:18:45.631351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.133 [2024-10-09 11:18:45.631359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.133 qpair failed and we were unable to recover it. 00:38:26.133 [2024-10-09 11:18:45.631512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.133 [2024-10-09 11:18:45.631519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.133 qpair failed and we were unable to recover it. 00:38:26.133 [2024-10-09 11:18:45.631800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.133 [2024-10-09 11:18:45.631807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.133 qpair failed and we were unable to recover it. 00:38:26.133 [2024-10-09 11:18:45.632122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.133 [2024-10-09 11:18:45.632129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.133 qpair failed and we were unable to recover it. 00:38:26.133 [2024-10-09 11:18:45.632516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.133 [2024-10-09 11:18:45.632523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.133 qpair failed and we were unable to recover it. 00:38:26.133 [2024-10-09 11:18:45.632716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.133 [2024-10-09 11:18:45.632722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.133 qpair failed and we were unable to recover it. 
00:38:26.133 [2024-10-09 11:18:45.633015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.133 [2024-10-09 11:18:45.633022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.133 qpair failed and we were unable to recover it. 00:38:26.133 [2024-10-09 11:18:45.633328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.133 [2024-10-09 11:18:45.633335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.133 qpair failed and we were unable to recover it. 00:38:26.133 [2024-10-09 11:18:45.633641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.133 [2024-10-09 11:18:45.633648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.133 qpair failed and we were unable to recover it. 00:38:26.133 [2024-10-09 11:18:45.633939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.133 [2024-10-09 11:18:45.633946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.133 qpair failed and we were unable to recover it. 00:38:26.133 [2024-10-09 11:18:45.634137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.133 [2024-10-09 11:18:45.634144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.133 qpair failed and we were unable to recover it. 00:38:26.133 [2024-10-09 11:18:45.634438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.133 [2024-10-09 11:18:45.634444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.133 qpair failed and we were unable to recover it. 00:38:26.133 [2024-10-09 11:18:45.634737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.133 [2024-10-09 11:18:45.634744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.133 qpair failed and we were unable to recover it. 00:38:26.133 [2024-10-09 11:18:45.635055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.133 [2024-10-09 11:18:45.635062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.133 qpair failed and we were unable to recover it. 00:38:26.133 [2024-10-09 11:18:45.635355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.133 [2024-10-09 11:18:45.635362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.133 qpair failed and we were unable to recover it. 00:38:26.133 [2024-10-09 11:18:45.635677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.133 [2024-10-09 11:18:45.635684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.133 qpair failed and we were unable to recover it. 
00:38:26.133 [2024-10-09 11:18:45.635994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.133 [2024-10-09 11:18:45.636002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.133 qpair failed and we were unable to recover it. 00:38:26.133 [2024-10-09 11:18:45.636295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.133 [2024-10-09 11:18:45.636302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.133 qpair failed and we were unable to recover it. 00:38:26.133 [2024-10-09 11:18:45.636609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.133 [2024-10-09 11:18:45.636616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.133 qpair failed and we were unable to recover it. 00:38:26.133 [2024-10-09 11:18:45.636901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.133 [2024-10-09 11:18:45.636907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.133 qpair failed and we were unable to recover it. 00:38:26.133 [2024-10-09 11:18:45.637211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.133 [2024-10-09 11:18:45.637218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.133 qpair failed and we were unable to recover it. 00:38:26.133 [2024-10-09 11:18:45.637529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.133 [2024-10-09 11:18:45.637537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.133 qpair failed and we were unable to recover it. 00:38:26.133 [2024-10-09 11:18:45.637835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.133 [2024-10-09 11:18:45.637843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.133 qpair failed and we were unable to recover it. 00:38:26.133 [2024-10-09 11:18:45.638138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.133 [2024-10-09 11:18:45.638144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.133 qpair failed and we were unable to recover it. 00:38:26.133 [2024-10-09 11:18:45.638437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.133 [2024-10-09 11:18:45.638446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.133 qpair failed and we were unable to recover it. 00:38:26.133 [2024-10-09 11:18:45.638744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.133 [2024-10-09 11:18:45.638751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.133 qpair failed and we were unable to recover it. 
00:38:26.133 [2024-10-09 11:18:45.639045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.133 [2024-10-09 11:18:45.639052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.133 qpair failed and we were unable to recover it. 00:38:26.133 [2024-10-09 11:18:45.639377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.134 [2024-10-09 11:18:45.639384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.134 qpair failed and we were unable to recover it. 00:38:26.134 [2024-10-09 11:18:45.639689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.134 [2024-10-09 11:18:45.639697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.134 qpair failed and we were unable to recover it. 00:38:26.134 [2024-10-09 11:18:45.639987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.134 [2024-10-09 11:18:45.639994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.134 qpair failed and we were unable to recover it. 00:38:26.134 [2024-10-09 11:18:45.640304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.134 [2024-10-09 11:18:45.640311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.134 qpair failed and we were unable to recover it. 00:38:26.134 [2024-10-09 11:18:45.640615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.134 [2024-10-09 11:18:45.640622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.134 qpair failed and we were unable to recover it. 00:38:26.134 [2024-10-09 11:18:45.640924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.134 [2024-10-09 11:18:45.640930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.134 qpair failed and we were unable to recover it. 00:38:26.134 [2024-10-09 11:18:45.641220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.134 [2024-10-09 11:18:45.641226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.134 qpair failed and we were unable to recover it. 00:38:26.134 [2024-10-09 11:18:45.641407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.134 [2024-10-09 11:18:45.641414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.134 qpair failed and we were unable to recover it. 00:38:26.134 [2024-10-09 11:18:45.641787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.134 [2024-10-09 11:18:45.641794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.134 qpair failed and we were unable to recover it. 
00:38:26.134 [2024-10-09 11:18:45.642088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.134 [2024-10-09 11:18:45.642095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.134 qpair failed and we were unable to recover it. 00:38:26.134 [2024-10-09 11:18:45.642266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.134 [2024-10-09 11:18:45.642274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.134 qpair failed and we were unable to recover it. 00:38:26.134 [2024-10-09 11:18:45.642607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.134 [2024-10-09 11:18:45.642614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.134 qpair failed and we were unable to recover it. 00:38:26.134 [2024-10-09 11:18:45.643022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.134 [2024-10-09 11:18:45.643028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.134 qpair failed and we were unable to recover it. 00:38:26.134 [2024-10-09 11:18:45.643344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.134 [2024-10-09 11:18:45.643350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.134 qpair failed and we were unable to recover it. 00:38:26.134 [2024-10-09 11:18:45.643690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.134 [2024-10-09 11:18:45.643697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.134 qpair failed and we were unable to recover it. 00:38:26.134 [2024-10-09 11:18:45.643992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.134 [2024-10-09 11:18:45.644000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.134 qpair failed and we were unable to recover it. 00:38:26.134 [2024-10-09 11:18:45.644346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.134 [2024-10-09 11:18:45.644353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.134 qpair failed and we were unable to recover it. 00:38:26.134 [2024-10-09 11:18:45.644551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.134 [2024-10-09 11:18:45.644559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.134 qpair failed and we were unable to recover it. 00:38:26.134 [2024-10-09 11:18:45.644871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.134 [2024-10-09 11:18:45.644878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.134 qpair failed and we were unable to recover it. 
00:38:26.134 [2024-10-09 11:18:45.645186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.134 [2024-10-09 11:18:45.645193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.134 qpair failed and we were unable to recover it. 00:38:26.134 [2024-10-09 11:18:45.645512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.134 [2024-10-09 11:18:45.645519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.134 qpair failed and we were unable to recover it. 00:38:26.134 [2024-10-09 11:18:45.645820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.134 [2024-10-09 11:18:45.645827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.134 qpair failed and we were unable to recover it. 00:38:26.134 [2024-10-09 11:18:45.646211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.134 [2024-10-09 11:18:45.646219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.134 qpair failed and we were unable to recover it. 00:38:26.134 [2024-10-09 11:18:45.646521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.134 [2024-10-09 11:18:45.646528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.134 qpair failed and we were unable to recover it. 00:38:26.134 [2024-10-09 11:18:45.646889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.134 [2024-10-09 11:18:45.646896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.134 qpair failed and we were unable to recover it. 00:38:26.134 [2024-10-09 11:18:45.647200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.134 [2024-10-09 11:18:45.647207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.134 qpair failed and we were unable to recover it. 00:38:26.134 [2024-10-09 11:18:45.647506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.134 [2024-10-09 11:18:45.647513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.134 qpair failed and we were unable to recover it. 00:38:26.134 [2024-10-09 11:18:45.647736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.134 [2024-10-09 11:18:45.647743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.134 qpair failed and we were unable to recover it. 00:38:26.134 [2024-10-09 11:18:45.648072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.134 [2024-10-09 11:18:45.648078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.134 qpair failed and we were unable to recover it. 
00:38:26.134 [2024-10-09 11:18:45.648304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.134 [2024-10-09 11:18:45.648311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.134 qpair failed and we were unable to recover it. 00:38:26.134 [2024-10-09 11:18:45.648474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.134 [2024-10-09 11:18:45.648480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.134 qpair failed and we were unable to recover it. 00:38:26.134 [2024-10-09 11:18:45.648763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.134 [2024-10-09 11:18:45.648770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.134 qpair failed and we were unable to recover it. 00:38:26.134 [2024-10-09 11:18:45.649119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.134 [2024-10-09 11:18:45.649126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.134 qpair failed and we were unable to recover it. 00:38:26.134 [2024-10-09 11:18:45.649434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.134 [2024-10-09 11:18:45.649442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.134 qpair failed and we were unable to recover it. 00:38:26.134 [2024-10-09 11:18:45.649816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.134 [2024-10-09 11:18:45.649823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.134 qpair failed and we were unable to recover it. 00:38:26.134 [2024-10-09 11:18:45.650102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.134 [2024-10-09 11:18:45.650115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.134 qpair failed and we were unable to recover it. 00:38:26.134 [2024-10-09 11:18:45.650372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.134 [2024-10-09 11:18:45.650379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.134 qpair failed and we were unable to recover it. 00:38:26.134 [2024-10-09 11:18:45.650790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.134 [2024-10-09 11:18:45.650799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.134 qpair failed and we were unable to recover it. 00:38:26.134 [2024-10-09 11:18:45.651132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.134 [2024-10-09 11:18:45.651139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.134 qpair failed and we were unable to recover it. 
00:38:26.134 [2024-10-09 11:18:45.651428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.134 [2024-10-09 11:18:45.651436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.134 qpair failed and we were unable to recover it. 00:38:26.134 [2024-10-09 11:18:45.651746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.134 [2024-10-09 11:18:45.651754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.134 qpair failed and we were unable to recover it. 00:38:26.134 [2024-10-09 11:18:45.652045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.134 [2024-10-09 11:18:45.652051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.134 qpair failed and we were unable to recover it. 00:38:26.134 [2024-10-09 11:18:45.652362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.134 [2024-10-09 11:18:45.652369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.134 qpair failed and we were unable to recover it. 00:38:26.134 [2024-10-09 11:18:45.652759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.134 [2024-10-09 11:18:45.652766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.134 qpair failed and we were unable to recover it. 00:38:26.134 [2024-10-09 11:18:45.652934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.134 [2024-10-09 11:18:45.652941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.134 qpair failed and we were unable to recover it. 00:38:26.134 [2024-10-09 11:18:45.653207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.134 [2024-10-09 11:18:45.653215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.134 qpair failed and we were unable to recover it. 00:38:26.134 [2024-10-09 11:18:45.653427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.134 [2024-10-09 11:18:45.653434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.134 qpair failed and we were unable to recover it. 00:38:26.134 [2024-10-09 11:18:45.653526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.134 [2024-10-09 11:18:45.653533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.134 qpair failed and we were unable to recover it. 00:38:26.134 [2024-10-09 11:18:45.653723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.134 [2024-10-09 11:18:45.653730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.134 qpair failed and we were unable to recover it. 
00:38:26.134 [2024-10-09 11:18:45.653940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.134 [2024-10-09 11:18:45.653947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.134 qpair failed and we were unable to recover it. 00:38:26.134 [2024-10-09 11:18:45.654243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.134 [2024-10-09 11:18:45.654249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.134 qpair failed and we were unable to recover it. 00:38:26.134 [2024-10-09 11:18:45.654528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.134 [2024-10-09 11:18:45.654536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.134 qpair failed and we were unable to recover it. 00:38:26.134 [2024-10-09 11:18:45.654737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.134 [2024-10-09 11:18:45.654744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.134 qpair failed and we were unable to recover it. 00:38:26.134 [2024-10-09 11:18:45.655034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.134 [2024-10-09 11:18:45.655042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.134 qpair failed and we were unable to recover it. 00:38:26.134 [2024-10-09 11:18:45.655353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.134 [2024-10-09 11:18:45.655360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.134 qpair failed and we were unable to recover it. 00:38:26.134 [2024-10-09 11:18:45.655532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.134 [2024-10-09 11:18:45.655540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.134 qpair failed and we were unable to recover it. 00:38:26.134 [2024-10-09 11:18:45.655904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.134 [2024-10-09 11:18:45.655910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.134 qpair failed and we were unable to recover it. 00:38:26.134 [2024-10-09 11:18:45.656106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.134 [2024-10-09 11:18:45.656113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.134 qpair failed and we were unable to recover it. 00:38:26.134 [2024-10-09 11:18:45.656293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.134 [2024-10-09 11:18:45.656300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.134 qpair failed and we were unable to recover it. 
00:38:26.134 [2024-10-09 11:18:45.656582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.134 [2024-10-09 11:18:45.656589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.135 qpair failed and we were unable to recover it. 00:38:26.135 [2024-10-09 11:18:45.656908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.135 [2024-10-09 11:18:45.656915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.135 qpair failed and we were unable to recover it. 00:38:26.135 [2024-10-09 11:18:45.657229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.135 [2024-10-09 11:18:45.657235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.135 qpair failed and we were unable to recover it. 00:38:26.135 [2024-10-09 11:18:45.657521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.135 [2024-10-09 11:18:45.657529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.135 qpair failed and we were unable to recover it. 00:38:26.135 [2024-10-09 11:18:45.657826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.135 [2024-10-09 11:18:45.657833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.135 qpair failed and we were unable to recover it. 00:38:26.135 [2024-10-09 11:18:45.658146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.135 [2024-10-09 11:18:45.658153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.135 qpair failed and we were unable to recover it. 00:38:26.135 [2024-10-09 11:18:45.658436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.135 [2024-10-09 11:18:45.658443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.135 qpair failed and we were unable to recover it. 00:38:26.135 [2024-10-09 11:18:45.658655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.135 [2024-10-09 11:18:45.658663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.135 qpair failed and we were unable to recover it. 00:38:26.135 [2024-10-09 11:18:45.658978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.135 [2024-10-09 11:18:45.658986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.135 qpair failed and we were unable to recover it. 00:38:26.135 [2024-10-09 11:18:45.659277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.135 [2024-10-09 11:18:45.659284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.135 qpair failed and we were unable to recover it. 
00:38:26.135 [2024-10-09 11:18:45.659590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.135 [2024-10-09 11:18:45.659597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.135 qpair failed and we were unable to recover it. 00:38:26.135 [2024-10-09 11:18:45.659890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.135 [2024-10-09 11:18:45.659898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.135 qpair failed and we were unable to recover it. 00:38:26.135 [2024-10-09 11:18:45.660204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.135 [2024-10-09 11:18:45.660210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.135 qpair failed and we were unable to recover it. 00:38:26.135 [2024-10-09 11:18:45.660513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.135 [2024-10-09 11:18:45.660520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.135 qpair failed and we were unable to recover it. 00:38:26.135 [2024-10-09 11:18:45.660824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.135 [2024-10-09 11:18:45.660830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.135 qpair failed and we were unable to recover it. 00:38:26.135 [2024-10-09 11:18:45.661129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.135 [2024-10-09 11:18:45.661136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.135 qpair failed and we were unable to recover it. 00:38:26.135 [2024-10-09 11:18:45.661198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.135 [2024-10-09 11:18:45.661205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.135 qpair failed and we were unable to recover it. 00:38:26.135 [2024-10-09 11:18:45.661349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.135 [2024-10-09 11:18:45.661356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.135 qpair failed and we were unable to recover it. 00:38:26.135 [2024-10-09 11:18:45.661531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.135 [2024-10-09 11:18:45.661540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.135 qpair failed and we were unable to recover it. 00:38:26.135 [2024-10-09 11:18:45.661813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.135 [2024-10-09 11:18:45.661820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.135 qpair failed and we were unable to recover it. 
00:38:26.135 [2024-10-09 11:18:45.662128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.135 [2024-10-09 11:18:45.662135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.135 qpair failed and we were unable to recover it. 00:38:26.135 [2024-10-09 11:18:45.662460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.135 [2024-10-09 11:18:45.662479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.135 qpair failed and we were unable to recover it. 00:38:26.135 [2024-10-09 11:18:45.662813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.135 [2024-10-09 11:18:45.662820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.135 qpair failed and we were unable to recover it. 00:38:26.135 [2024-10-09 11:18:45.663112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.135 [2024-10-09 11:18:45.663118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.135 qpair failed and we were unable to recover it. 00:38:26.135 [2024-10-09 11:18:45.663415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.135 [2024-10-09 11:18:45.663423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.135 qpair failed and we were unable to recover it. 00:38:26.135 [2024-10-09 11:18:45.663754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.135 [2024-10-09 11:18:45.663762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.135 qpair failed and we were unable to recover it. 00:38:26.135 [2024-10-09 11:18:45.663931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.135 [2024-10-09 11:18:45.663939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.135 qpair failed and we were unable to recover it. 00:38:26.135 [2024-10-09 11:18:45.664245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.135 [2024-10-09 11:18:45.664252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.135 qpair failed and we were unable to recover it. 00:38:26.135 [2024-10-09 11:18:45.664539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.135 [2024-10-09 11:18:45.664546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.135 qpair failed and we were unable to recover it. 00:38:26.135 [2024-10-09 11:18:45.664859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.135 [2024-10-09 11:18:45.664865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.135 qpair failed and we were unable to recover it. 
00:38:26.135 [2024-10-09 11:18:45.665169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.135 [2024-10-09 11:18:45.665176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.135 qpair failed and we were unable to recover it. 00:38:26.135 [2024-10-09 11:18:45.665491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.135 [2024-10-09 11:18:45.665498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.135 qpair failed and we were unable to recover it. 00:38:26.135 [2024-10-09 11:18:45.665806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.135 [2024-10-09 11:18:45.665813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.135 qpair failed and we were unable to recover it. 00:38:26.135 [2024-10-09 11:18:45.666117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.135 [2024-10-09 11:18:45.666124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.135 qpair failed and we were unable to recover it. 00:38:26.135 [2024-10-09 11:18:45.666413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.135 [2024-10-09 11:18:45.666420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.135 qpair failed and we were unable to recover it. 00:38:26.135 [2024-10-09 11:18:45.666794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.135 [2024-10-09 11:18:45.666801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.135 qpair failed and we were unable to recover it. 00:38:26.135 [2024-10-09 11:18:45.666982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.135 [2024-10-09 11:18:45.666989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.135 qpair failed and we were unable to recover it. 00:38:26.135 [2024-10-09 11:18:45.667285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.135 [2024-10-09 11:18:45.667292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.135 qpair failed and we were unable to recover it. 00:38:26.135 [2024-10-09 11:18:45.667602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.135 [2024-10-09 11:18:45.667609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.135 qpair failed and we were unable to recover it. 00:38:26.135 [2024-10-09 11:18:45.667933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.135 [2024-10-09 11:18:45.667940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.135 qpair failed and we were unable to recover it. 
00:38:26.135 [2024-10-09 11:18:45.668267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.135 [2024-10-09 11:18:45.668274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420
00:38:26.135 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() failed, errno = 111 / sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it.") repeats for every reconnect attempt from 11:18:45.668 through 11:18:45.730 ...]
00:38:26.139 [2024-10-09 11:18:45.730197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.139 [2024-10-09 11:18:45.730204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420
00:38:26.139 qpair failed and we were unable to recover it.
00:38:26.139 [2024-10-09 11:18:45.730510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.139 [2024-10-09 11:18:45.730517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.139 qpair failed and we were unable to recover it. 00:38:26.139 [2024-10-09 11:18:45.730825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.139 [2024-10-09 11:18:45.730832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.139 qpair failed and we were unable to recover it. 00:38:26.139 [2024-10-09 11:18:45.731143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.139 [2024-10-09 11:18:45.731150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.139 qpair failed and we were unable to recover it. 00:38:26.139 [2024-10-09 11:18:45.731456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.139 [2024-10-09 11:18:45.731463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.139 qpair failed and we were unable to recover it. 00:38:26.139 [2024-10-09 11:18:45.731781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.139 [2024-10-09 11:18:45.731788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.139 qpair failed and we were unable to recover it. 00:38:26.139 [2024-10-09 11:18:45.732091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.139 [2024-10-09 11:18:45.732099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.139 qpair failed and we were unable to recover it. 00:38:26.139 [2024-10-09 11:18:45.732406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.139 [2024-10-09 11:18:45.732413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.139 qpair failed and we were unable to recover it. 00:38:26.139 [2024-10-09 11:18:45.732700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.139 [2024-10-09 11:18:45.732707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.139 qpair failed and we were unable to recover it. 00:38:26.139 [2024-10-09 11:18:45.733026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.139 [2024-10-09 11:18:45.733034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.139 qpair failed and we were unable to recover it. 00:38:26.139 [2024-10-09 11:18:45.733336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.139 [2024-10-09 11:18:45.733343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.139 qpair failed and we were unable to recover it. 
00:38:26.139 [2024-10-09 11:18:45.733652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.139 [2024-10-09 11:18:45.733659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.139 qpair failed and we were unable to recover it. 00:38:26.139 [2024-10-09 11:18:45.733880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.139 [2024-10-09 11:18:45.733888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.139 qpair failed and we were unable to recover it. 00:38:26.139 [2024-10-09 11:18:45.734196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.139 [2024-10-09 11:18:45.734205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.139 qpair failed and we were unable to recover it. 00:38:26.139 [2024-10-09 11:18:45.734505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.139 [2024-10-09 11:18:45.734512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.139 qpair failed and we were unable to recover it. 00:38:26.139 [2024-10-09 11:18:45.734680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.139 [2024-10-09 11:18:45.734688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.139 qpair failed and we were unable to recover it. 00:38:26.139 [2024-10-09 11:18:45.735008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.139 [2024-10-09 11:18:45.735014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.139 qpair failed and we were unable to recover it. 00:38:26.139 [2024-10-09 11:18:45.735307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.139 [2024-10-09 11:18:45.735314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.139 qpair failed and we were unable to recover it. 00:38:26.139 [2024-10-09 11:18:45.735622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.139 [2024-10-09 11:18:45.735629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.139 qpair failed and we were unable to recover it. 00:38:26.139 [2024-10-09 11:18:45.735925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.139 [2024-10-09 11:18:45.735933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.139 qpair failed and we were unable to recover it. 00:38:26.139 [2024-10-09 11:18:45.736296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.139 [2024-10-09 11:18:45.736303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.139 qpair failed and we were unable to recover it. 
00:38:26.139 [2024-10-09 11:18:45.736473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.139 [2024-10-09 11:18:45.736481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.139 qpair failed and we were unable to recover it. 00:38:26.139 [2024-10-09 11:18:45.736781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.139 [2024-10-09 11:18:45.736789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.139 qpair failed and we were unable to recover it. 00:38:26.139 [2024-10-09 11:18:45.737081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.139 [2024-10-09 11:18:45.737088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.139 qpair failed and we were unable to recover it. 00:38:26.139 [2024-10-09 11:18:45.737475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.139 [2024-10-09 11:18:45.737482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.139 qpair failed and we were unable to recover it. 00:38:26.139 [2024-10-09 11:18:45.737776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.139 [2024-10-09 11:18:45.737783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.139 qpair failed and we were unable to recover it. 00:38:26.139 [2024-10-09 11:18:45.738086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.139 [2024-10-09 11:18:45.738092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.139 qpair failed and we were unable to recover it. 00:38:26.139 [2024-10-09 11:18:45.738401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.139 [2024-10-09 11:18:45.738408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.139 qpair failed and we were unable to recover it. 00:38:26.139 [2024-10-09 11:18:45.738596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.139 [2024-10-09 11:18:45.738604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.139 qpair failed and we were unable to recover it. 00:38:26.139 [2024-10-09 11:18:45.738998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.139 [2024-10-09 11:18:45.739005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.139 qpair failed and we were unable to recover it. 00:38:26.139 [2024-10-09 11:18:45.739293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.139 [2024-10-09 11:18:45.739300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.139 qpair failed and we were unable to recover it. 
00:38:26.139 [2024-10-09 11:18:45.739610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.139 [2024-10-09 11:18:45.739617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.139 qpair failed and we were unable to recover it. 00:38:26.139 [2024-10-09 11:18:45.739770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.139 [2024-10-09 11:18:45.739777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.139 qpair failed and we were unable to recover it. 00:38:26.139 [2024-10-09 11:18:45.740050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.139 [2024-10-09 11:18:45.740057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.139 qpair failed and we were unable to recover it. 00:38:26.139 [2024-10-09 11:18:45.740245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.139 [2024-10-09 11:18:45.740252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.139 qpair failed and we were unable to recover it. 00:38:26.139 [2024-10-09 11:18:45.740549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.139 [2024-10-09 11:18:45.740557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.139 qpair failed and we were unable to recover it. 00:38:26.139 [2024-10-09 11:18:45.740858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.139 [2024-10-09 11:18:45.740868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.139 qpair failed and we were unable to recover it. 00:38:26.139 [2024-10-09 11:18:45.741218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.139 [2024-10-09 11:18:45.741225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.139 qpair failed and we were unable to recover it. 00:38:26.139 [2024-10-09 11:18:45.741523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.139 [2024-10-09 11:18:45.741530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.139 qpair failed and we were unable to recover it. 00:38:26.139 [2024-10-09 11:18:45.741839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.139 [2024-10-09 11:18:45.741845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.139 qpair failed and we were unable to recover it. 00:38:26.139 [2024-10-09 11:18:45.742144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.139 [2024-10-09 11:18:45.742151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.139 qpair failed and we were unable to recover it. 
00:38:26.139 [2024-10-09 11:18:45.742450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.139 [2024-10-09 11:18:45.742458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.139 qpair failed and we were unable to recover it. 00:38:26.139 [2024-10-09 11:18:45.742817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.139 [2024-10-09 11:18:45.742824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.139 qpair failed and we were unable to recover it. 00:38:26.139 [2024-10-09 11:18:45.743133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.139 [2024-10-09 11:18:45.743140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.139 qpair failed and we were unable to recover it. 00:38:26.139 [2024-10-09 11:18:45.743451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.139 [2024-10-09 11:18:45.743458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.139 qpair failed and we were unable to recover it. 00:38:26.139 [2024-10-09 11:18:45.743756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.139 [2024-10-09 11:18:45.743763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.139 qpair failed and we were unable to recover it. 00:38:26.139 [2024-10-09 11:18:45.744084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.139 [2024-10-09 11:18:45.744091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.139 qpair failed and we were unable to recover it. 00:38:26.139 [2024-10-09 11:18:45.744380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.139 [2024-10-09 11:18:45.744387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.139 qpair failed and we were unable to recover it. 00:38:26.139 [2024-10-09 11:18:45.744671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.139 [2024-10-09 11:18:45.744678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.139 qpair failed and we were unable to recover it. 00:38:26.139 [2024-10-09 11:18:45.744870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.139 [2024-10-09 11:18:45.744877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.139 qpair failed and we were unable to recover it. 00:38:26.140 [2024-10-09 11:18:45.745240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.140 [2024-10-09 11:18:45.745247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.140 qpair failed and we were unable to recover it. 
00:38:26.140 [2024-10-09 11:18:45.745557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.140 [2024-10-09 11:18:45.745565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.140 qpair failed and we were unable to recover it. 00:38:26.140 [2024-10-09 11:18:45.745873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.140 [2024-10-09 11:18:45.745880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.140 qpair failed and we were unable to recover it. 00:38:26.140 [2024-10-09 11:18:45.746157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.140 [2024-10-09 11:18:45.746165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.140 qpair failed and we were unable to recover it. 00:38:26.140 [2024-10-09 11:18:45.746413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.140 [2024-10-09 11:18:45.746420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.140 qpair failed and we were unable to recover it. 00:38:26.140 [2024-10-09 11:18:45.746725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.140 [2024-10-09 11:18:45.746733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.140 qpair failed and we were unable to recover it. 00:38:26.140 [2024-10-09 11:18:45.747018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.140 [2024-10-09 11:18:45.747025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.140 qpair failed and we were unable to recover it. 00:38:26.140 [2024-10-09 11:18:45.747177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.140 [2024-10-09 11:18:45.747185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.140 qpair failed and we were unable to recover it. 00:38:26.140 [2024-10-09 11:18:45.747492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.140 [2024-10-09 11:18:45.747500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.140 qpair failed and we were unable to recover it. 00:38:26.140 [2024-10-09 11:18:45.747791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.140 [2024-10-09 11:18:45.747798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.140 qpair failed and we were unable to recover it. 00:38:26.140 [2024-10-09 11:18:45.748103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.140 [2024-10-09 11:18:45.748110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.140 qpair failed and we were unable to recover it. 
00:38:26.140 [2024-10-09 11:18:45.748420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.140 [2024-10-09 11:18:45.748427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.140 qpair failed and we were unable to recover it. 00:38:26.140 [2024-10-09 11:18:45.748738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.140 [2024-10-09 11:18:45.748745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.140 qpair failed and we were unable to recover it. 00:38:26.140 [2024-10-09 11:18:45.749058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.140 [2024-10-09 11:18:45.749065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.140 qpair failed and we were unable to recover it. 00:38:26.140 [2024-10-09 11:18:45.749348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.140 [2024-10-09 11:18:45.749355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.140 qpair failed and we were unable to recover it. 00:38:26.140 [2024-10-09 11:18:45.749673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.140 [2024-10-09 11:18:45.749680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.140 qpair failed and we were unable to recover it. 00:38:26.140 [2024-10-09 11:18:45.749989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.140 [2024-10-09 11:18:45.749996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.140 qpair failed and we were unable to recover it. 00:38:26.140 [2024-10-09 11:18:45.750306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.140 [2024-10-09 11:18:45.750313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.140 qpair failed and we were unable to recover it. 00:38:26.140 [2024-10-09 11:18:45.750604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.140 [2024-10-09 11:18:45.750611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.140 qpair failed and we were unable to recover it. 00:38:26.140 [2024-10-09 11:18:45.750838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.140 [2024-10-09 11:18:45.750845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.140 qpair failed and we were unable to recover it. 00:38:26.140 [2024-10-09 11:18:45.751139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.140 [2024-10-09 11:18:45.751146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.140 qpair failed and we were unable to recover it. 
00:38:26.140 [2024-10-09 11:18:45.751449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.140 [2024-10-09 11:18:45.751457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.140 qpair failed and we were unable to recover it. 00:38:26.140 [2024-10-09 11:18:45.751653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.140 [2024-10-09 11:18:45.751660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.140 qpair failed and we were unable to recover it. 00:38:26.140 [2024-10-09 11:18:45.751977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.140 [2024-10-09 11:18:45.751984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.140 qpair failed and we were unable to recover it. 00:38:26.140 [2024-10-09 11:18:45.752297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.140 [2024-10-09 11:18:45.752304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.140 qpair failed and we were unable to recover it. 00:38:26.140 [2024-10-09 11:18:45.752516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.140 [2024-10-09 11:18:45.752524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.140 qpair failed and we were unable to recover it. 00:38:26.140 [2024-10-09 11:18:45.752707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.140 [2024-10-09 11:18:45.752717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.140 qpair failed and we were unable to recover it. 00:38:26.140 [2024-10-09 11:18:45.753060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.140 [2024-10-09 11:18:45.753066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.140 qpair failed and we were unable to recover it. 00:38:26.140 [2024-10-09 11:18:45.753263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.140 [2024-10-09 11:18:45.753270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.140 qpair failed and we were unable to recover it. 00:38:26.140 [2024-10-09 11:18:45.753473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.140 [2024-10-09 11:18:45.753480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.140 qpair failed and we were unable to recover it. 00:38:26.140 [2024-10-09 11:18:45.753768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.140 [2024-10-09 11:18:45.753776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.140 qpair failed and we were unable to recover it. 
00:38:26.140 [2024-10-09 11:18:45.753950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.140 [2024-10-09 11:18:45.753957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.140 qpair failed and we were unable to recover it. 00:38:26.140 [2024-10-09 11:18:45.754254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.140 [2024-10-09 11:18:45.754261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.140 qpair failed and we were unable to recover it. 00:38:26.140 [2024-10-09 11:18:45.754565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.140 [2024-10-09 11:18:45.754573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.140 qpair failed and we were unable to recover it. 00:38:26.140 [2024-10-09 11:18:45.754861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.140 [2024-10-09 11:18:45.754868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.140 qpair failed and we were unable to recover it. 00:38:26.140 [2024-10-09 11:18:45.755177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.140 [2024-10-09 11:18:45.755184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.140 qpair failed and we were unable to recover it. 00:38:26.140 [2024-10-09 11:18:45.755493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.140 [2024-10-09 11:18:45.755500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.140 qpair failed and we were unable to recover it. 00:38:26.140 [2024-10-09 11:18:45.755812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.140 [2024-10-09 11:18:45.755819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.140 qpair failed and we were unable to recover it. 00:38:26.140 [2024-10-09 11:18:45.756130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.140 [2024-10-09 11:18:45.756138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.140 qpair failed and we were unable to recover it. 00:38:26.140 [2024-10-09 11:18:45.756446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.140 [2024-10-09 11:18:45.756453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.140 qpair failed and we were unable to recover it. 00:38:26.140 [2024-10-09 11:18:45.756754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.140 [2024-10-09 11:18:45.756762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.140 qpair failed and we were unable to recover it. 
00:38:26.140 [2024-10-09 11:18:45.757072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.140 [2024-10-09 11:18:45.757079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.140 qpair failed and we were unable to recover it. 00:38:26.140 [2024-10-09 11:18:45.757407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.140 [2024-10-09 11:18:45.757413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.140 qpair failed and we were unable to recover it. 00:38:26.140 [2024-10-09 11:18:45.757759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.140 [2024-10-09 11:18:45.757765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.140 qpair failed and we were unable to recover it. 00:38:26.140 [2024-10-09 11:18:45.758079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.140 [2024-10-09 11:18:45.758086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.140 qpair failed and we were unable to recover it. 00:38:26.140 [2024-10-09 11:18:45.758375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.140 [2024-10-09 11:18:45.758382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.140 qpair failed and we were unable to recover it. 00:38:26.140 [2024-10-09 11:18:45.758684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.140 [2024-10-09 11:18:45.758691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.140 qpair failed and we were unable to recover it. 00:38:26.140 [2024-10-09 11:18:45.759007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.140 [2024-10-09 11:18:45.759014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.140 qpair failed and we were unable to recover it. 00:38:26.140 [2024-10-09 11:18:45.759301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.140 [2024-10-09 11:18:45.759308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.140 qpair failed and we were unable to recover it. 00:38:26.140 [2024-10-09 11:18:45.759587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.140 [2024-10-09 11:18:45.759595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.140 qpair failed and we were unable to recover it. 00:38:26.140 [2024-10-09 11:18:45.759777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.140 [2024-10-09 11:18:45.759784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.140 qpair failed and we were unable to recover it. 
00:38:26.140 [2024-10-09 11:18:45.760112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.140 [2024-10-09 11:18:45.760120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.140 qpair failed and we were unable to recover it. 00:38:26.140 [2024-10-09 11:18:45.760318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.140 [2024-10-09 11:18:45.760325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.140 qpair failed and we were unable to recover it. 00:38:26.140 [2024-10-09 11:18:45.760657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.140 [2024-10-09 11:18:45.760664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.140 qpair failed and we were unable to recover it. 00:38:26.140 [2024-10-09 11:18:45.760963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.140 [2024-10-09 11:18:45.760971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.140 qpair failed and we were unable to recover it. 00:38:26.140 [2024-10-09 11:18:45.761151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.140 [2024-10-09 11:18:45.761158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.140 qpair failed and we were unable to recover it. 00:38:26.140 [2024-10-09 11:18:45.761450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.140 [2024-10-09 11:18:45.761457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.140 qpair failed and we were unable to recover it. 00:38:26.140 [2024-10-09 11:18:45.761754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.140 [2024-10-09 11:18:45.761761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.140 qpair failed and we were unable to recover it. 00:38:26.140 [2024-10-09 11:18:45.762065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.140 [2024-10-09 11:18:45.762072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.140 qpair failed and we were unable to recover it. 00:38:26.140 [2024-10-09 11:18:45.762387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.140 [2024-10-09 11:18:45.762393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.140 qpair failed and we were unable to recover it. 00:38:26.140 [2024-10-09 11:18:45.762691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.140 [2024-10-09 11:18:45.762708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.140 qpair failed and we were unable to recover it. 
00:38:26.141 [2024-10-09 11:18:45.763018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.141 [2024-10-09 11:18:45.763026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.141 qpair failed and we were unable to recover it. 00:38:26.141 [2024-10-09 11:18:45.763203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.141 [2024-10-09 11:18:45.763210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.141 qpair failed and we were unable to recover it. 00:38:26.141 [2024-10-09 11:18:45.763513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.141 [2024-10-09 11:18:45.763521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.141 qpair failed and we were unable to recover it. 00:38:26.141 [2024-10-09 11:18:45.763835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.141 [2024-10-09 11:18:45.763842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.141 qpair failed and we were unable to recover it. 00:38:26.141 [2024-10-09 11:18:45.764148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.141 [2024-10-09 11:18:45.764156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.141 qpair failed and we were unable to recover it. 00:38:26.141 [2024-10-09 11:18:45.764440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.141 [2024-10-09 11:18:45.764451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.141 qpair failed and we were unable to recover it. 00:38:26.141 [2024-10-09 11:18:45.764823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.141 [2024-10-09 11:18:45.764830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.141 qpair failed and we were unable to recover it. 00:38:26.141 [2024-10-09 11:18:45.765131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.141 [2024-10-09 11:18:45.765139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.141 qpair failed and we were unable to recover it. 00:38:26.141 [2024-10-09 11:18:45.765327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.141 [2024-10-09 11:18:45.765334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.141 qpair failed and we were unable to recover it. 00:38:26.141 [2024-10-09 11:18:45.765533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.141 [2024-10-09 11:18:45.765540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.141 qpair failed and we were unable to recover it. 
00:38:26.141 [2024-10-09 11:18:45.765804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.141 [2024-10-09 11:18:45.765811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.141 qpair failed and we were unable to recover it. 00:38:26.141 [2024-10-09 11:18:45.766097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.141 [2024-10-09 11:18:45.766103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.141 qpair failed and we were unable to recover it. 00:38:26.141 [2024-10-09 11:18:45.766297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.141 [2024-10-09 11:18:45.766304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.141 qpair failed and we were unable to recover it. 00:38:26.141 [2024-10-09 11:18:45.766644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.141 [2024-10-09 11:18:45.766651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.141 qpair failed and we were unable to recover it. 00:38:26.141 [2024-10-09 11:18:45.766941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.141 [2024-10-09 11:18:45.766948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.141 qpair failed and we were unable to recover it. 00:38:26.141 [2024-10-09 11:18:45.767124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.141 [2024-10-09 11:18:45.767132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.141 qpair failed and we were unable to recover it. 00:38:26.141 [2024-10-09 11:18:45.767436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.141 [2024-10-09 11:18:45.767444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.141 qpair failed and we were unable to recover it. 00:38:26.141 [2024-10-09 11:18:45.767648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.141 [2024-10-09 11:18:45.767655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.141 qpair failed and we were unable to recover it. 00:38:26.141 [2024-10-09 11:18:45.767926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.141 [2024-10-09 11:18:45.767932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.141 qpair failed and we were unable to recover it. 00:38:26.141 [2024-10-09 11:18:45.768254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.141 [2024-10-09 11:18:45.768262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.141 qpair failed and we were unable to recover it. 
00:38:26.141 [2024-10-09 11:18:45.768648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.141 [2024-10-09 11:18:45.768655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.141 qpair failed and we were unable to recover it.
00:38:26.141 [... the same connect() failure (errno = 111) and recovery error for tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 repeat continuously, differing only in timestamp, from 11:18:45.768940 through 11:18:45.827442 ...]
00:38:26.144 [2024-10-09 11:18:45.827754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.144 [2024-10-09 11:18:45.827761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.144 qpair failed and we were unable to recover it.
00:38:26.144 [2024-10-09 11:18:45.828069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.144 [2024-10-09 11:18:45.828076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.144 qpair failed and we were unable to recover it. 00:38:26.144 [2024-10-09 11:18:45.828223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.144 [2024-10-09 11:18:45.828229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.144 qpair failed and we were unable to recover it. 00:38:26.144 [2024-10-09 11:18:45.828505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.144 [2024-10-09 11:18:45.828512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.144 qpair failed and we were unable to recover it. 00:38:26.144 [2024-10-09 11:18:45.828868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.144 [2024-10-09 11:18:45.828875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.144 qpair failed and we were unable to recover it. 00:38:26.144 [2024-10-09 11:18:45.829179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.144 [2024-10-09 11:18:45.829186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.144 qpair failed and we were unable to recover it. 00:38:26.144 [2024-10-09 11:18:45.829496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.144 [2024-10-09 11:18:45.829503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.144 qpair failed and we were unable to recover it. 00:38:26.144 [2024-10-09 11:18:45.829798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.144 [2024-10-09 11:18:45.829804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.144 qpair failed and we were unable to recover it. 00:38:26.144 [2024-10-09 11:18:45.830052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.144 [2024-10-09 11:18:45.830060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.145 qpair failed and we were unable to recover it. 00:38:26.145 [2024-10-09 11:18:45.830225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.145 [2024-10-09 11:18:45.830233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.145 qpair failed and we were unable to recover it. 00:38:26.145 [2024-10-09 11:18:45.830587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.145 [2024-10-09 11:18:45.830595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.145 qpair failed and we were unable to recover it. 
00:38:26.145 [2024-10-09 11:18:45.830900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.145 [2024-10-09 11:18:45.830907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.145 qpair failed and we were unable to recover it. 00:38:26.145 [2024-10-09 11:18:45.831207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.145 [2024-10-09 11:18:45.831213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.145 qpair failed and we were unable to recover it. 00:38:26.145 [2024-10-09 11:18:45.831522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.145 [2024-10-09 11:18:45.831529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.145 qpair failed and we were unable to recover it. 00:38:26.145 [2024-10-09 11:18:45.831718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.145 [2024-10-09 11:18:45.831725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.145 qpair failed and we were unable to recover it. 00:38:26.145 [2024-10-09 11:18:45.831943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.145 [2024-10-09 11:18:45.831950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.145 qpair failed and we were unable to recover it. 00:38:26.145 [2024-10-09 11:18:45.832245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.145 [2024-10-09 11:18:45.832255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.145 qpair failed and we were unable to recover it. 00:38:26.145 [2024-10-09 11:18:45.832571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.145 [2024-10-09 11:18:45.832578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.145 qpair failed and we were unable to recover it. 00:38:26.145 [2024-10-09 11:18:45.832871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.145 [2024-10-09 11:18:45.832878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.145 qpair failed and we were unable to recover it. 00:38:26.145 [2024-10-09 11:18:45.833188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.145 [2024-10-09 11:18:45.833195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.145 qpair failed and we were unable to recover it. 00:38:26.145 [2024-10-09 11:18:45.833578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.145 [2024-10-09 11:18:45.833585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.145 qpair failed and we were unable to recover it. 
00:38:26.145 [2024-10-09 11:18:45.833858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.145 [2024-10-09 11:18:45.833865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.145 qpair failed and we were unable to recover it. 00:38:26.145 [2024-10-09 11:18:45.834077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.145 [2024-10-09 11:18:45.834084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.145 qpair failed and we were unable to recover it. 00:38:26.145 [2024-10-09 11:18:45.834389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.145 [2024-10-09 11:18:45.834396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.145 qpair failed and we were unable to recover it. 00:38:26.145 [2024-10-09 11:18:45.834671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.145 [2024-10-09 11:18:45.834679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.145 qpair failed and we were unable to recover it. 00:38:26.145 [2024-10-09 11:18:45.834998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.145 [2024-10-09 11:18:45.835004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.145 qpair failed and we were unable to recover it. 00:38:26.145 [2024-10-09 11:18:45.835310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.145 [2024-10-09 11:18:45.835318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.145 qpair failed and we were unable to recover it. 00:38:26.145 [2024-10-09 11:18:45.835624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.145 [2024-10-09 11:18:45.835630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.145 qpair failed and we were unable to recover it. 00:38:26.145 [2024-10-09 11:18:45.835942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.145 [2024-10-09 11:18:45.835949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.145 qpair failed and we were unable to recover it. 00:38:26.145 [2024-10-09 11:18:45.836249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.145 [2024-10-09 11:18:45.836256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.145 qpair failed and we were unable to recover it. 00:38:26.145 [2024-10-09 11:18:45.836484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.145 [2024-10-09 11:18:45.836491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.145 qpair failed and we were unable to recover it. 
00:38:26.145 [2024-10-09 11:18:45.836802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.145 [2024-10-09 11:18:45.836810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.145 qpair failed and we were unable to recover it. 00:38:26.145 [2024-10-09 11:18:45.837117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.145 [2024-10-09 11:18:45.837125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.145 qpair failed and we were unable to recover it. 00:38:26.145 [2024-10-09 11:18:45.837413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.145 [2024-10-09 11:18:45.837420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.145 qpair failed and we were unable to recover it. 00:38:26.145 [2024-10-09 11:18:45.837623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.145 [2024-10-09 11:18:45.837630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.145 qpair failed and we were unable to recover it. 00:38:26.145 [2024-10-09 11:18:45.837840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.145 [2024-10-09 11:18:45.837847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.145 qpair failed and we were unable to recover it. 00:38:26.145 [2024-10-09 11:18:45.838148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.145 [2024-10-09 11:18:45.838155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.145 qpair failed and we were unable to recover it. 00:38:26.145 [2024-10-09 11:18:45.838468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.145 [2024-10-09 11:18:45.838476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.145 qpair failed and we were unable to recover it. 00:38:26.145 [2024-10-09 11:18:45.838794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.145 [2024-10-09 11:18:45.838801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.145 qpair failed and we were unable to recover it. 00:38:26.145 [2024-10-09 11:18:45.839096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.145 [2024-10-09 11:18:45.839104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.145 qpair failed and we were unable to recover it. 00:38:26.145 [2024-10-09 11:18:45.839380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.145 [2024-10-09 11:18:45.839387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.145 qpair failed and we were unable to recover it. 
00:38:26.145 [2024-10-09 11:18:45.839697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.145 [2024-10-09 11:18:45.839704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.145 qpair failed and we were unable to recover it. 00:38:26.145 [2024-10-09 11:18:45.840005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.145 [2024-10-09 11:18:45.840012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.145 qpair failed and we were unable to recover it. 00:38:26.145 [2024-10-09 11:18:45.840307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.145 [2024-10-09 11:18:45.840314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.145 qpair failed and we were unable to recover it. 00:38:26.145 [2024-10-09 11:18:45.840609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.145 [2024-10-09 11:18:45.840616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.145 qpair failed and we were unable to recover it. 00:38:26.145 [2024-10-09 11:18:45.840780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.145 [2024-10-09 11:18:45.840787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.145 qpair failed and we were unable to recover it. 00:38:26.145 [2024-10-09 11:18:45.841098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.145 [2024-10-09 11:18:45.841105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.145 qpair failed and we were unable to recover it. 00:38:26.145 [2024-10-09 11:18:45.841405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.145 [2024-10-09 11:18:45.841413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.145 qpair failed and we were unable to recover it. 00:38:26.145 [2024-10-09 11:18:45.841754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.145 [2024-10-09 11:18:45.841761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.145 qpair failed and we were unable to recover it. 00:38:26.145 [2024-10-09 11:18:45.841902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.145 [2024-10-09 11:18:45.841909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.145 qpair failed and we were unable to recover it. 00:38:26.145 [2024-10-09 11:18:45.842088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.145 [2024-10-09 11:18:45.842095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.145 qpair failed and we were unable to recover it. 
00:38:26.145 [2024-10-09 11:18:45.842377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.145 [2024-10-09 11:18:45.842384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.145 qpair failed and we were unable to recover it. 00:38:26.145 [2024-10-09 11:18:45.842682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.145 [2024-10-09 11:18:45.842689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.145 qpair failed and we were unable to recover it. 00:38:26.145 [2024-10-09 11:18:45.843002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.145 [2024-10-09 11:18:45.843009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.145 qpair failed and we were unable to recover it. 00:38:26.145 [2024-10-09 11:18:45.843319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.145 [2024-10-09 11:18:45.843326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.145 qpair failed and we were unable to recover it. 00:38:26.145 [2024-10-09 11:18:45.843629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.145 [2024-10-09 11:18:45.843636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.145 qpair failed and we were unable to recover it. 00:38:26.145 [2024-10-09 11:18:45.843950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.145 [2024-10-09 11:18:45.843959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.145 qpair failed and we were unable to recover it. 00:38:26.145 [2024-10-09 11:18:45.844242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.145 [2024-10-09 11:18:45.844249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.145 qpair failed and we were unable to recover it. 00:38:26.145 [2024-10-09 11:18:45.844451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.145 [2024-10-09 11:18:45.844458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.145 qpair failed and we were unable to recover it. 00:38:26.145 [2024-10-09 11:18:45.844769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.145 [2024-10-09 11:18:45.844776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.145 qpair failed and we were unable to recover it. 00:38:26.145 [2024-10-09 11:18:45.845091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.145 [2024-10-09 11:18:45.845098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.145 qpair failed and we were unable to recover it. 
00:38:26.145 [2024-10-09 11:18:45.845400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.145 [2024-10-09 11:18:45.845408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.145 qpair failed and we were unable to recover it. 00:38:26.145 [2024-10-09 11:18:45.845586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.145 [2024-10-09 11:18:45.845595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.145 qpair failed and we were unable to recover it. 00:38:26.145 [2024-10-09 11:18:45.845888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.145 [2024-10-09 11:18:45.845895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.145 qpair failed and we were unable to recover it. 00:38:26.145 [2024-10-09 11:18:45.846187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.145 [2024-10-09 11:18:45.846194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.146 qpair failed and we were unable to recover it. 00:38:26.146 [2024-10-09 11:18:45.846476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.146 [2024-10-09 11:18:45.846484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.146 qpair failed and we were unable to recover it. 00:38:26.146 [2024-10-09 11:18:45.846766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.146 [2024-10-09 11:18:45.846772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.146 qpair failed and we were unable to recover it. 00:38:26.146 [2024-10-09 11:18:45.847045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.146 [2024-10-09 11:18:45.847052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.146 qpair failed and we were unable to recover it. 00:38:26.146 [2024-10-09 11:18:45.847247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.146 [2024-10-09 11:18:45.847254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.146 qpair failed and we were unable to recover it. 00:38:26.146 [2024-10-09 11:18:45.847549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.146 [2024-10-09 11:18:45.847556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.146 qpair failed and we were unable to recover it. 00:38:26.146 [2024-10-09 11:18:45.847736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.146 [2024-10-09 11:18:45.847744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.146 qpair failed and we were unable to recover it. 
00:38:26.146 [2024-10-09 11:18:45.848025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.146 [2024-10-09 11:18:45.848033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.146 qpair failed and we were unable to recover it. 00:38:26.146 [2024-10-09 11:18:45.848217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.146 [2024-10-09 11:18:45.848224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.146 qpair failed and we were unable to recover it. 00:38:26.146 [2024-10-09 11:18:45.848419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.146 [2024-10-09 11:18:45.848426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.146 qpair failed and we were unable to recover it. 00:38:26.146 [2024-10-09 11:18:45.848601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.146 [2024-10-09 11:18:45.848609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.146 qpair failed and we were unable to recover it. 00:38:26.146 [2024-10-09 11:18:45.848872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.146 [2024-10-09 11:18:45.848878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.146 qpair failed and we were unable to recover it. 00:38:26.146 [2024-10-09 11:18:45.849048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.146 [2024-10-09 11:18:45.849055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.146 qpair failed and we were unable to recover it. 00:38:26.146 [2024-10-09 11:18:45.849374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.146 [2024-10-09 11:18:45.849381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.146 qpair failed and we were unable to recover it. 00:38:26.146 [2024-10-09 11:18:45.849705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.146 [2024-10-09 11:18:45.849712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.146 qpair failed and we were unable to recover it. 00:38:26.146 [2024-10-09 11:18:45.850011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.146 [2024-10-09 11:18:45.850019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.146 qpair failed and we were unable to recover it. 00:38:26.146 [2024-10-09 11:18:45.850211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.146 [2024-10-09 11:18:45.850218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.146 qpair failed and we were unable to recover it. 
00:38:26.146 [2024-10-09 11:18:45.850380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.146 [2024-10-09 11:18:45.850388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.146 qpair failed and we were unable to recover it. 00:38:26.146 [2024-10-09 11:18:45.850664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.146 [2024-10-09 11:18:45.850671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.146 qpair failed and we were unable to recover it. 00:38:26.146 [2024-10-09 11:18:45.850969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.146 [2024-10-09 11:18:45.850976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.146 qpair failed and we were unable to recover it. 00:38:26.146 [2024-10-09 11:18:45.851297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.146 [2024-10-09 11:18:45.851303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.146 qpair failed and we were unable to recover it. 00:38:26.146 [2024-10-09 11:18:45.851588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.146 [2024-10-09 11:18:45.851595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.146 qpair failed and we were unable to recover it. 00:38:26.146 [2024-10-09 11:18:45.851921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.146 [2024-10-09 11:18:45.851927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.146 qpair failed and we were unable to recover it. 00:38:26.146 [2024-10-09 11:18:45.852238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.146 [2024-10-09 11:18:45.852246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.146 qpair failed and we were unable to recover it. 00:38:26.146 [2024-10-09 11:18:45.852433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.146 [2024-10-09 11:18:45.852440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.146 qpair failed and we were unable to recover it. 00:38:26.146 [2024-10-09 11:18:45.852750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.146 [2024-10-09 11:18:45.852757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.146 qpair failed and we were unable to recover it. 00:38:26.146 [2024-10-09 11:18:45.853060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.146 [2024-10-09 11:18:45.853068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.146 qpair failed and we were unable to recover it. 
00:38:26.146 [2024-10-09 11:18:45.853369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.146 [2024-10-09 11:18:45.853376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.146 qpair failed and we were unable to recover it. 00:38:26.146 [2024-10-09 11:18:45.853683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.146 [2024-10-09 11:18:45.853690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.146 qpair failed and we were unable to recover it. 00:38:26.146 [2024-10-09 11:18:45.854007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.146 [2024-10-09 11:18:45.854013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.146 qpair failed and we were unable to recover it. 00:38:26.146 [2024-10-09 11:18:45.854299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.146 [2024-10-09 11:18:45.854306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.146 qpair failed and we were unable to recover it. 00:38:26.146 [2024-10-09 11:18:45.854611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.146 [2024-10-09 11:18:45.854618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.146 qpair failed and we were unable to recover it. 00:38:26.146 [2024-10-09 11:18:45.854822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.146 [2024-10-09 11:18:45.854831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.146 qpair failed and we were unable to recover it. 00:38:26.146 [2024-10-09 11:18:45.855127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.146 [2024-10-09 11:18:45.855134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.146 qpair failed and we were unable to recover it. 00:38:26.146 [2024-10-09 11:18:45.855350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.146 [2024-10-09 11:18:45.855357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.146 qpair failed and we were unable to recover it. 00:38:26.146 [2024-10-09 11:18:45.855713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.146 [2024-10-09 11:18:45.855720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.146 qpair failed and we were unable to recover it. 00:38:26.146 [2024-10-09 11:18:45.856006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.146 [2024-10-09 11:18:45.856013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.146 qpair failed and we were unable to recover it. 
00:38:26.146 [2024-10-09 11:18:45.856220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.146 [2024-10-09 11:18:45.856226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.146 qpair failed and we were unable to recover it. 00:38:26.146 [2024-10-09 11:18:45.856416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.146 [2024-10-09 11:18:45.856422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.146 qpair failed and we were unable to recover it. 00:38:26.146 [2024-10-09 11:18:45.856740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.146 [2024-10-09 11:18:45.856747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.146 qpair failed and we were unable to recover it. 00:38:26.146 [2024-10-09 11:18:45.856945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.146 [2024-10-09 11:18:45.856953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.146 qpair failed and we were unable to recover it. 00:38:26.146 [2024-10-09 11:18:45.857255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.146 [2024-10-09 11:18:45.857262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.146 qpair failed and we were unable to recover it. 00:38:26.146 [2024-10-09 11:18:45.857553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.146 [2024-10-09 11:18:45.857560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.146 qpair failed and we were unable to recover it. 00:38:26.146 [2024-10-09 11:18:45.857887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.146 [2024-10-09 11:18:45.857894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.146 qpair failed and we were unable to recover it. 00:38:26.146 [2024-10-09 11:18:45.858156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.146 [2024-10-09 11:18:45.858164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.146 qpair failed and we were unable to recover it. 00:38:26.146 [2024-10-09 11:18:45.858486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.146 [2024-10-09 11:18:45.858494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.146 qpair failed and we were unable to recover it. 00:38:26.146 [2024-10-09 11:18:45.858803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.146 [2024-10-09 11:18:45.858810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.146 qpair failed and we were unable to recover it. 
00:38:26.146 [2024-10-09 11:18:45.859099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.146 [2024-10-09 11:18:45.859107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.146 qpair failed and we were unable to recover it. 00:38:26.146 [2024-10-09 11:18:45.859395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.146 [2024-10-09 11:18:45.859402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.146 qpair failed and we were unable to recover it. 00:38:26.146 [2024-10-09 11:18:45.859564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.146 [2024-10-09 11:18:45.859571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.146 qpair failed and we were unable to recover it. 00:38:26.146 [2024-10-09 11:18:45.859823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.146 [2024-10-09 11:18:45.859830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.146 qpair failed and we were unable to recover it. 00:38:26.146 [2024-10-09 11:18:45.860140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.146 [2024-10-09 11:18:45.860146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.146 qpair failed and we were unable to recover it. 00:38:26.146 [2024-10-09 11:18:45.860454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.146 [2024-10-09 11:18:45.860461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.146 qpair failed and we were unable to recover it. 00:38:26.146 [2024-10-09 11:18:45.860751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.146 [2024-10-09 11:18:45.860758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.146 qpair failed and we were unable to recover it. 00:38:26.146 [2024-10-09 11:18:45.861051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.146 [2024-10-09 11:18:45.861058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.146 qpair failed and we were unable to recover it. 00:38:26.146 [2024-10-09 11:18:45.861374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.146 [2024-10-09 11:18:45.861381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.146 qpair failed and we were unable to recover it. 00:38:26.146 [2024-10-09 11:18:45.861678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.146 [2024-10-09 11:18:45.861686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.146 qpair failed and we were unable to recover it. 
00:38:26.146 [2024-10-09 11:18:45.861988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.146 [2024-10-09 11:18:45.861995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.146 qpair failed and we were unable to recover it. 00:38:26.146 [2024-10-09 11:18:45.862292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.146 [2024-10-09 11:18:45.862299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.146 qpair failed and we were unable to recover it. 00:38:26.146 [2024-10-09 11:18:45.862603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.146 [2024-10-09 11:18:45.862612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.146 qpair failed and we were unable to recover it. 00:38:26.146 [2024-10-09 11:18:45.862987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.146 [2024-10-09 11:18:45.862994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.146 qpair failed and we were unable to recover it. 00:38:26.147 [2024-10-09 11:18:45.863279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.147 [2024-10-09 11:18:45.863286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.147 qpair failed and we were unable to recover it. 00:38:26.147 [2024-10-09 11:18:45.863633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.147 [2024-10-09 11:18:45.863641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.147 qpair failed and we were unable to recover it. 00:38:26.147 [2024-10-09 11:18:45.863975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.147 [2024-10-09 11:18:45.863983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.147 qpair failed and we were unable to recover it. 00:38:26.147 [2024-10-09 11:18:45.864171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.147 [2024-10-09 11:18:45.864177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.147 qpair failed and we were unable to recover it. 00:38:26.147 [2024-10-09 11:18:45.864493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.147 [2024-10-09 11:18:45.864500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.147 qpair failed and we were unable to recover it. 00:38:26.147 [2024-10-09 11:18:45.864824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.147 [2024-10-09 11:18:45.864831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.147 qpair failed and we were unable to recover it. 
00:38:26.147 [2024-10-09 11:18:45.865129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.147 [2024-10-09 11:18:45.865136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420
00:38:26.147 qpair failed and we were unable to recover it.
[... the three messages above repeat continuously through 2024-10-09 11:18:45.927, differing only in timestamps ...]
00:38:26.150 [2024-10-09 11:18:45.927026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.150 [2024-10-09 11:18:45.927033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420
00:38:26.150 qpair failed and we were unable to recover it.
00:38:26.150 [2024-10-09 11:18:45.927214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.150 [2024-10-09 11:18:45.927222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.150 qpair failed and we were unable to recover it. 00:38:26.150 [2024-10-09 11:18:45.927515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.150 [2024-10-09 11:18:45.927522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.150 qpair failed and we were unable to recover it. 00:38:26.150 [2024-10-09 11:18:45.927840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.150 [2024-10-09 11:18:45.927848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.150 qpair failed and we were unable to recover it. 00:38:26.150 [2024-10-09 11:18:45.928153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.150 [2024-10-09 11:18:45.928160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.150 qpair failed and we were unable to recover it. 00:38:26.150 [2024-10-09 11:18:45.928354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.151 [2024-10-09 11:18:45.928362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.151 qpair failed and we were unable to recover it. 00:38:26.151 [2024-10-09 11:18:45.928755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.151 [2024-10-09 11:18:45.928763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.151 qpair failed and we were unable to recover it. 00:38:26.151 [2024-10-09 11:18:45.929049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.151 [2024-10-09 11:18:45.929058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.151 qpair failed and we were unable to recover it. 00:38:26.151 [2024-10-09 11:18:45.929208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.151 [2024-10-09 11:18:45.929216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.151 qpair failed and we were unable to recover it. 00:38:26.151 [2024-10-09 11:18:45.929438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.151 [2024-10-09 11:18:45.929446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.151 qpair failed and we were unable to recover it. 00:38:26.151 [2024-10-09 11:18:45.929754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.151 [2024-10-09 11:18:45.929762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.151 qpair failed and we were unable to recover it. 
00:38:26.151 [2024-10-09 11:18:45.930071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.151 [2024-10-09 11:18:45.930079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.151 qpair failed and we were unable to recover it. 00:38:26.151 [2024-10-09 11:18:45.930379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.151 [2024-10-09 11:18:45.930386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.151 qpair failed and we were unable to recover it. 00:38:26.151 [2024-10-09 11:18:45.930680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.151 [2024-10-09 11:18:45.930687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.151 qpair failed and we were unable to recover it. 00:38:26.151 [2024-10-09 11:18:45.930863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.151 [2024-10-09 11:18:45.930871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.151 qpair failed and we were unable to recover it. 00:38:26.151 [2024-10-09 11:18:45.931142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.151 [2024-10-09 11:18:45.931151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.151 qpair failed and we were unable to recover it. 00:38:26.151 [2024-10-09 11:18:45.931327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.151 [2024-10-09 11:18:45.931336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.151 qpair failed and we were unable to recover it. 00:38:26.151 [2024-10-09 11:18:45.931521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.151 [2024-10-09 11:18:45.931529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.151 qpair failed and we were unable to recover it. 00:38:26.151 [2024-10-09 11:18:45.931738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.151 [2024-10-09 11:18:45.931746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.151 qpair failed and we were unable to recover it. 00:38:26.151 [2024-10-09 11:18:45.932037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.151 [2024-10-09 11:18:45.932045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.151 qpair failed and we were unable to recover it. 00:38:26.151 [2024-10-09 11:18:45.932381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.151 [2024-10-09 11:18:45.932388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.151 qpair failed and we were unable to recover it. 
00:38:26.151 [2024-10-09 11:18:45.932643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.151 [2024-10-09 11:18:45.932651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.151 qpair failed and we were unable to recover it. 00:38:26.151 [2024-10-09 11:18:45.932847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.151 [2024-10-09 11:18:45.932855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.151 qpair failed and we were unable to recover it. 00:38:26.151 [2024-10-09 11:18:45.933106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.151 [2024-10-09 11:18:45.933116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.151 qpair failed and we were unable to recover it. 00:38:26.151 [2024-10-09 11:18:45.933320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.151 [2024-10-09 11:18:45.933327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.151 qpair failed and we were unable to recover it. 00:38:26.151 [2024-10-09 11:18:45.933631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.151 [2024-10-09 11:18:45.933639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.151 qpair failed and we were unable to recover it. 00:38:26.151 [2024-10-09 11:18:45.933806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.151 [2024-10-09 11:18:45.933814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.151 qpair failed and we were unable to recover it. 00:38:26.151 [2024-10-09 11:18:45.934096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.151 [2024-10-09 11:18:45.934104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.151 qpair failed and we were unable to recover it. 00:38:26.151 [2024-10-09 11:18:45.934433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.151 [2024-10-09 11:18:45.934441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.151 qpair failed and we were unable to recover it. 00:38:26.151 [2024-10-09 11:18:45.934814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.151 [2024-10-09 11:18:45.934821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.151 qpair failed and we were unable to recover it. 00:38:26.151 [2024-10-09 11:18:45.935110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.151 [2024-10-09 11:18:45.935118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.151 qpair failed and we were unable to recover it. 
00:38:26.151 [2024-10-09 11:18:45.935407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.151 [2024-10-09 11:18:45.935414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.151 qpair failed and we were unable to recover it. 00:38:26.151 [2024-10-09 11:18:45.935694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.151 [2024-10-09 11:18:45.935702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.151 qpair failed and we were unable to recover it. 00:38:26.151 [2024-10-09 11:18:45.935974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.151 [2024-10-09 11:18:45.935982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.151 qpair failed and we were unable to recover it. 00:38:26.151 [2024-10-09 11:18:45.936298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.151 [2024-10-09 11:18:45.936306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.151 qpair failed and we were unable to recover it. 00:38:26.151 [2024-10-09 11:18:45.936595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.151 [2024-10-09 11:18:45.936603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.151 qpair failed and we were unable to recover it. 00:38:26.151 [2024-10-09 11:18:45.936789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.151 [2024-10-09 11:18:45.936797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.151 qpair failed and we were unable to recover it. 00:38:26.151 [2024-10-09 11:18:45.937130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.151 [2024-10-09 11:18:45.937138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.151 qpair failed and we were unable to recover it. 00:38:26.151 [2024-10-09 11:18:45.937449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.151 [2024-10-09 11:18:45.937456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.151 qpair failed and we were unable to recover it. 00:38:26.151 [2024-10-09 11:18:45.937773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.151 [2024-10-09 11:18:45.937781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.151 qpair failed and we were unable to recover it. 00:38:26.151 [2024-10-09 11:18:45.938114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.151 [2024-10-09 11:18:45.938123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.151 qpair failed and we were unable to recover it. 
00:38:26.151 [2024-10-09 11:18:45.938412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.151 [2024-10-09 11:18:45.938420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.151 qpair failed and we were unable to recover it. 00:38:26.151 [2024-10-09 11:18:45.938732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.151 [2024-10-09 11:18:45.938740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.151 qpair failed and we were unable to recover it. 00:38:26.151 [2024-10-09 11:18:45.939049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.151 [2024-10-09 11:18:45.939056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.151 qpair failed and we were unable to recover it. 00:38:26.151 [2024-10-09 11:18:45.939379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.151 [2024-10-09 11:18:45.939386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.151 qpair failed and we were unable to recover it. 00:38:26.151 [2024-10-09 11:18:45.939690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.151 [2024-10-09 11:18:45.939698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.151 qpair failed and we were unable to recover it. 00:38:26.151 [2024-10-09 11:18:45.940011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.151 [2024-10-09 11:18:45.940019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.151 qpair failed and we were unable to recover it. 00:38:26.151 [2024-10-09 11:18:45.940325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.151 [2024-10-09 11:18:45.940333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.151 qpair failed and we were unable to recover it. 00:38:26.151 [2024-10-09 11:18:45.940533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.151 [2024-10-09 11:18:45.940541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.151 qpair failed and we were unable to recover it. 00:38:26.151 [2024-10-09 11:18:45.940853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.151 [2024-10-09 11:18:45.940861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.151 qpair failed and we were unable to recover it. 00:38:26.151 [2024-10-09 11:18:45.941168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.151 [2024-10-09 11:18:45.941175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.151 qpair failed and we were unable to recover it. 
00:38:26.151 [2024-10-09 11:18:45.941311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.151 [2024-10-09 11:18:45.941317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.151 qpair failed and we were unable to recover it. 00:38:26.151 [2024-10-09 11:18:45.941527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.151 [2024-10-09 11:18:45.941534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.151 qpair failed and we were unable to recover it. 00:38:26.151 [2024-10-09 11:18:45.941898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.151 [2024-10-09 11:18:45.941905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.151 qpair failed and we were unable to recover it. 00:38:26.151 [2024-10-09 11:18:45.942187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.151 [2024-10-09 11:18:45.942195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.151 qpair failed and we were unable to recover it. 00:38:26.151 [2024-10-09 11:18:45.942508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.151 [2024-10-09 11:18:45.942516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.151 qpair failed and we were unable to recover it. 00:38:26.151 [2024-10-09 11:18:45.942815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.151 [2024-10-09 11:18:45.942823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.151 qpair failed and we were unable to recover it. 00:38:26.151 [2024-10-09 11:18:45.943121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.151 [2024-10-09 11:18:45.943127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.152 qpair failed and we were unable to recover it. 00:38:26.152 [2024-10-09 11:18:45.943394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.152 [2024-10-09 11:18:45.943400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.152 qpair failed and we were unable to recover it. 00:38:26.152 [2024-10-09 11:18:45.943438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.152 [2024-10-09 11:18:45.943445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.152 qpair failed and we were unable to recover it. 00:38:26.152 [2024-10-09 11:18:45.943753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.152 [2024-10-09 11:18:45.943760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.152 qpair failed and we were unable to recover it. 
00:38:26.152 [2024-10-09 11:18:45.944146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.152 [2024-10-09 11:18:45.944154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.152 qpair failed and we were unable to recover it. 00:38:26.152 [2024-10-09 11:18:45.944457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.152 [2024-10-09 11:18:45.944468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.152 qpair failed and we were unable to recover it. 00:38:26.152 [2024-10-09 11:18:45.944675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.152 [2024-10-09 11:18:45.944684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.152 qpair failed and we were unable to recover it. 00:38:26.152 [2024-10-09 11:18:45.945005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.152 [2024-10-09 11:18:45.945013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.152 qpair failed and we were unable to recover it. 00:38:26.152 [2024-10-09 11:18:45.945326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.152 [2024-10-09 11:18:45.945333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.152 qpair failed and we were unable to recover it. 00:38:26.152 [2024-10-09 11:18:45.945525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.152 [2024-10-09 11:18:45.945532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.152 qpair failed and we were unable to recover it. 00:38:26.152 [2024-10-09 11:18:45.945852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.152 [2024-10-09 11:18:45.945859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.152 qpair failed and we were unable to recover it. 00:38:26.152 [2024-10-09 11:18:45.946188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.152 [2024-10-09 11:18:45.946195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.152 qpair failed and we were unable to recover it. 00:38:26.152 [2024-10-09 11:18:45.946489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.152 [2024-10-09 11:18:45.946496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.152 qpair failed and we were unable to recover it. 00:38:26.152 [2024-10-09 11:18:45.946823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.152 [2024-10-09 11:18:45.946830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.152 qpair failed and we were unable to recover it. 
00:38:26.152 [2024-10-09 11:18:45.947153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.152 [2024-10-09 11:18:45.947160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.152 qpair failed and we were unable to recover it. 00:38:26.152 [2024-10-09 11:18:45.947477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.152 [2024-10-09 11:18:45.947485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.152 qpair failed and we were unable to recover it. 00:38:26.152 [2024-10-09 11:18:45.947666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.152 [2024-10-09 11:18:45.947673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.152 qpair failed and we were unable to recover it. 00:38:26.152 [2024-10-09 11:18:45.947980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.152 [2024-10-09 11:18:45.947987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.152 qpair failed and we were unable to recover it. 00:38:26.152 [2024-10-09 11:18:45.948311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.152 [2024-10-09 11:18:45.948319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.152 qpair failed and we were unable to recover it. 00:38:26.152 [2024-10-09 11:18:45.948568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.152 [2024-10-09 11:18:45.948576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.152 qpair failed and we were unable to recover it. 00:38:26.152 [2024-10-09 11:18:45.948907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.152 [2024-10-09 11:18:45.948914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.152 qpair failed and we were unable to recover it. 00:38:26.152 [2024-10-09 11:18:45.949204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.152 [2024-10-09 11:18:45.949212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.152 qpair failed and we were unable to recover it. 00:38:26.152 [2024-10-09 11:18:45.949531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.152 [2024-10-09 11:18:45.949539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.152 qpair failed and we were unable to recover it. 00:38:26.152 [2024-10-09 11:18:45.949816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.152 [2024-10-09 11:18:45.949823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.152 qpair failed and we were unable to recover it. 
00:38:26.152 [2024-10-09 11:18:45.950040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.152 [2024-10-09 11:18:45.950046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.152 qpair failed and we were unable to recover it. 00:38:26.152 [2024-10-09 11:18:45.950379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.152 [2024-10-09 11:18:45.950386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.152 qpair failed and we were unable to recover it. 00:38:26.152 [2024-10-09 11:18:45.950706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.152 [2024-10-09 11:18:45.950713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.152 qpair failed and we were unable to recover it. 00:38:26.152 [2024-10-09 11:18:45.950900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.152 [2024-10-09 11:18:45.950906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.152 qpair failed and we were unable to recover it. 00:38:26.152 [2024-10-09 11:18:45.951301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.152 [2024-10-09 11:18:45.951308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.152 qpair failed and we were unable to recover it. 00:38:26.152 [2024-10-09 11:18:45.951508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.152 [2024-10-09 11:18:45.951516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.152 qpair failed and we were unable to recover it. 00:38:26.152 [2024-10-09 11:18:45.951842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.152 [2024-10-09 11:18:45.951849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.152 qpair failed and we were unable to recover it. 00:38:26.152 [2024-10-09 11:18:45.952171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.152 [2024-10-09 11:18:45.952178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.152 qpair failed and we were unable to recover it. 00:38:26.152 [2024-10-09 11:18:45.952470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.152 [2024-10-09 11:18:45.952477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.152 qpair failed and we were unable to recover it. 00:38:26.152 [2024-10-09 11:18:45.952851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.152 [2024-10-09 11:18:45.952859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.152 qpair failed and we were unable to recover it. 
00:38:26.152 [2024-10-09 11:18:45.953170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.152 [2024-10-09 11:18:45.953177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.152 qpair failed and we were unable to recover it. 00:38:26.152 [2024-10-09 11:18:45.953481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.152 [2024-10-09 11:18:45.953488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.152 qpair failed and we were unable to recover it. 00:38:26.152 [2024-10-09 11:18:45.953690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.152 [2024-10-09 11:18:45.953697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.152 qpair failed and we were unable to recover it. 00:38:26.152 [2024-10-09 11:18:45.953996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.152 [2024-10-09 11:18:45.954003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.152 qpair failed and we were unable to recover it. 00:38:26.152 [2024-10-09 11:18:45.954164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.152 [2024-10-09 11:18:45.954171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.152 qpair failed and we were unable to recover it. 00:38:26.152 [2024-10-09 11:18:45.954443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.152 [2024-10-09 11:18:45.954459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.152 qpair failed and we were unable to recover it. 00:38:26.152 [2024-10-09 11:18:45.954560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.152 [2024-10-09 11:18:45.954567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.152 qpair failed and we were unable to recover it. 00:38:26.152 [2024-10-09 11:18:45.954851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.152 [2024-10-09 11:18:45.954858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.152 qpair failed and we were unable to recover it. 00:38:26.152 [2024-10-09 11:18:45.955183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.152 [2024-10-09 11:18:45.955190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.152 qpair failed and we were unable to recover it. 00:38:26.152 [2024-10-09 11:18:45.955503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.152 [2024-10-09 11:18:45.955510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.152 qpair failed and we were unable to recover it. 
00:38:26.152 [2024-10-09 11:18:45.955715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.152 [2024-10-09 11:18:45.955722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.152 qpair failed and we were unable to recover it. 00:38:26.152 [2024-10-09 11:18:45.955905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.152 [2024-10-09 11:18:45.955911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.152 qpair failed and we were unable to recover it. 00:38:26.152 [2024-10-09 11:18:45.956176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.152 [2024-10-09 11:18:45.956185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.152 qpair failed and we were unable to recover it. 00:38:26.152 [2024-10-09 11:18:45.956398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.152 [2024-10-09 11:18:45.956406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.152 qpair failed and we were unable to recover it. 00:38:26.152 [2024-10-09 11:18:45.956713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.152 [2024-10-09 11:18:45.956720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.152 qpair failed and we were unable to recover it. 00:38:26.152 [2024-10-09 11:18:45.957013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.152 [2024-10-09 11:18:45.957019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.152 qpair failed and we were unable to recover it. 00:38:26.152 [2024-10-09 11:18:45.957324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.152 [2024-10-09 11:18:45.957332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.152 qpair failed and we were unable to recover it. 00:38:26.152 [2024-10-09 11:18:45.957617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.152 [2024-10-09 11:18:45.957624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.152 qpair failed and we were unable to recover it. 00:38:26.152 [2024-10-09 11:18:45.957803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.152 [2024-10-09 11:18:45.957809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:26.152 qpair failed and we were unable to recover it. 
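
errno 111 on Linux is ECONNREFUSED: the target host answers, but nothing is accepting connections on 10.0.0.2:4420 (the NVMe/TCP default port) — expected here while the test has the target down. A minimal stand-alone reproduction of that failure mode, assuming a Linux host where the address is reachable but the port has no listener; this is illustrative C, not SPDK's posix_sock_create:

/* Reproduce the log's failure mode: connect() to a reachable host with no
 * listener on the port fails with ECONNREFUSED (errno 111 on Linux).
 * Address and port are taken from the log above for context only. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port = htons(4420),        /* NVMe/TCP default port, as in the log */
    };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* With the host up but no listener, this prints errno = 111
         * (ECONNREFUSED), matching "connect() failed, errno = 111" above.
         * An unreachable host would instead give ETIMEDOUT/EHOSTUNREACH. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}
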
00:38:26.152 Read completed with error (sct=0, sc=8)
00:38:26.152 starting I/O failed
00:38:26.152 [... 32 outstanding commands in total (21 reads, 11 writes) complete this way — "Read/Write completed with error (sct=0, sc=8)" each followed by "starting I/O failed" ...]
00:38:26.153 [2024-10-09 11:18:45.958086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
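
Once the TCP connection drops, every command still outstanding on the qpair completes with sct=0, sc=8. Per the NVMe base specification (as best I can tell), status code type 0x0 is Generic Command Status, and status code 0x08 there is "Command Aborted due to SQ Deletion" — the submission queue went away with the connection, so the pending I/O was aborted rather than executed; the CQ transport error -6 is -ENXIO ("No such device or address"), as the log itself spells out. A small illustrative decoder for that (sct, sc) pair, assuming those spec values (this is not SPDK's own API):

/* Illustrative decoder for the (sct, sc) pair printed above. Values follow
 * the NVMe base spec's Generic Command Status type (SCT 0x0); assumption,
 * not SPDK code. */
#include <stdint.h>
#include <stdio.h>

static const char *decode_generic_sc(uint8_t sc)
{
    switch (sc) {
    case 0x00: return "Successful Completion";
    case 0x04: return "Data Transfer Error";
    case 0x06: return "Internal Error";
    case 0x08: return "Command Aborted due to SQ Deletion";
    default:   return "other (see NVMe base spec, Generic Command Status)";
    }
}

int main(void)
{
    uint8_t sct = 0, sc = 8;    /* the pair reported for every failed I/O above */

    if (sct == 0) {             /* SCT 0x0 = Generic Command Status */
        printf("sct=%u, sc=%u: %s\n", sct, sc, decode_generic_sc(sc));
    }
    return 0;
}
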
00:38:26.153 [2024-10-09 11:18:45.958387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.153 [2024-10-09 11:18:45.958403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420
00:38:26.153 qpair failed and we were unable to recover it.
00:38:26.153 [... the three records above repeat ~50 times with advancing timestamps (2024-10-09 11:18:45.958591 through 11:18:45.973510, ~15 ms) as the host retries the replacement qpair tqpair=0x1520360, every attempt refused ...]
00:38:26.153 [2024-10-09 11:18:45.973801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.153 [2024-10-09 11:18:45.973812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.153 qpair failed and we were unable to recover it. 00:38:26.153 [2024-10-09 11:18:45.974116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.153 [2024-10-09 11:18:45.974126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.154 qpair failed and we were unable to recover it. 00:38:26.154 [2024-10-09 11:18:45.974445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.154 [2024-10-09 11:18:45.974455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.154 qpair failed and we were unable to recover it. 00:38:26.154 [2024-10-09 11:18:45.974842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.154 [2024-10-09 11:18:45.974852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.154 qpair failed and we were unable to recover it. 00:38:26.154 [2024-10-09 11:18:45.975153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.154 [2024-10-09 11:18:45.975163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.154 qpair failed and we were unable to recover it. 00:38:26.154 [2024-10-09 11:18:45.975362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.154 [2024-10-09 11:18:45.975371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.154 qpair failed and we were unable to recover it. 00:38:26.154 [2024-10-09 11:18:45.975694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.154 [2024-10-09 11:18:45.975704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.154 qpair failed and we were unable to recover it. 00:38:26.154 [2024-10-09 11:18:45.976104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.154 [2024-10-09 11:18:45.976114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.154 qpair failed and we were unable to recover it. 00:38:26.154 [2024-10-09 11:18:45.976509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.154 [2024-10-09 11:18:45.976519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.154 qpair failed and we were unable to recover it. 00:38:26.154 [2024-10-09 11:18:45.976865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.154 [2024-10-09 11:18:45.976874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.154 qpair failed and we were unable to recover it. 
00:38:26.154 [2024-10-09 11:18:45.977045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.154 [2024-10-09 11:18:45.977055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.154 qpair failed and we were unable to recover it. 00:38:26.154 [2024-10-09 11:18:45.977332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.154 [2024-10-09 11:18:45.977342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.154 qpair failed and we were unable to recover it. 00:38:26.154 [2024-10-09 11:18:45.977648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.154 [2024-10-09 11:18:45.977658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.154 qpair failed and we were unable to recover it. 00:38:26.154 [2024-10-09 11:18:45.978025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.154 [2024-10-09 11:18:45.978035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.154 qpair failed and we were unable to recover it. 00:38:26.154 [2024-10-09 11:18:45.978334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.154 [2024-10-09 11:18:45.978344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.154 qpair failed and we were unable to recover it. 00:38:26.154 [2024-10-09 11:18:45.978634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.154 [2024-10-09 11:18:45.978644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.154 qpair failed and we were unable to recover it. 00:38:26.154 [2024-10-09 11:18:45.978963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.154 [2024-10-09 11:18:45.978973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.154 qpair failed and we were unable to recover it. 00:38:26.154 [2024-10-09 11:18:45.979153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.154 [2024-10-09 11:18:45.979164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.154 qpair failed and we were unable to recover it. 00:38:26.154 [2024-10-09 11:18:45.979451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.154 [2024-10-09 11:18:45.979461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.154 qpair failed and we were unable to recover it. 00:38:26.154 [2024-10-09 11:18:45.979785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.154 [2024-10-09 11:18:45.979795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.154 qpair failed and we were unable to recover it. 
00:38:26.154 [2024-10-09 11:18:45.980099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.154 [2024-10-09 11:18:45.980109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.154 qpair failed and we were unable to recover it. 00:38:26.154 [2024-10-09 11:18:45.980418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.154 [2024-10-09 11:18:45.980428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.154 qpair failed and we were unable to recover it. 00:38:26.154 [2024-10-09 11:18:45.980728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.154 [2024-10-09 11:18:45.980739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.154 qpair failed and we were unable to recover it. 00:38:26.154 [2024-10-09 11:18:45.980930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.154 [2024-10-09 11:18:45.980940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.154 qpair failed and we were unable to recover it. 00:38:26.154 [2024-10-09 11:18:45.981275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.154 [2024-10-09 11:18:45.981286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.154 qpair failed and we were unable to recover it. 00:38:26.154 [2024-10-09 11:18:45.981592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.154 [2024-10-09 11:18:45.981601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.154 qpair failed and we were unable to recover it. 00:38:26.154 [2024-10-09 11:18:45.981908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.154 [2024-10-09 11:18:45.981918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.154 qpair failed and we were unable to recover it. 00:38:26.154 [2024-10-09 11:18:45.982224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.154 [2024-10-09 11:18:45.982233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.154 qpair failed and we were unable to recover it. 00:38:26.154 [2024-10-09 11:18:45.982405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.154 [2024-10-09 11:18:45.982416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.154 qpair failed and we were unable to recover it. 00:38:26.154 [2024-10-09 11:18:45.982614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.154 [2024-10-09 11:18:45.982624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.154 qpair failed and we were unable to recover it. 
00:38:26.154 [2024-10-09 11:18:45.982953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.154 [2024-10-09 11:18:45.982966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.154 qpair failed and we were unable to recover it. 00:38:26.154 [2024-10-09 11:18:45.983239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.154 [2024-10-09 11:18:45.983255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.154 qpair failed and we were unable to recover it. 00:38:26.154 [2024-10-09 11:18:45.983568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.154 [2024-10-09 11:18:45.983578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.154 qpair failed and we were unable to recover it. 00:38:26.154 [2024-10-09 11:18:45.983867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.154 [2024-10-09 11:18:45.983878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.154 qpair failed and we were unable to recover it. 00:38:26.154 [2024-10-09 11:18:45.984213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.154 [2024-10-09 11:18:45.984223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.154 qpair failed and we were unable to recover it. 00:38:26.154 [2024-10-09 11:18:45.984535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.154 [2024-10-09 11:18:45.984546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.154 qpair failed and we were unable to recover it. 00:38:26.154 [2024-10-09 11:18:45.984863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.154 [2024-10-09 11:18:45.984874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.154 qpair failed and we were unable to recover it. 00:38:26.154 [2024-10-09 11:18:45.985156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.154 [2024-10-09 11:18:45.985166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.154 qpair failed and we were unable to recover it. 00:38:26.154 [2024-10-09 11:18:45.985500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.154 [2024-10-09 11:18:45.985510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.154 qpair failed and we were unable to recover it. 00:38:26.154 [2024-10-09 11:18:45.985878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.154 [2024-10-09 11:18:45.985888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.154 qpair failed and we were unable to recover it. 
00:38:26.154 [2024-10-09 11:18:45.986217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.154 [2024-10-09 11:18:45.986226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.154 qpair failed and we were unable to recover it. 00:38:26.154 [2024-10-09 11:18:45.986528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.154 [2024-10-09 11:18:45.986538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.154 qpair failed and we were unable to recover it. 00:38:26.154 [2024-10-09 11:18:45.986838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.154 [2024-10-09 11:18:45.986847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.154 qpair failed and we were unable to recover it. 00:38:26.154 [2024-10-09 11:18:45.987149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.154 [2024-10-09 11:18:45.987159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.154 qpair failed and we were unable to recover it. 00:38:26.154 [2024-10-09 11:18:45.987441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.154 [2024-10-09 11:18:45.987452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.154 qpair failed and we were unable to recover it. 00:38:26.154 [2024-10-09 11:18:45.987749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.154 [2024-10-09 11:18:45.987760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.154 qpair failed and we were unable to recover it. 00:38:26.154 [2024-10-09 11:18:45.988048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.154 [2024-10-09 11:18:45.988059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.154 qpair failed and we were unable to recover it. 00:38:26.154 [2024-10-09 11:18:45.988365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.154 [2024-10-09 11:18:45.988375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.154 qpair failed and we were unable to recover it. 00:38:26.154 [2024-10-09 11:18:45.988677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.154 [2024-10-09 11:18:45.988688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.154 qpair failed and we were unable to recover it. 00:38:26.154 [2024-10-09 11:18:45.988993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.154 [2024-10-09 11:18:45.989003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.154 qpair failed and we were unable to recover it. 
00:38:26.154 [2024-10-09 11:18:45.989332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.154 [2024-10-09 11:18:45.989342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.154 qpair failed and we were unable to recover it. 00:38:26.154 [2024-10-09 11:18:45.989535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.154 [2024-10-09 11:18:45.989545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.154 qpair failed and we were unable to recover it. 00:38:26.154 [2024-10-09 11:18:45.989860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.154 [2024-10-09 11:18:45.989870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.154 qpair failed and we were unable to recover it. 00:38:26.154 [2024-10-09 11:18:45.990174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.154 [2024-10-09 11:18:45.990184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.154 qpair failed and we were unable to recover it. 00:38:26.154 [2024-10-09 11:18:45.990470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.154 [2024-10-09 11:18:45.990480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.154 qpair failed and we were unable to recover it. 00:38:26.154 [2024-10-09 11:18:45.990764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.154 [2024-10-09 11:18:45.990774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.154 qpair failed and we were unable to recover it. 00:38:26.154 [2024-10-09 11:18:45.991091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.155 [2024-10-09 11:18:45.991101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.155 qpair failed and we were unable to recover it. 00:38:26.155 [2024-10-09 11:18:45.991386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.155 [2024-10-09 11:18:45.991399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.155 qpair failed and we were unable to recover it. 00:38:26.155 [2024-10-09 11:18:45.991704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.155 [2024-10-09 11:18:45.991714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.155 qpair failed and we were unable to recover it. 00:38:26.155 [2024-10-09 11:18:45.991897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.155 [2024-10-09 11:18:45.991908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.155 qpair failed and we were unable to recover it. 
00:38:26.155 [2024-10-09 11:18:45.992259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.155 [2024-10-09 11:18:45.992268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.155 qpair failed and we were unable to recover it. 00:38:26.155 [2024-10-09 11:18:45.992483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.155 [2024-10-09 11:18:45.992494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.155 qpair failed and we were unable to recover it. 00:38:26.155 [2024-10-09 11:18:45.992713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.155 [2024-10-09 11:18:45.992723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.155 qpair failed and we were unable to recover it. 00:38:26.155 [2024-10-09 11:18:45.992842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.155 [2024-10-09 11:18:45.992852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.155 qpair failed and we were unable to recover it. 00:38:26.155 [2024-10-09 11:18:45.993254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.155 [2024-10-09 11:18:45.993263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.155 qpair failed and we were unable to recover it. 00:38:26.155 [2024-10-09 11:18:45.993531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.155 [2024-10-09 11:18:45.993541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.155 qpair failed and we were unable to recover it. 00:38:26.155 [2024-10-09 11:18:45.993843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.155 [2024-10-09 11:18:45.993853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.155 qpair failed and we were unable to recover it. 00:38:26.155 [2024-10-09 11:18:45.994135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.155 [2024-10-09 11:18:45.994145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.155 qpair failed and we were unable to recover it. 00:38:26.155 [2024-10-09 11:18:45.994458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.155 [2024-10-09 11:18:45.994472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.155 qpair failed and we were unable to recover it. 00:38:26.155 [2024-10-09 11:18:45.994757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.155 [2024-10-09 11:18:45.994766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.155 qpair failed and we were unable to recover it. 
00:38:26.155 [2024-10-09 11:18:45.995090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.155 [2024-10-09 11:18:45.995100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.155 qpair failed and we were unable to recover it. 00:38:26.155 [2024-10-09 11:18:45.995381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.155 [2024-10-09 11:18:45.995391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.155 qpair failed and we were unable to recover it. 00:38:26.155 [2024-10-09 11:18:45.995570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.155 [2024-10-09 11:18:45.995580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.155 qpair failed and we were unable to recover it. 00:38:26.155 [2024-10-09 11:18:45.995944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.155 [2024-10-09 11:18:45.995953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.155 qpair failed and we were unable to recover it. 00:38:26.155 [2024-10-09 11:18:45.996243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.155 [2024-10-09 11:18:45.996254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.155 qpair failed and we were unable to recover it. 00:38:26.155 [2024-10-09 11:18:45.996565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.155 [2024-10-09 11:18:45.996575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.155 qpair failed and we were unable to recover it. 00:38:26.155 [2024-10-09 11:18:45.996893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.155 [2024-10-09 11:18:45.996903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.155 qpair failed and we were unable to recover it. 00:38:26.155 [2024-10-09 11:18:45.997203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.155 [2024-10-09 11:18:45.997214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.155 qpair failed and we were unable to recover it. 00:38:26.155 [2024-10-09 11:18:45.997490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.155 [2024-10-09 11:18:45.997501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.155 qpair failed and we were unable to recover it. 00:38:26.155 [2024-10-09 11:18:45.997703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.155 [2024-10-09 11:18:45.997713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.155 qpair failed and we were unable to recover it. 
00:38:26.155 [2024-10-09 11:18:45.998024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.155 [2024-10-09 11:18:45.998033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.155 qpair failed and we were unable to recover it. 00:38:26.155 [2024-10-09 11:18:45.998335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.155 [2024-10-09 11:18:45.998344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.155 qpair failed and we were unable to recover it. 00:38:26.155 [2024-10-09 11:18:45.998667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.155 [2024-10-09 11:18:45.998677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.155 qpair failed and we were unable to recover it. 00:38:26.155 [2024-10-09 11:18:45.998976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.155 [2024-10-09 11:18:45.998986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.155 qpair failed and we were unable to recover it. 00:38:26.155 [2024-10-09 11:18:45.999293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.155 [2024-10-09 11:18:45.999303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.155 qpair failed and we were unable to recover it. 00:38:26.155 [2024-10-09 11:18:45.999592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.155 [2024-10-09 11:18:45.999602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.155 qpair failed and we were unable to recover it. 00:38:26.155 [2024-10-09 11:18:45.999884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.155 [2024-10-09 11:18:45.999893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.155 qpair failed and we were unable to recover it. 00:38:26.155 [2024-10-09 11:18:46.000168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.155 [2024-10-09 11:18:46.000178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.155 qpair failed and we were unable to recover it. 00:38:26.155 [2024-10-09 11:18:46.000487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.155 [2024-10-09 11:18:46.000497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.155 qpair failed and we were unable to recover it. 00:38:26.155 [2024-10-09 11:18:46.000702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.155 [2024-10-09 11:18:46.000712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.155 qpair failed and we were unable to recover it. 
00:38:26.155 [2024-10-09 11:18:46.001056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.155 [2024-10-09 11:18:46.001066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.155 qpair failed and we were unable to recover it. 00:38:26.155 [2024-10-09 11:18:46.001380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.155 [2024-10-09 11:18:46.001390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.155 qpair failed and we were unable to recover it. 00:38:26.155 [2024-10-09 11:18:46.001707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.155 [2024-10-09 11:18:46.001717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.155 qpair failed and we were unable to recover it. 00:38:26.155 [2024-10-09 11:18:46.002047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.155 [2024-10-09 11:18:46.002057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.155 qpair failed and we were unable to recover it. 00:38:26.155 [2024-10-09 11:18:46.002346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.155 [2024-10-09 11:18:46.002356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.155 qpair failed and we were unable to recover it. 00:38:26.155 [2024-10-09 11:18:46.002634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.156 [2024-10-09 11:18:46.002644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.156 qpair failed and we were unable to recover it. 00:38:26.156 [2024-10-09 11:18:46.002922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.156 [2024-10-09 11:18:46.002933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.156 qpair failed and we were unable to recover it. 00:38:26.156 [2024-10-09 11:18:46.003243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.156 [2024-10-09 11:18:46.003253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.156 qpair failed and we were unable to recover it. 00:38:26.156 [2024-10-09 11:18:46.003469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.156 [2024-10-09 11:18:46.003481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.156 qpair failed and we were unable to recover it. 00:38:26.156 [2024-10-09 11:18:46.003790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.156 [2024-10-09 11:18:46.003800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.156 qpair failed and we were unable to recover it. 
00:38:26.156 [2024-10-09 11:18:46.003997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.156 [2024-10-09 11:18:46.004007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.156 qpair failed and we were unable to recover it. 00:38:26.156 [2024-10-09 11:18:46.004281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.156 [2024-10-09 11:18:46.004291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.156 qpair failed and we were unable to recover it. 00:38:26.156 [2024-10-09 11:18:46.004595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.156 [2024-10-09 11:18:46.004606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.156 qpair failed and we were unable to recover it. 00:38:26.156 [2024-10-09 11:18:46.004939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.156 [2024-10-09 11:18:46.004949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.156 qpair failed and we were unable to recover it. 00:38:26.156 [2024-10-09 11:18:46.005258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.156 [2024-10-09 11:18:46.005269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.156 qpair failed and we were unable to recover it. 00:38:26.156 [2024-10-09 11:18:46.005595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.156 [2024-10-09 11:18:46.005605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.156 qpair failed and we were unable to recover it. 00:38:26.156 [2024-10-09 11:18:46.005904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.156 [2024-10-09 11:18:46.005914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.156 qpair failed and we were unable to recover it. 00:38:26.156 [2024-10-09 11:18:46.006200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.156 [2024-10-09 11:18:46.006210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.156 qpair failed and we were unable to recover it. 00:38:26.156 [2024-10-09 11:18:46.006403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.156 [2024-10-09 11:18:46.006412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.156 qpair failed and we were unable to recover it. 00:38:26.156 [2024-10-09 11:18:46.006697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.156 [2024-10-09 11:18:46.006707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.156 qpair failed and we were unable to recover it. 
00:38:26.156 [2024-10-09 11:18:46.006914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.156 [2024-10-09 11:18:46.006924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.156 qpair failed and we were unable to recover it. 00:38:26.156 [2024-10-09 11:18:46.007115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.156 [2024-10-09 11:18:46.007125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.156 qpair failed and we were unable to recover it. 00:38:26.156 [2024-10-09 11:18:46.007343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.156 [2024-10-09 11:18:46.007353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.156 qpair failed and we were unable to recover it. 00:38:26.156 [2024-10-09 11:18:46.007678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.156 [2024-10-09 11:18:46.007688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.156 qpair failed and we were unable to recover it. 00:38:26.156 [2024-10-09 11:18:46.008011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.156 [2024-10-09 11:18:46.008022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.156 qpair failed and we were unable to recover it. 00:38:26.156 [2024-10-09 11:18:46.008335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.156 [2024-10-09 11:18:46.008345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.156 qpair failed and we were unable to recover it. 00:38:26.156 [2024-10-09 11:18:46.008679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.156 [2024-10-09 11:18:46.008690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.156 qpair failed and we were unable to recover it. 00:38:26.156 [2024-10-09 11:18:46.008959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.156 [2024-10-09 11:18:46.008968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.156 qpair failed and we were unable to recover it. 00:38:26.156 [2024-10-09 11:18:46.009242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.156 [2024-10-09 11:18:46.009252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.156 qpair failed and we were unable to recover it. 00:38:26.156 [2024-10-09 11:18:46.009573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.156 [2024-10-09 11:18:46.009584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.156 qpair failed and we were unable to recover it. 
00:38:26.156 [2024-10-09 11:18:46.009898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.156 [2024-10-09 11:18:46.009908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.156 qpair failed and we were unable to recover it. 00:38:26.156 [2024-10-09 11:18:46.010293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.156 [2024-10-09 11:18:46.010303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.156 qpair failed and we were unable to recover it. 00:38:26.156 [2024-10-09 11:18:46.010603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.156 [2024-10-09 11:18:46.010614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.156 qpair failed and we were unable to recover it. 00:38:26.156 [2024-10-09 11:18:46.010947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.156 [2024-10-09 11:18:46.010956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.156 qpair failed and we were unable to recover it. 00:38:26.156 [2024-10-09 11:18:46.011279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.156 [2024-10-09 11:18:46.011289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.156 qpair failed and we were unable to recover it. 00:38:26.156 [2024-10-09 11:18:46.011640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.156 [2024-10-09 11:18:46.011655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.156 qpair failed and we were unable to recover it. 00:38:26.156 [2024-10-09 11:18:46.011964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.156 [2024-10-09 11:18:46.011974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.156 qpair failed and we were unable to recover it. 00:38:26.156 [2024-10-09 11:18:46.012281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.156 [2024-10-09 11:18:46.012292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.156 qpair failed and we were unable to recover it. 00:38:26.156 [2024-10-09 11:18:46.012603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.156 [2024-10-09 11:18:46.012614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.156 qpair failed and we were unable to recover it. 00:38:26.156 [2024-10-09 11:18:46.012814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.156 [2024-10-09 11:18:46.012824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.156 qpair failed and we were unable to recover it. 
00:38:26.156 [2024-10-09 11:18:46.013030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.156 [2024-10-09 11:18:46.013041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420
00:38:26.156 qpair failed and we were unable to recover it.
00:38:26.156-00:38:26.162 [2024-10-09 11:18:46.013383 through 11:18:46.076817] (the preceding three messages repeated for every subsequent reconnect attempt to tqpair=0x1520360, addr=10.0.0.2, port=4420; each attempt failed with errno = 111 and each qpair could not be recovered)
00:38:26.162 [2024-10-09 11:18:46.077123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.162 [2024-10-09 11:18:46.077133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.162 qpair failed and we were unable to recover it. 00:38:26.162 [2024-10-09 11:18:46.077438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.162 [2024-10-09 11:18:46.077449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.162 qpair failed and we were unable to recover it. 00:38:26.162 [2024-10-09 11:18:46.077764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.162 [2024-10-09 11:18:46.077777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.162 qpair failed and we were unable to recover it. 00:38:26.162 [2024-10-09 11:18:46.077907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.162 [2024-10-09 11:18:46.077917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.162 qpair failed and we were unable to recover it. 00:38:26.162 [2024-10-09 11:18:46.078219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.162 [2024-10-09 11:18:46.078230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.162 qpair failed and we were unable to recover it. 00:38:26.162 [2024-10-09 11:18:46.078540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.162 [2024-10-09 11:18:46.078551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.162 qpair failed and we were unable to recover it. 00:38:26.162 [2024-10-09 11:18:46.078914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.162 [2024-10-09 11:18:46.078923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.162 qpair failed and we were unable to recover it. 00:38:26.162 [2024-10-09 11:18:46.079168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.162 [2024-10-09 11:18:46.079178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.162 qpair failed and we were unable to recover it. 00:38:26.162 [2024-10-09 11:18:46.079500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.162 [2024-10-09 11:18:46.079511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.162 qpair failed and we were unable to recover it. 00:38:26.162 [2024-10-09 11:18:46.079814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.162 [2024-10-09 11:18:46.079823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.162 qpair failed and we were unable to recover it. 
00:38:26.162 [2024-10-09 11:18:46.080124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.162 [2024-10-09 11:18:46.080134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.162 qpair failed and we were unable to recover it. 00:38:26.162 [2024-10-09 11:18:46.080415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.162 [2024-10-09 11:18:46.080424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.162 qpair failed and we were unable to recover it. 00:38:26.162 [2024-10-09 11:18:46.080629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.162 [2024-10-09 11:18:46.080639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.162 qpair failed and we were unable to recover it. 00:38:26.162 [2024-10-09 11:18:46.080948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.162 [2024-10-09 11:18:46.080957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.162 qpair failed and we were unable to recover it. 00:38:26.162 [2024-10-09 11:18:46.081229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.162 [2024-10-09 11:18:46.081239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.162 qpair failed and we were unable to recover it. 00:38:26.162 [2024-10-09 11:18:46.081513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.162 [2024-10-09 11:18:46.081523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.162 qpair failed and we were unable to recover it. 00:38:26.162 [2024-10-09 11:18:46.081718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.162 [2024-10-09 11:18:46.081728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.162 qpair failed and we were unable to recover it. 00:38:26.162 [2024-10-09 11:18:46.082039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.162 [2024-10-09 11:18:46.082049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.162 qpair failed and we were unable to recover it. 00:38:26.162 [2024-10-09 11:18:46.082341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.162 [2024-10-09 11:18:46.082351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.162 qpair failed and we were unable to recover it. 00:38:26.162 [2024-10-09 11:18:46.082563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.162 [2024-10-09 11:18:46.082575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.162 qpair failed and we were unable to recover it. 
00:38:26.162 [2024-10-09 11:18:46.082936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.162 [2024-10-09 11:18:46.082946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.162 qpair failed and we were unable to recover it. 00:38:26.162 [2024-10-09 11:18:46.083237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.162 [2024-10-09 11:18:46.083253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.162 qpair failed and we were unable to recover it. 00:38:26.162 [2024-10-09 11:18:46.083590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.162 [2024-10-09 11:18:46.083600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.162 qpair failed and we were unable to recover it. 00:38:26.439 [2024-10-09 11:18:46.083959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.439 [2024-10-09 11:18:46.083970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.439 qpair failed and we were unable to recover it. 00:38:26.439 [2024-10-09 11:18:46.084294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.439 [2024-10-09 11:18:46.084305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.439 qpair failed and we were unable to recover it. 00:38:26.439 [2024-10-09 11:18:46.084606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.439 [2024-10-09 11:18:46.084617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.439 qpair failed and we were unable to recover it. 00:38:26.439 [2024-10-09 11:18:46.084930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.439 [2024-10-09 11:18:46.084940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.439 qpair failed and we were unable to recover it. 00:38:26.439 [2024-10-09 11:18:46.085269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.439 [2024-10-09 11:18:46.085279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.439 qpair failed and we were unable to recover it. 00:38:26.439 [2024-10-09 11:18:46.085584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.439 [2024-10-09 11:18:46.085594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.439 qpair failed and we were unable to recover it. 00:38:26.439 [2024-10-09 11:18:46.085895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.439 [2024-10-09 11:18:46.085907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.439 qpair failed and we were unable to recover it. 
00:38:26.439 [2024-10-09 11:18:46.086195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.439 [2024-10-09 11:18:46.086206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.439 qpair failed and we were unable to recover it. 00:38:26.439 [2024-10-09 11:18:46.086395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.439 [2024-10-09 11:18:46.086405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.439 qpair failed and we were unable to recover it. 00:38:26.439 [2024-10-09 11:18:46.086724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.439 [2024-10-09 11:18:46.086734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.439 qpair failed and we were unable to recover it. 00:38:26.439 [2024-10-09 11:18:46.087015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.439 [2024-10-09 11:18:46.087025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.439 qpair failed and we were unable to recover it. 00:38:26.439 [2024-10-09 11:18:46.087231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.439 [2024-10-09 11:18:46.087242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.439 qpair failed and we were unable to recover it. 00:38:26.439 [2024-10-09 11:18:46.087549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.439 [2024-10-09 11:18:46.087559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.439 qpair failed and we were unable to recover it. 00:38:26.439 [2024-10-09 11:18:46.087855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.439 [2024-10-09 11:18:46.087865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.439 qpair failed and we were unable to recover it. 00:38:26.439 [2024-10-09 11:18:46.088174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.439 [2024-10-09 11:18:46.088183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.439 qpair failed and we were unable to recover it. 00:38:26.439 [2024-10-09 11:18:46.088486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.439 [2024-10-09 11:18:46.088497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.439 qpair failed and we were unable to recover it. 00:38:26.439 [2024-10-09 11:18:46.088801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.439 [2024-10-09 11:18:46.088811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.439 qpair failed and we were unable to recover it. 
00:38:26.439 [2024-10-09 11:18:46.089093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.439 [2024-10-09 11:18:46.089103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.439 qpair failed and we were unable to recover it. 00:38:26.439 [2024-10-09 11:18:46.089416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.439 [2024-10-09 11:18:46.089426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.439 qpair failed and we were unable to recover it. 00:38:26.439 [2024-10-09 11:18:46.089735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.439 [2024-10-09 11:18:46.089746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.439 qpair failed and we were unable to recover it. 00:38:26.439 [2024-10-09 11:18:46.090078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.439 [2024-10-09 11:18:46.090087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.439 qpair failed and we were unable to recover it. 00:38:26.439 [2024-10-09 11:18:46.090401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.439 [2024-10-09 11:18:46.090411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.439 qpair failed and we were unable to recover it. 00:38:26.439 [2024-10-09 11:18:46.090723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.439 [2024-10-09 11:18:46.090733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.439 qpair failed and we were unable to recover it. 00:38:26.439 [2024-10-09 11:18:46.091036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.439 [2024-10-09 11:18:46.091047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.439 qpair failed and we were unable to recover it. 00:38:26.439 [2024-10-09 11:18:46.091323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.439 [2024-10-09 11:18:46.091333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.439 qpair failed and we were unable to recover it. 00:38:26.439 [2024-10-09 11:18:46.091524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.439 [2024-10-09 11:18:46.091534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.439 qpair failed and we were unable to recover it. 00:38:26.439 [2024-10-09 11:18:46.091882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.439 [2024-10-09 11:18:46.091892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.439 qpair failed and we were unable to recover it. 
00:38:26.439 [2024-10-09 11:18:46.092070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.439 [2024-10-09 11:18:46.092081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.439 qpair failed and we were unable to recover it. 00:38:26.439 [2024-10-09 11:18:46.092384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.440 [2024-10-09 11:18:46.092394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.440 qpair failed and we were unable to recover it. 00:38:26.440 [2024-10-09 11:18:46.092706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.440 [2024-10-09 11:18:46.092717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.440 qpair failed and we were unable to recover it. 00:38:26.440 [2024-10-09 11:18:46.092891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.440 [2024-10-09 11:18:46.092901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.440 qpair failed and we were unable to recover it. 00:38:26.440 [2024-10-09 11:18:46.093218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.440 [2024-10-09 11:18:46.093229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.440 qpair failed and we were unable to recover it. 00:38:26.440 [2024-10-09 11:18:46.093519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.440 [2024-10-09 11:18:46.093530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.440 qpair failed and we were unable to recover it. 00:38:26.440 [2024-10-09 11:18:46.093844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.440 [2024-10-09 11:18:46.093856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.440 qpair failed and we were unable to recover it. 00:38:26.440 [2024-10-09 11:18:46.094170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.440 [2024-10-09 11:18:46.094180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.440 qpair failed and we were unable to recover it. 00:38:26.440 [2024-10-09 11:18:46.094363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.440 [2024-10-09 11:18:46.094372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.440 qpair failed and we were unable to recover it. 00:38:26.440 [2024-10-09 11:18:46.094670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.440 [2024-10-09 11:18:46.094680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.440 qpair failed and we were unable to recover it. 
00:38:26.440 [2024-10-09 11:18:46.095070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.440 [2024-10-09 11:18:46.095080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.440 qpair failed and we were unable to recover it. 00:38:26.440 [2024-10-09 11:18:46.095380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.440 [2024-10-09 11:18:46.095391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.440 qpair failed and we were unable to recover it. 00:38:26.440 [2024-10-09 11:18:46.095582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.440 [2024-10-09 11:18:46.095592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.440 qpair failed and we were unable to recover it. 00:38:26.440 [2024-10-09 11:18:46.095898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.440 [2024-10-09 11:18:46.095908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.440 qpair failed and we were unable to recover it. 00:38:26.440 [2024-10-09 11:18:46.096212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.440 [2024-10-09 11:18:46.096222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.440 qpair failed and we were unable to recover it. 00:38:26.440 [2024-10-09 11:18:46.096531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.440 [2024-10-09 11:18:46.096541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.440 qpair failed and we were unable to recover it. 00:38:26.440 [2024-10-09 11:18:46.096870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.440 [2024-10-09 11:18:46.096880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.440 qpair failed and we were unable to recover it. 00:38:26.440 [2024-10-09 11:18:46.097162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.440 [2024-10-09 11:18:46.097172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.440 qpair failed and we were unable to recover it. 00:38:26.440 [2024-10-09 11:18:46.097484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.440 [2024-10-09 11:18:46.097494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.440 qpair failed and we were unable to recover it. 00:38:26.440 [2024-10-09 11:18:46.097821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.440 [2024-10-09 11:18:46.097831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.440 qpair failed and we were unable to recover it. 
00:38:26.440 [2024-10-09 11:18:46.098112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.440 [2024-10-09 11:18:46.098122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.440 qpair failed and we were unable to recover it. 00:38:26.440 [2024-10-09 11:18:46.098403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.440 [2024-10-09 11:18:46.098413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.440 qpair failed and we were unable to recover it. 00:38:26.440 [2024-10-09 11:18:46.098607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.440 [2024-10-09 11:18:46.098617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.440 qpair failed and we were unable to recover it. 00:38:26.440 [2024-10-09 11:18:46.098955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.440 [2024-10-09 11:18:46.098965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.440 qpair failed and we were unable to recover it. 00:38:26.440 [2024-10-09 11:18:46.099136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.440 [2024-10-09 11:18:46.099146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.440 qpair failed and we were unable to recover it. 00:38:26.440 [2024-10-09 11:18:46.099455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.440 [2024-10-09 11:18:46.099471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.440 qpair failed and we were unable to recover it. 00:38:26.440 [2024-10-09 11:18:46.099670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.440 [2024-10-09 11:18:46.099680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.440 qpair failed and we were unable to recover it. 00:38:26.440 [2024-10-09 11:18:46.099996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.440 [2024-10-09 11:18:46.100006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.440 qpair failed and we were unable to recover it. 00:38:26.440 [2024-10-09 11:18:46.100206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.440 [2024-10-09 11:18:46.100216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.440 qpair failed and we were unable to recover it. 00:38:26.440 [2024-10-09 11:18:46.100523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.440 [2024-10-09 11:18:46.100533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.440 qpair failed and we were unable to recover it. 
00:38:26.440 [2024-10-09 11:18:46.100841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.440 [2024-10-09 11:18:46.100851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.440 qpair failed and we were unable to recover it. 00:38:26.440 [2024-10-09 11:18:46.101181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.440 [2024-10-09 11:18:46.101190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.440 qpair failed and we were unable to recover it. 00:38:26.440 [2024-10-09 11:18:46.101501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.440 [2024-10-09 11:18:46.101512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.440 qpair failed and we were unable to recover it. 00:38:26.440 [2024-10-09 11:18:46.101828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.440 [2024-10-09 11:18:46.101838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.440 qpair failed and we were unable to recover it. 00:38:26.440 [2024-10-09 11:18:46.102200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.440 [2024-10-09 11:18:46.102211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.440 qpair failed and we were unable to recover it. 00:38:26.440 [2024-10-09 11:18:46.102513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.440 [2024-10-09 11:18:46.102523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.440 qpair failed and we were unable to recover it. 00:38:26.440 [2024-10-09 11:18:46.102820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.440 [2024-10-09 11:18:46.102830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.440 qpair failed and we were unable to recover it. 00:38:26.440 [2024-10-09 11:18:46.103194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.440 [2024-10-09 11:18:46.103204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.440 qpair failed and we were unable to recover it. 00:38:26.440 [2024-10-09 11:18:46.103479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.440 [2024-10-09 11:18:46.103490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.440 qpair failed and we were unable to recover it. 00:38:26.440 [2024-10-09 11:18:46.103795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.440 [2024-10-09 11:18:46.103805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.440 qpair failed and we were unable to recover it. 
00:38:26.440 [2024-10-09 11:18:46.104118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.440 [2024-10-09 11:18:46.104128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.440 qpair failed and we were unable to recover it. 00:38:26.440 [2024-10-09 11:18:46.104416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.441 [2024-10-09 11:18:46.104427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.441 qpair failed and we were unable to recover it. 00:38:26.441 [2024-10-09 11:18:46.104728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.441 [2024-10-09 11:18:46.104739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.441 qpair failed and we were unable to recover it. 00:38:26.441 [2024-10-09 11:18:46.105046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.441 [2024-10-09 11:18:46.105056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.441 qpair failed and we were unable to recover it. 00:38:26.441 [2024-10-09 11:18:46.105363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.441 [2024-10-09 11:18:46.105373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.441 qpair failed and we were unable to recover it. 00:38:26.441 [2024-10-09 11:18:46.105680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.441 [2024-10-09 11:18:46.105691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.441 qpair failed and we were unable to recover it. 00:38:26.441 [2024-10-09 11:18:46.105964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.441 [2024-10-09 11:18:46.105974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.441 qpair failed and we were unable to recover it. 00:38:26.441 [2024-10-09 11:18:46.106161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.441 [2024-10-09 11:18:46.106174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.441 qpair failed and we were unable to recover it. 00:38:26.441 [2024-10-09 11:18:46.106467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.441 [2024-10-09 11:18:46.106478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.441 qpair failed and we were unable to recover it. 00:38:26.441 [2024-10-09 11:18:46.106772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.441 [2024-10-09 11:18:46.106782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.441 qpair failed and we were unable to recover it. 
00:38:26.441 [2024-10-09 11:18:46.107062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.441 [2024-10-09 11:18:46.107071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.441 qpair failed and we were unable to recover it. 00:38:26.441 [2024-10-09 11:18:46.107379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.441 [2024-10-09 11:18:46.107389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.441 qpair failed and we were unable to recover it. 00:38:26.441 [2024-10-09 11:18:46.107704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.441 [2024-10-09 11:18:46.107724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.441 qpair failed and we were unable to recover it. 00:38:26.441 [2024-10-09 11:18:46.108003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.441 [2024-10-09 11:18:46.108013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.441 qpair failed and we were unable to recover it. 00:38:26.441 [2024-10-09 11:18:46.108217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.441 [2024-10-09 11:18:46.108227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.441 qpair failed and we were unable to recover it. 00:38:26.441 [2024-10-09 11:18:46.108413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.441 [2024-10-09 11:18:46.108423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.441 qpair failed and we were unable to recover it. 00:38:26.441 [2024-10-09 11:18:46.108764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.441 [2024-10-09 11:18:46.108774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.441 qpair failed and we were unable to recover it. 00:38:26.441 [2024-10-09 11:18:46.109079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.441 [2024-10-09 11:18:46.109090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.441 qpair failed and we were unable to recover it. 00:38:26.441 [2024-10-09 11:18:46.109394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.441 [2024-10-09 11:18:46.109403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.441 qpair failed and we were unable to recover it. 00:38:26.441 [2024-10-09 11:18:46.109684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.441 [2024-10-09 11:18:46.109694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.441 qpair failed and we were unable to recover it. 
00:38:26.441 [2024-10-09 11:18:46.109997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.441 [2024-10-09 11:18:46.110007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.441 qpair failed and we were unable to recover it. 00:38:26.441 [2024-10-09 11:18:46.110287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.441 [2024-10-09 11:18:46.110297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.441 qpair failed and we were unable to recover it. 00:38:26.441 [2024-10-09 11:18:46.110614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.441 [2024-10-09 11:18:46.110624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.441 qpair failed and we were unable to recover it. 00:38:26.441 [2024-10-09 11:18:46.110904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.441 [2024-10-09 11:18:46.110915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.441 qpair failed and we were unable to recover it. 00:38:26.441 [2024-10-09 11:18:46.111217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.441 [2024-10-09 11:18:46.111227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.441 qpair failed and we were unable to recover it. 00:38:26.441 [2024-10-09 11:18:46.111512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.441 [2024-10-09 11:18:46.111523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.441 qpair failed and we were unable to recover it. 00:38:26.441 [2024-10-09 11:18:46.111823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.441 [2024-10-09 11:18:46.111832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.441 qpair failed and we were unable to recover it. 00:38:26.441 [2024-10-09 11:18:46.112137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.441 [2024-10-09 11:18:46.112147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.441 qpair failed and we were unable to recover it. 00:38:26.441 [2024-10-09 11:18:46.112355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.441 [2024-10-09 11:18:46.112364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.441 qpair failed and we were unable to recover it. 00:38:26.441 [2024-10-09 11:18:46.112679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.441 [2024-10-09 11:18:46.112689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.441 qpair failed and we were unable to recover it. 
00:38:26.441 [2024-10-09 11:18:46.113007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.441 [2024-10-09 11:18:46.113016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.441 qpair failed and we were unable to recover it. 00:38:26.441 [2024-10-09 11:18:46.113293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.441 [2024-10-09 11:18:46.113304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.441 qpair failed and we were unable to recover it. 00:38:26.441 [2024-10-09 11:18:46.113615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.441 [2024-10-09 11:18:46.113625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.441 qpair failed and we were unable to recover it. 00:38:26.441 [2024-10-09 11:18:46.113908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.441 [2024-10-09 11:18:46.113918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.441 qpair failed and we were unable to recover it. 00:38:26.441 [2024-10-09 11:18:46.114227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.441 [2024-10-09 11:18:46.114240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.441 qpair failed and we were unable to recover it. 00:38:26.441 [2024-10-09 11:18:46.114545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.441 [2024-10-09 11:18:46.114557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.441 qpair failed and we were unable to recover it. 00:38:26.441 [2024-10-09 11:18:46.114858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.441 [2024-10-09 11:18:46.114868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.441 qpair failed and we were unable to recover it. 00:38:26.441 [2024-10-09 11:18:46.115159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.441 [2024-10-09 11:18:46.115170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.441 qpair failed and we were unable to recover it. 00:38:26.441 [2024-10-09 11:18:46.115352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.441 [2024-10-09 11:18:46.115362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.441 qpair failed and we were unable to recover it. 00:38:26.441 [2024-10-09 11:18:46.115557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.441 [2024-10-09 11:18:46.115568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.441 qpair failed and we were unable to recover it. 
00:38:26.441 [2024-10-09 11:18:46.115776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.441 [2024-10-09 11:18:46.115786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.441 qpair failed and we were unable to recover it.
[... the same three-message sequence (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeats without variation from 11:18:46.115776 through 11:18:46.178919; roughly 200 near-identical occurrences elided ...]
00:38:26.447 [2024-10-09 11:18:46.179213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.447 [2024-10-09 11:18:46.179223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.447 qpair failed and we were unable to recover it. 00:38:26.447 [2024-10-09 11:18:46.179414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.447 [2024-10-09 11:18:46.179427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.447 qpair failed and we were unable to recover it. 00:38:26.447 [2024-10-09 11:18:46.179700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.447 [2024-10-09 11:18:46.179710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.447 qpair failed and we were unable to recover it. 00:38:26.447 [2024-10-09 11:18:46.180020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.447 [2024-10-09 11:18:46.180029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.447 qpair failed and we were unable to recover it. 00:38:26.447 [2024-10-09 11:18:46.180224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.447 [2024-10-09 11:18:46.180234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.447 qpair failed and we were unable to recover it. 00:38:26.447 [2024-10-09 11:18:46.180652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.447 [2024-10-09 11:18:46.180663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.447 qpair failed and we were unable to recover it. 00:38:26.447 [2024-10-09 11:18:46.180941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.447 [2024-10-09 11:18:46.180952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.447 qpair failed and we were unable to recover it. 00:38:26.447 [2024-10-09 11:18:46.181257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.447 [2024-10-09 11:18:46.181267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.447 qpair failed and we were unable to recover it. 00:38:26.447 [2024-10-09 11:18:46.181546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.447 [2024-10-09 11:18:46.181557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.447 qpair failed and we were unable to recover it. 00:38:26.447 [2024-10-09 11:18:46.181860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.447 [2024-10-09 11:18:46.181871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.447 qpair failed and we were unable to recover it. 
00:38:26.447 [2024-10-09 11:18:46.182071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.447 [2024-10-09 11:18:46.182080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.447 qpair failed and we were unable to recover it. 00:38:26.447 [2024-10-09 11:18:46.182398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.447 [2024-10-09 11:18:46.182408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.447 qpair failed and we were unable to recover it. 00:38:26.447 [2024-10-09 11:18:46.182608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.447 [2024-10-09 11:18:46.182618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.447 qpair failed and we were unable to recover it. 00:38:26.447 [2024-10-09 11:18:46.182910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.447 [2024-10-09 11:18:46.182919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.447 qpair failed and we were unable to recover it. 00:38:26.447 [2024-10-09 11:18:46.183244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.447 [2024-10-09 11:18:46.183254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.447 qpair failed and we were unable to recover it. 00:38:26.447 [2024-10-09 11:18:46.183536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.447 [2024-10-09 11:18:46.183546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.447 qpair failed and we were unable to recover it. 00:38:26.447 [2024-10-09 11:18:46.183947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.447 [2024-10-09 11:18:46.183957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.447 qpair failed and we were unable to recover it. 00:38:26.447 [2024-10-09 11:18:46.184183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.447 [2024-10-09 11:18:46.184192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.447 qpair failed and we were unable to recover it. 00:38:26.447 [2024-10-09 11:18:46.184510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.447 [2024-10-09 11:18:46.184520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.447 qpair failed and we were unable to recover it. 00:38:26.447 [2024-10-09 11:18:46.184827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.447 [2024-10-09 11:18:46.184837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.447 qpair failed and we were unable to recover it. 
00:38:26.447 [2024-10-09 11:18:46.185128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.447 [2024-10-09 11:18:46.185138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.447 qpair failed and we were unable to recover it. 00:38:26.447 [2024-10-09 11:18:46.185433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.447 [2024-10-09 11:18:46.185443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.447 qpair failed and we were unable to recover it. 00:38:26.447 [2024-10-09 11:18:46.185780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.447 [2024-10-09 11:18:46.185790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.447 qpair failed and we were unable to recover it. 00:38:26.447 [2024-10-09 11:18:46.185998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.447 [2024-10-09 11:18:46.186008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.447 qpair failed and we were unable to recover it. 00:38:26.447 [2024-10-09 11:18:46.186304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.447 [2024-10-09 11:18:46.186313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.447 qpair failed and we were unable to recover it. 00:38:26.447 [2024-10-09 11:18:46.186618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.448 [2024-10-09 11:18:46.186629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.448 qpair failed and we were unable to recover it. 00:38:26.448 [2024-10-09 11:18:46.186940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.448 [2024-10-09 11:18:46.186950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.448 qpair failed and we were unable to recover it. 00:38:26.448 [2024-10-09 11:18:46.187142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.448 [2024-10-09 11:18:46.187152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.448 qpair failed and we were unable to recover it. 00:38:26.448 [2024-10-09 11:18:46.187482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.448 [2024-10-09 11:18:46.187495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.448 qpair failed and we were unable to recover it. 00:38:26.448 [2024-10-09 11:18:46.187792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.448 [2024-10-09 11:18:46.187801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.448 qpair failed and we were unable to recover it. 
00:38:26.448 [2024-10-09 11:18:46.188120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.448 [2024-10-09 11:18:46.188129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.448 qpair failed and we were unable to recover it. 00:38:26.448 [2024-10-09 11:18:46.188438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.448 [2024-10-09 11:18:46.188448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.448 qpair failed and we were unable to recover it. 00:38:26.448 [2024-10-09 11:18:46.188803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.448 [2024-10-09 11:18:46.188814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.448 qpair failed and we were unable to recover it. 00:38:26.448 [2024-10-09 11:18:46.188973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.448 [2024-10-09 11:18:46.188983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.448 qpair failed and we were unable to recover it. 00:38:26.448 [2024-10-09 11:18:46.189190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.448 [2024-10-09 11:18:46.189200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.448 qpair failed and we were unable to recover it. 00:38:26.448 [2024-10-09 11:18:46.189400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.448 [2024-10-09 11:18:46.189410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.448 qpair failed and we were unable to recover it. 00:38:26.448 [2024-10-09 11:18:46.189728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.448 [2024-10-09 11:18:46.189738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.448 qpair failed and we were unable to recover it. 00:38:26.448 [2024-10-09 11:18:46.190039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.448 [2024-10-09 11:18:46.190049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.448 qpair failed and we were unable to recover it. 00:38:26.448 [2024-10-09 11:18:46.190357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.448 [2024-10-09 11:18:46.190367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.448 qpair failed and we were unable to recover it. 00:38:26.448 [2024-10-09 11:18:46.190677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.448 [2024-10-09 11:18:46.190688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.448 qpair failed and we were unable to recover it. 
00:38:26.448 [2024-10-09 11:18:46.191000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.448 [2024-10-09 11:18:46.191010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.448 qpair failed and we were unable to recover it. 00:38:26.448 [2024-10-09 11:18:46.191316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.448 [2024-10-09 11:18:46.191327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.448 qpair failed and we were unable to recover it. 00:38:26.448 [2024-10-09 11:18:46.191629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.448 [2024-10-09 11:18:46.191640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.448 qpair failed and we were unable to recover it. 00:38:26.448 [2024-10-09 11:18:46.191923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.448 [2024-10-09 11:18:46.191933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.448 qpair failed and we were unable to recover it. 00:38:26.448 [2024-10-09 11:18:46.192214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.448 [2024-10-09 11:18:46.192223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.448 qpair failed and we were unable to recover it. 00:38:26.448 [2024-10-09 11:18:46.192536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.448 [2024-10-09 11:18:46.192546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.448 qpair failed and we were unable to recover it. 00:38:26.448 [2024-10-09 11:18:46.192850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.448 [2024-10-09 11:18:46.192859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.448 qpair failed and we were unable to recover it. 00:38:26.448 [2024-10-09 11:18:46.193158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.448 [2024-10-09 11:18:46.193168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.448 qpair failed and we were unable to recover it. 00:38:26.448 [2024-10-09 11:18:46.193448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.448 [2024-10-09 11:18:46.193457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.448 qpair failed and we were unable to recover it. 00:38:26.448 [2024-10-09 11:18:46.193758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.448 [2024-10-09 11:18:46.193768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.448 qpair failed and we were unable to recover it. 
00:38:26.448 [2024-10-09 11:18:46.194086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.448 [2024-10-09 11:18:46.194096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.448 qpair failed and we were unable to recover it. 00:38:26.448 [2024-10-09 11:18:46.194406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.448 [2024-10-09 11:18:46.194416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.448 qpair failed and we were unable to recover it. 00:38:26.448 [2024-10-09 11:18:46.194714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.448 [2024-10-09 11:18:46.194724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.448 qpair failed and we were unable to recover it. 00:38:26.448 [2024-10-09 11:18:46.195016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.448 [2024-10-09 11:18:46.195026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.448 qpair failed and we were unable to recover it. 00:38:26.448 [2024-10-09 11:18:46.195286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.448 [2024-10-09 11:18:46.195297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.448 qpair failed and we were unable to recover it. 00:38:26.448 [2024-10-09 11:18:46.195609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.448 [2024-10-09 11:18:46.195620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.448 qpair failed and we were unable to recover it. 00:38:26.448 [2024-10-09 11:18:46.195948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.448 [2024-10-09 11:18:46.195958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.448 qpair failed and we were unable to recover it. 00:38:26.448 [2024-10-09 11:18:46.196282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.448 [2024-10-09 11:18:46.196293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.448 qpair failed and we were unable to recover it. 00:38:26.448 [2024-10-09 11:18:46.196593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.448 [2024-10-09 11:18:46.196603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.448 qpair failed and we were unable to recover it. 00:38:26.448 [2024-10-09 11:18:46.196905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.448 [2024-10-09 11:18:46.196914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.448 qpair failed and we were unable to recover it. 
00:38:26.448 [2024-10-09 11:18:46.197191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.448 [2024-10-09 11:18:46.197201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.448 qpair failed and we were unable to recover it. 00:38:26.448 [2024-10-09 11:18:46.197510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.448 [2024-10-09 11:18:46.197520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.448 qpair failed and we were unable to recover it. 00:38:26.448 [2024-10-09 11:18:46.197827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.448 [2024-10-09 11:18:46.197837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.448 qpair failed and we were unable to recover it. 00:38:26.448 [2024-10-09 11:18:46.198133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.448 [2024-10-09 11:18:46.198149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.448 qpair failed and we were unable to recover it. 00:38:26.448 [2024-10-09 11:18:46.198428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.448 [2024-10-09 11:18:46.198438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.449 qpair failed and we were unable to recover it. 00:38:26.449 [2024-10-09 11:18:46.198633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.449 [2024-10-09 11:18:46.198643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.449 qpair failed and we were unable to recover it. 00:38:26.449 [2024-10-09 11:18:46.198979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.449 [2024-10-09 11:18:46.198988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.449 qpair failed and we were unable to recover it. 00:38:26.449 [2024-10-09 11:18:46.199383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.449 [2024-10-09 11:18:46.199393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.449 qpair failed and we were unable to recover it. 00:38:26.449 [2024-10-09 11:18:46.199694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.449 [2024-10-09 11:18:46.199705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.449 qpair failed and we were unable to recover it. 00:38:26.449 [2024-10-09 11:18:46.199985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.449 [2024-10-09 11:18:46.199995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.449 qpair failed and we were unable to recover it. 
00:38:26.449 [2024-10-09 11:18:46.200299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.449 [2024-10-09 11:18:46.200309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.449 qpair failed and we were unable to recover it. 00:38:26.449 [2024-10-09 11:18:46.200589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.449 [2024-10-09 11:18:46.200599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.449 qpair failed and we were unable to recover it. 00:38:26.449 [2024-10-09 11:18:46.200795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.449 [2024-10-09 11:18:46.200805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.449 qpair failed and we were unable to recover it. 00:38:26.449 [2024-10-09 11:18:46.201109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.449 [2024-10-09 11:18:46.201118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.449 qpair failed and we were unable to recover it. 00:38:26.449 [2024-10-09 11:18:46.201425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.449 [2024-10-09 11:18:46.201435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.449 qpair failed and we were unable to recover it. 00:38:26.449 [2024-10-09 11:18:46.201749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.449 [2024-10-09 11:18:46.201759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.449 qpair failed and we were unable to recover it. 00:38:26.449 [2024-10-09 11:18:46.202049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.449 [2024-10-09 11:18:46.202067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.449 qpair failed and we were unable to recover it. 00:38:26.449 [2024-10-09 11:18:46.202254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.449 [2024-10-09 11:18:46.202264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.449 qpair failed and we were unable to recover it. 00:38:26.449 [2024-10-09 11:18:46.202458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.449 [2024-10-09 11:18:46.202470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.449 qpair failed and we were unable to recover it. 00:38:26.449 [2024-10-09 11:18:46.202757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.449 [2024-10-09 11:18:46.202766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.449 qpair failed and we were unable to recover it. 
00:38:26.449 [2024-10-09 11:18:46.203083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.449 [2024-10-09 11:18:46.203093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.449 qpair failed and we were unable to recover it. 00:38:26.449 [2024-10-09 11:18:46.203395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.449 [2024-10-09 11:18:46.203405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.449 qpair failed and we were unable to recover it. 00:38:26.449 [2024-10-09 11:18:46.203702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.449 [2024-10-09 11:18:46.203713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.449 qpair failed and we were unable to recover it. 00:38:26.449 [2024-10-09 11:18:46.204101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.449 [2024-10-09 11:18:46.204111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.449 qpair failed and we were unable to recover it. 00:38:26.449 [2024-10-09 11:18:46.204488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.449 [2024-10-09 11:18:46.204498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.449 qpair failed and we were unable to recover it. 00:38:26.449 [2024-10-09 11:18:46.204816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.449 [2024-10-09 11:18:46.204826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.449 qpair failed and we were unable to recover it. 00:38:26.449 [2024-10-09 11:18:46.205000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.449 [2024-10-09 11:18:46.205009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.449 qpair failed and we were unable to recover it. 00:38:26.449 [2024-10-09 11:18:46.205312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.449 [2024-10-09 11:18:46.205321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.449 qpair failed and we were unable to recover it. 00:38:26.449 [2024-10-09 11:18:46.205682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.449 [2024-10-09 11:18:46.205692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.449 qpair failed and we were unable to recover it. 00:38:26.449 [2024-10-09 11:18:46.205993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.449 [2024-10-09 11:18:46.206003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.449 qpair failed and we were unable to recover it. 
00:38:26.449 [2024-10-09 11:18:46.206312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.449 [2024-10-09 11:18:46.206321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.449 qpair failed and we were unable to recover it. 00:38:26.449 [2024-10-09 11:18:46.206628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.449 [2024-10-09 11:18:46.206638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.449 qpair failed and we were unable to recover it. 00:38:26.449 [2024-10-09 11:18:46.206912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.449 [2024-10-09 11:18:46.206922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.449 qpair failed and we were unable to recover it. 00:38:26.449 [2024-10-09 11:18:46.207230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.449 [2024-10-09 11:18:46.207240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.449 qpair failed and we were unable to recover it. 00:38:26.449 [2024-10-09 11:18:46.207527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.449 [2024-10-09 11:18:46.207537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.449 qpair failed and we were unable to recover it. 00:38:26.449 [2024-10-09 11:18:46.207833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.449 [2024-10-09 11:18:46.207843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.449 qpair failed and we were unable to recover it. 00:38:26.449 [2024-10-09 11:18:46.208162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.449 [2024-10-09 11:18:46.208174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.449 qpair failed and we were unable to recover it. 00:38:26.449 [2024-10-09 11:18:46.208479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.449 [2024-10-09 11:18:46.208489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.449 qpair failed and we were unable to recover it. 00:38:26.449 [2024-10-09 11:18:46.208777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.449 [2024-10-09 11:18:46.208786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.449 qpair failed and we were unable to recover it. 00:38:26.449 [2024-10-09 11:18:46.209091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.449 [2024-10-09 11:18:46.209102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.449 qpair failed and we were unable to recover it. 
00:38:26.449 [2024-10-09 11:18:46.209383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.449 [2024-10-09 11:18:46.209393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.449 qpair failed and we were unable to recover it. 00:38:26.449 [2024-10-09 11:18:46.209574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.449 [2024-10-09 11:18:46.209586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.449 qpair failed and we were unable to recover it. 00:38:26.449 [2024-10-09 11:18:46.209927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.449 [2024-10-09 11:18:46.209937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.449 qpair failed and we were unable to recover it. 00:38:26.449 [2024-10-09 11:18:46.210250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.449 [2024-10-09 11:18:46.210260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.449 qpair failed and we were unable to recover it. 00:38:26.449 [2024-10-09 11:18:46.210471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.449 [2024-10-09 11:18:46.210481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.449 qpair failed and we were unable to recover it. 00:38:26.449 [2024-10-09 11:18:46.210798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.450 [2024-10-09 11:18:46.210807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.450 qpair failed and we were unable to recover it. 00:38:26.450 [2024-10-09 11:18:46.211112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.450 [2024-10-09 11:18:46.211121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.450 qpair failed and we were unable to recover it. 00:38:26.450 [2024-10-09 11:18:46.211420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.450 [2024-10-09 11:18:46.211430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.450 qpair failed and we were unable to recover it. 00:38:26.450 [2024-10-09 11:18:46.211749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.450 [2024-10-09 11:18:46.211759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.450 qpair failed and we were unable to recover it. 00:38:26.450 [2024-10-09 11:18:46.212137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.450 [2024-10-09 11:18:46.212147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.450 qpair failed and we were unable to recover it. 
00:38:26.450 [2024-10-09 11:18:46.212454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.450 [2024-10-09 11:18:46.212471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.450 qpair failed and we were unable to recover it. 00:38:26.450 [2024-10-09 11:18:46.212791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.450 [2024-10-09 11:18:46.212801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.450 qpair failed and we were unable to recover it. 00:38:26.450 [2024-10-09 11:18:46.213084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.450 [2024-10-09 11:18:46.213093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.450 qpair failed and we were unable to recover it. 00:38:26.450 [2024-10-09 11:18:46.213409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.450 [2024-10-09 11:18:46.213419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.450 qpair failed and we were unable to recover it. 00:38:26.450 [2024-10-09 11:18:46.213688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.450 [2024-10-09 11:18:46.213699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.450 qpair failed and we were unable to recover it. 00:38:26.450 [2024-10-09 11:18:46.214004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.450 [2024-10-09 11:18:46.214014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.450 qpair failed and we were unable to recover it. 00:38:26.450 [2024-10-09 11:18:46.214342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.450 [2024-10-09 11:18:46.214353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.450 qpair failed and we were unable to recover it. 00:38:26.450 [2024-10-09 11:18:46.214675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.450 [2024-10-09 11:18:46.214685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.450 qpair failed and we were unable to recover it. 00:38:26.450 [2024-10-09 11:18:46.214995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.450 [2024-10-09 11:18:46.215005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.450 qpair failed and we were unable to recover it. 00:38:26.450 [2024-10-09 11:18:46.215312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.450 [2024-10-09 11:18:46.215322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.450 qpair failed and we were unable to recover it. 
00:38:26.450 [2024-10-09 11:18:46.215617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.450 [2024-10-09 11:18:46.215627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.450 qpair failed and we were unable to recover it. 00:38:26.450 [2024-10-09 11:18:46.215931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.450 [2024-10-09 11:18:46.215940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.450 qpair failed and we were unable to recover it. 00:38:26.450 [2024-10-09 11:18:46.216248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.450 [2024-10-09 11:18:46.216258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.450 qpair failed and we were unable to recover it. 00:38:26.450 [2024-10-09 11:18:46.216558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.450 [2024-10-09 11:18:46.216570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.450 qpair failed and we were unable to recover it. 00:38:26.450 [2024-10-09 11:18:46.216886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.450 [2024-10-09 11:18:46.216896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.450 qpair failed and we were unable to recover it. 00:38:26.450 [2024-10-09 11:18:46.217207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.450 [2024-10-09 11:18:46.217217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.450 qpair failed and we were unable to recover it. 00:38:26.450 [2024-10-09 11:18:46.217527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.450 [2024-10-09 11:18:46.217537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.450 qpair failed and we were unable to recover it. 00:38:26.450 [2024-10-09 11:18:46.217892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.450 [2024-10-09 11:18:46.217903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.450 qpair failed and we were unable to recover it. 00:38:26.450 [2024-10-09 11:18:46.218205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.450 [2024-10-09 11:18:46.218215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.450 qpair failed and we were unable to recover it. 00:38:26.450 [2024-10-09 11:18:46.218379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.450 [2024-10-09 11:18:46.218388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.450 qpair failed and we were unable to recover it. 
00:38:26.450 [2024-10-09 11:18:46.218754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.450 [2024-10-09 11:18:46.218764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.450 qpair failed and we were unable to recover it.
[... the same three-line error (posix.c:1055:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats roughly 210 more times between 11:18:46.218754 and 11:18:46.280801 as the initiator retries the connection ...]
00:38:26.456 [2024-10-09 11:18:46.280791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.456 [2024-10-09 11:18:46.280801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.456 qpair failed and we were unable to recover it.
00:38:26.456 [2024-10-09 11:18:46.281009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.456 [2024-10-09 11:18:46.281021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.456 qpair failed and we were unable to recover it. 00:38:26.456 [2024-10-09 11:18:46.281207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.456 [2024-10-09 11:18:46.281219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.456 qpair failed and we were unable to recover it. 00:38:26.456 [2024-10-09 11:18:46.281528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.456 [2024-10-09 11:18:46.281539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.456 qpair failed and we were unable to recover it. 00:38:26.456 [2024-10-09 11:18:46.281863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.456 [2024-10-09 11:18:46.281873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.456 qpair failed and we were unable to recover it. 00:38:26.456 [2024-10-09 11:18:46.282192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.456 [2024-10-09 11:18:46.282202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.456 qpair failed and we were unable to recover it. 00:38:26.456 [2024-10-09 11:18:46.282485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.456 [2024-10-09 11:18:46.282495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.456 qpair failed and we were unable to recover it. 00:38:26.456 [2024-10-09 11:18:46.282814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.456 [2024-10-09 11:18:46.282824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.456 qpair failed and we were unable to recover it. 00:38:26.456 [2024-10-09 11:18:46.283116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.456 [2024-10-09 11:18:46.283127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.456 qpair failed and we were unable to recover it. 00:38:26.456 [2024-10-09 11:18:46.283433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.456 [2024-10-09 11:18:46.283443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.456 qpair failed and we were unable to recover it. 00:38:26.456 [2024-10-09 11:18:46.283766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.456 [2024-10-09 11:18:46.283776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.456 qpair failed and we were unable to recover it. 
00:38:26.456 [2024-10-09 11:18:46.284081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.456 [2024-10-09 11:18:46.284090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.456 qpair failed and we were unable to recover it. 00:38:26.456 [2024-10-09 11:18:46.284398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.456 [2024-10-09 11:18:46.284408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.456 qpair failed and we were unable to recover it. 00:38:26.456 [2024-10-09 11:18:46.284593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.456 [2024-10-09 11:18:46.284604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.456 qpair failed and we were unable to recover it. 00:38:26.456 [2024-10-09 11:18:46.284933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.456 [2024-10-09 11:18:46.284942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.456 qpair failed and we were unable to recover it. 00:38:26.456 [2024-10-09 11:18:46.285273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.456 [2024-10-09 11:18:46.285283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.456 qpair failed and we were unable to recover it. 00:38:26.456 [2024-10-09 11:18:46.285484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.456 [2024-10-09 11:18:46.285494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.456 qpair failed and we were unable to recover it. 00:38:26.456 [2024-10-09 11:18:46.285818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.456 [2024-10-09 11:18:46.285828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.456 qpair failed and we were unable to recover it. 00:38:26.456 [2024-10-09 11:18:46.286033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.456 [2024-10-09 11:18:46.286043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.456 qpair failed and we were unable to recover it. 00:38:26.456 [2024-10-09 11:18:46.286371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.456 [2024-10-09 11:18:46.286382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.456 qpair failed and we were unable to recover it. 00:38:26.456 [2024-10-09 11:18:46.286675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.456 [2024-10-09 11:18:46.286685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.456 qpair failed and we were unable to recover it. 
00:38:26.456 [2024-10-09 11:18:46.287027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.456 [2024-10-09 11:18:46.287037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.456 qpair failed and we were unable to recover it. 00:38:26.457 [2024-10-09 11:18:46.287349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.457 [2024-10-09 11:18:46.287359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.457 qpair failed and we were unable to recover it. 00:38:26.457 [2024-10-09 11:18:46.287684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.457 [2024-10-09 11:18:46.287695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.457 qpair failed and we were unable to recover it. 00:38:26.457 [2024-10-09 11:18:46.288005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.457 [2024-10-09 11:18:46.288015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.457 qpair failed and we were unable to recover it. 00:38:26.457 [2024-10-09 11:18:46.288203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.457 [2024-10-09 11:18:46.288213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.457 qpair failed and we were unable to recover it. 00:38:26.457 [2024-10-09 11:18:46.288553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.457 [2024-10-09 11:18:46.288563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.457 qpair failed and we were unable to recover it. 00:38:26.457 [2024-10-09 11:18:46.288803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.457 [2024-10-09 11:18:46.288813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.457 qpair failed and we were unable to recover it. 00:38:26.457 [2024-10-09 11:18:46.289129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.457 [2024-10-09 11:18:46.289138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.457 qpair failed and we were unable to recover it. 00:38:26.457 [2024-10-09 11:18:46.289452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.457 [2024-10-09 11:18:46.289462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.457 qpair failed and we were unable to recover it. 00:38:26.457 [2024-10-09 11:18:46.289659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.457 [2024-10-09 11:18:46.289669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.457 qpair failed and we were unable to recover it. 
00:38:26.457 [2024-10-09 11:18:46.289795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.457 [2024-10-09 11:18:46.289804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.457 qpair failed and we were unable to recover it. 00:38:26.457 [2024-10-09 11:18:46.290015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.457 [2024-10-09 11:18:46.290025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.457 qpair failed and we were unable to recover it. 00:38:26.457 [2024-10-09 11:18:46.290211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.457 [2024-10-09 11:18:46.290221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.457 qpair failed and we were unable to recover it. 00:38:26.457 [2024-10-09 11:18:46.290533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.457 [2024-10-09 11:18:46.290543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.457 qpair failed and we were unable to recover it. 00:38:26.457 [2024-10-09 11:18:46.290846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.457 [2024-10-09 11:18:46.290857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.457 qpair failed and we were unable to recover it. 00:38:26.457 [2024-10-09 11:18:46.291136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.457 [2024-10-09 11:18:46.291146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.457 qpair failed and we were unable to recover it. 00:38:26.457 [2024-10-09 11:18:46.291434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.457 [2024-10-09 11:18:46.291445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.457 qpair failed and we were unable to recover it. 00:38:26.457 [2024-10-09 11:18:46.291765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.457 [2024-10-09 11:18:46.291775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.457 qpair failed and we were unable to recover it. 00:38:26.457 [2024-10-09 11:18:46.291999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.457 [2024-10-09 11:18:46.292009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.457 qpair failed and we were unable to recover it. 00:38:26.457 [2024-10-09 11:18:46.292319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.457 [2024-10-09 11:18:46.292328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.457 qpair failed and we were unable to recover it. 
00:38:26.457 [2024-10-09 11:18:46.292604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.457 [2024-10-09 11:18:46.292614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.457 qpair failed and we were unable to recover it. 00:38:26.457 [2024-10-09 11:18:46.292814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.457 [2024-10-09 11:18:46.292825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.457 qpair failed and we were unable to recover it. 00:38:26.457 [2024-10-09 11:18:46.293156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.457 [2024-10-09 11:18:46.293166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.457 qpair failed and we were unable to recover it. 00:38:26.457 [2024-10-09 11:18:46.293454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.457 [2024-10-09 11:18:46.293468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.457 qpair failed and we were unable to recover it. 00:38:26.457 [2024-10-09 11:18:46.293523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.457 [2024-10-09 11:18:46.293534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.457 qpair failed and we were unable to recover it. 00:38:26.457 [2024-10-09 11:18:46.293875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.457 [2024-10-09 11:18:46.293884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.458 qpair failed and we were unable to recover it. 00:38:26.458 [2024-10-09 11:18:46.294170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.458 [2024-10-09 11:18:46.294181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.458 qpair failed and we were unable to recover it. 00:38:26.458 [2024-10-09 11:18:46.294390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.458 [2024-10-09 11:18:46.294400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.458 qpair failed and we were unable to recover it. 00:38:26.458 [2024-10-09 11:18:46.294720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.458 [2024-10-09 11:18:46.294730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.458 qpair failed and we were unable to recover it. 00:38:26.458 [2024-10-09 11:18:46.294933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.458 [2024-10-09 11:18:46.294943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.458 qpair failed and we were unable to recover it. 
00:38:26.458 [2024-10-09 11:18:46.295257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.458 [2024-10-09 11:18:46.295267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.458 qpair failed and we were unable to recover it. 00:38:26.458 [2024-10-09 11:18:46.295612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.458 [2024-10-09 11:18:46.295622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.458 qpair failed and we were unable to recover it. 00:38:26.458 [2024-10-09 11:18:46.295913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.458 [2024-10-09 11:18:46.295923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.458 qpair failed and we were unable to recover it. 00:38:26.458 [2024-10-09 11:18:46.296082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.458 [2024-10-09 11:18:46.296093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.458 qpair failed and we were unable to recover it. 00:38:26.458 [2024-10-09 11:18:46.296208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.458 [2024-10-09 11:18:46.296218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.458 qpair failed and we were unable to recover it. 00:38:26.458 [2024-10-09 11:18:46.296539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.458 [2024-10-09 11:18:46.296549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.458 qpair failed and we were unable to recover it. 00:38:26.458 [2024-10-09 11:18:46.296830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.458 [2024-10-09 11:18:46.296840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.458 qpair failed and we were unable to recover it. 00:38:26.458 [2024-10-09 11:18:46.297168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.458 [2024-10-09 11:18:46.297178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.458 qpair failed and we were unable to recover it. 00:38:26.458 [2024-10-09 11:18:46.297505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.458 [2024-10-09 11:18:46.297515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.458 qpair failed and we were unable to recover it. 00:38:26.458 [2024-10-09 11:18:46.297837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.458 [2024-10-09 11:18:46.297847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.458 qpair failed and we were unable to recover it. 
00:38:26.458 [2024-10-09 11:18:46.298167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.458 [2024-10-09 11:18:46.298178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.458 qpair failed and we were unable to recover it. 00:38:26.458 [2024-10-09 11:18:46.298492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.458 [2024-10-09 11:18:46.298503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.458 qpair failed and we were unable to recover it. 00:38:26.458 [2024-10-09 11:18:46.298818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.458 [2024-10-09 11:18:46.298828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.458 qpair failed and we were unable to recover it. 00:38:26.458 [2024-10-09 11:18:46.298991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.458 [2024-10-09 11:18:46.299001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.458 qpair failed and we were unable to recover it. 00:38:26.458 [2024-10-09 11:18:46.299268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.458 [2024-10-09 11:18:46.299278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.458 qpair failed and we were unable to recover it. 00:38:26.458 [2024-10-09 11:18:46.299587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.458 [2024-10-09 11:18:46.299597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.458 qpair failed and we were unable to recover it. 00:38:26.458 [2024-10-09 11:18:46.299967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.458 [2024-10-09 11:18:46.299978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.458 qpair failed and we were unable to recover it. 00:38:26.458 [2024-10-09 11:18:46.300316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.458 [2024-10-09 11:18:46.300325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.458 qpair failed and we were unable to recover it. 00:38:26.458 [2024-10-09 11:18:46.300496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.458 [2024-10-09 11:18:46.300511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.458 qpair failed and we were unable to recover it. 00:38:26.458 [2024-10-09 11:18:46.300724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.458 [2024-10-09 11:18:46.300734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.458 qpair failed and we were unable to recover it. 
00:38:26.458 [2024-10-09 11:18:46.300943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.458 [2024-10-09 11:18:46.300953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.458 qpair failed and we were unable to recover it. 00:38:26.458 [2024-10-09 11:18:46.301261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.458 [2024-10-09 11:18:46.301271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.458 qpair failed and we were unable to recover it. 00:38:26.458 [2024-10-09 11:18:46.301580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.458 [2024-10-09 11:18:46.301590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.458 qpair failed and we were unable to recover it. 00:38:26.458 [2024-10-09 11:18:46.301945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.458 [2024-10-09 11:18:46.301955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.458 qpair failed and we were unable to recover it. 00:38:26.458 [2024-10-09 11:18:46.302255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.459 [2024-10-09 11:18:46.302265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.459 qpair failed and we were unable to recover it. 00:38:26.459 [2024-10-09 11:18:46.302593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.459 [2024-10-09 11:18:46.302603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.459 qpair failed and we were unable to recover it. 00:38:26.459 [2024-10-09 11:18:46.302925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.459 [2024-10-09 11:18:46.302935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.459 qpair failed and we were unable to recover it. 00:38:26.459 [2024-10-09 11:18:46.303251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.459 [2024-10-09 11:18:46.303261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.459 qpair failed and we were unable to recover it. 00:38:26.459 [2024-10-09 11:18:46.303568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.459 [2024-10-09 11:18:46.303578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.459 qpair failed and we were unable to recover it. 00:38:26.459 [2024-10-09 11:18:46.303719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.459 [2024-10-09 11:18:46.303730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.459 qpair failed and we were unable to recover it. 
00:38:26.459 [2024-10-09 11:18:46.303917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.459 [2024-10-09 11:18:46.303928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.459 qpair failed and we were unable to recover it. 00:38:26.459 [2024-10-09 11:18:46.304247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.459 [2024-10-09 11:18:46.304256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.459 qpair failed and we were unable to recover it. 00:38:26.459 [2024-10-09 11:18:46.304571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.459 [2024-10-09 11:18:46.304582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.459 qpair failed and we were unable to recover it. 00:38:26.459 [2024-10-09 11:18:46.304904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.459 [2024-10-09 11:18:46.304914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.459 qpair failed and we were unable to recover it. 00:38:26.459 [2024-10-09 11:18:46.305109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.459 [2024-10-09 11:18:46.305118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.459 qpair failed and we were unable to recover it. 00:38:26.459 [2024-10-09 11:18:46.305336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.459 [2024-10-09 11:18:46.305346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.459 qpair failed and we were unable to recover it. 00:38:26.459 [2024-10-09 11:18:46.305639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.459 [2024-10-09 11:18:46.305650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.459 qpair failed and we were unable to recover it. 00:38:26.459 [2024-10-09 11:18:46.305827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.459 [2024-10-09 11:18:46.305838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.459 qpair failed and we were unable to recover it. 00:38:26.459 [2024-10-09 11:18:46.306166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.459 [2024-10-09 11:18:46.306176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.459 qpair failed and we were unable to recover it. 00:38:26.459 [2024-10-09 11:18:46.306345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.459 [2024-10-09 11:18:46.306355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.459 qpair failed and we were unable to recover it. 
00:38:26.459 [2024-10-09 11:18:46.306535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.459 [2024-10-09 11:18:46.306546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.459 qpair failed and we were unable to recover it. 00:38:26.459 [2024-10-09 11:18:46.306883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.459 [2024-10-09 11:18:46.306893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.459 qpair failed and we were unable to recover it. 00:38:26.459 [2024-10-09 11:18:46.307105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.459 [2024-10-09 11:18:46.307116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.459 qpair failed and we were unable to recover it. 00:38:26.459 [2024-10-09 11:18:46.307450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.459 [2024-10-09 11:18:46.307460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.459 qpair failed and we were unable to recover it. 00:38:26.459 [2024-10-09 11:18:46.307763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.459 [2024-10-09 11:18:46.307774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.459 qpair failed and we were unable to recover it. 00:38:26.459 [2024-10-09 11:18:46.308086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.459 [2024-10-09 11:18:46.308099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.459 qpair failed and we were unable to recover it. 00:38:26.459 [2024-10-09 11:18:46.308413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.459 [2024-10-09 11:18:46.308423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.459 qpair failed and we were unable to recover it. 00:38:26.459 [2024-10-09 11:18:46.308731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.459 [2024-10-09 11:18:46.308742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.459 qpair failed and we were unable to recover it. 00:38:26.459 [2024-10-09 11:18:46.309091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.459 [2024-10-09 11:18:46.309102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.459 qpair failed and we were unable to recover it. 00:38:26.459 [2024-10-09 11:18:46.309404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.459 [2024-10-09 11:18:46.309414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.459 qpair failed and we were unable to recover it. 
00:38:26.459 [2024-10-09 11:18:46.309604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.459 [2024-10-09 11:18:46.309615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.459 qpair failed and we were unable to recover it. 00:38:26.459 [2024-10-09 11:18:46.309807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.459 [2024-10-09 11:18:46.309818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.459 qpair failed and we were unable to recover it. 00:38:26.459 [2024-10-09 11:18:46.310205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.459 [2024-10-09 11:18:46.310216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.459 qpair failed and we were unable to recover it. 00:38:26.459 [2024-10-09 11:18:46.310529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.459 [2024-10-09 11:18:46.310539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.459 qpair failed and we were unable to recover it. 00:38:26.459 [2024-10-09 11:18:46.310945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.460 [2024-10-09 11:18:46.310955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.460 qpair failed and we were unable to recover it. 00:38:26.460 [2024-10-09 11:18:46.311263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.460 [2024-10-09 11:18:46.311272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.460 qpair failed and we were unable to recover it. 00:38:26.460 [2024-10-09 11:18:46.311663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.460 [2024-10-09 11:18:46.311673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.460 qpair failed and we were unable to recover it. 00:38:26.460 [2024-10-09 11:18:46.311999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.460 [2024-10-09 11:18:46.312009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.460 qpair failed and we were unable to recover it. 00:38:26.460 [2024-10-09 11:18:46.312237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.460 [2024-10-09 11:18:46.312247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.460 qpair failed and we were unable to recover it. 00:38:26.460 [2024-10-09 11:18:46.312567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.460 [2024-10-09 11:18:46.312578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.460 qpair failed and we were unable to recover it. 
00:38:26.460 [2024-10-09 11:18:46.312901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.460 [2024-10-09 11:18:46.312911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.460 qpair failed and we were unable to recover it. 00:38:26.460 [2024-10-09 11:18:46.313183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.460 [2024-10-09 11:18:46.313192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.460 qpair failed and we were unable to recover it. 00:38:26.460 [2024-10-09 11:18:46.313486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.460 [2024-10-09 11:18:46.313496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.460 qpair failed and we were unable to recover it. 00:38:26.460 [2024-10-09 11:18:46.313804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.460 [2024-10-09 11:18:46.313814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.460 qpair failed and we were unable to recover it. 00:38:26.460 [2024-10-09 11:18:46.314125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.460 [2024-10-09 11:18:46.314135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.460 qpair failed and we were unable to recover it. 00:38:26.460 [2024-10-09 11:18:46.314459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.460 [2024-10-09 11:18:46.314473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.460 qpair failed and we were unable to recover it. 00:38:26.460 [2024-10-09 11:18:46.314754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.460 [2024-10-09 11:18:46.314764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.460 qpair failed and we were unable to recover it. 00:38:26.460 [2024-10-09 11:18:46.315073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.460 [2024-10-09 11:18:46.315083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.460 qpair failed and we were unable to recover it. 00:38:26.460 [2024-10-09 11:18:46.315405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.460 [2024-10-09 11:18:46.315414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.460 qpair failed and we were unable to recover it. 00:38:26.460 [2024-10-09 11:18:46.315728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.460 [2024-10-09 11:18:46.315738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.460 qpair failed and we were unable to recover it. 
00:38:26.460 [2024-10-09 11:18:46.316045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.460 [2024-10-09 11:18:46.316054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.460 qpair failed and we were unable to recover it. 00:38:26.460 [2024-10-09 11:18:46.316379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.460 [2024-10-09 11:18:46.316388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.460 qpair failed and we were unable to recover it. 00:38:26.460 [2024-10-09 11:18:46.316678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.460 [2024-10-09 11:18:46.316691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.460 qpair failed and we were unable to recover it. 00:38:26.460 [2024-10-09 11:18:46.317008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.460 [2024-10-09 11:18:46.317018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.460 qpair failed and we were unable to recover it. 00:38:26.460 [2024-10-09 11:18:46.317310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.460 [2024-10-09 11:18:46.317320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.460 qpair failed and we were unable to recover it. 00:38:26.460 [2024-10-09 11:18:46.317631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.460 [2024-10-09 11:18:46.317642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.460 qpair failed and we were unable to recover it. 00:38:26.460 [2024-10-09 11:18:46.317950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.460 [2024-10-09 11:18:46.317960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.460 qpair failed and we were unable to recover it. 00:38:26.460 [2024-10-09 11:18:46.318268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.460 [2024-10-09 11:18:46.318278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.460 qpair failed and we were unable to recover it. 00:38:26.460 [2024-10-09 11:18:46.318448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.460 [2024-10-09 11:18:46.318458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.460 qpair failed and we were unable to recover it. 00:38:26.460 [2024-10-09 11:18:46.318693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.460 [2024-10-09 11:18:46.318703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.460 qpair failed and we were unable to recover it. 
00:38:26.460 [2024-10-09 11:18:46.318985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.460 [2024-10-09 11:18:46.318995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420
00:38:26.460 qpair failed and we were unable to recover it.
[... the same three-line failure pattern (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it.") repeats for roughly 200 further connection attempts, timestamps 11:18:46.319 through 11:18:46.381 ...]
00:38:26.466 [2024-10-09 11:18:46.381533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.466 [2024-10-09 11:18:46.381545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420
00:38:26.466 qpair failed and we were unable to recover it.
00:38:26.466 [2024-10-09 11:18:46.381894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.466 [2024-10-09 11:18:46.381904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.466 qpair failed and we were unable to recover it. 00:38:26.466 [2024-10-09 11:18:46.382190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.466 [2024-10-09 11:18:46.382200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.466 qpair failed and we were unable to recover it. 00:38:26.466 [2024-10-09 11:18:46.382534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.466 [2024-10-09 11:18:46.382544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.466 qpair failed and we were unable to recover it. 00:38:26.466 [2024-10-09 11:18:46.382835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.466 [2024-10-09 11:18:46.382845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.466 qpair failed and we were unable to recover it. 00:38:26.466 [2024-10-09 11:18:46.383147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.466 [2024-10-09 11:18:46.383157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.466 qpair failed and we were unable to recover it. 00:38:26.466 [2024-10-09 11:18:46.383437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.466 [2024-10-09 11:18:46.383447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.466 qpair failed and we were unable to recover it. 00:38:26.466 [2024-10-09 11:18:46.383725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.466 [2024-10-09 11:18:46.383735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.466 qpair failed and we were unable to recover it. 00:38:26.466 [2024-10-09 11:18:46.384030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.466 [2024-10-09 11:18:46.384040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.466 qpair failed and we were unable to recover it. 00:38:26.466 [2024-10-09 11:18:46.384344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.466 [2024-10-09 11:18:46.384354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.466 qpair failed and we were unable to recover it. 00:38:26.466 [2024-10-09 11:18:46.384650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.466 [2024-10-09 11:18:46.384660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.466 qpair failed and we were unable to recover it. 
00:38:26.466 [2024-10-09 11:18:46.384850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.466 [2024-10-09 11:18:46.384859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.466 qpair failed and we were unable to recover it. 00:38:26.466 [2024-10-09 11:18:46.385121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.466 [2024-10-09 11:18:46.385131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.466 qpair failed and we were unable to recover it. 00:38:26.466 [2024-10-09 11:18:46.385429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.466 [2024-10-09 11:18:46.385439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.466 qpair failed and we were unable to recover it. 00:38:26.466 [2024-10-09 11:18:46.385732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.467 [2024-10-09 11:18:46.385743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.467 qpair failed and we were unable to recover it. 00:38:26.467 [2024-10-09 11:18:46.386045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.467 [2024-10-09 11:18:46.386054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.467 qpair failed and we were unable to recover it. 00:38:26.467 [2024-10-09 11:18:46.386357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.467 [2024-10-09 11:18:46.386367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.467 qpair failed and we were unable to recover it. 00:38:26.467 [2024-10-09 11:18:46.386695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.467 [2024-10-09 11:18:46.386705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.467 qpair failed and we were unable to recover it. 00:38:26.467 [2024-10-09 11:18:46.386980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.467 [2024-10-09 11:18:46.386990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.467 qpair failed and we were unable to recover it. 00:38:26.467 [2024-10-09 11:18:46.387309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.467 [2024-10-09 11:18:46.387319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.467 qpair failed and we were unable to recover it. 00:38:26.467 [2024-10-09 11:18:46.387505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.467 [2024-10-09 11:18:46.387515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.467 qpair failed and we were unable to recover it. 
00:38:26.467 [2024-10-09 11:18:46.387794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.467 [2024-10-09 11:18:46.387804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.467 qpair failed and we were unable to recover it. 00:38:26.467 [2024-10-09 11:18:46.388087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.467 [2024-10-09 11:18:46.388097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.467 qpair failed and we were unable to recover it. 00:38:26.467 [2024-10-09 11:18:46.388412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.467 [2024-10-09 11:18:46.388421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.467 qpair failed and we were unable to recover it. 00:38:26.467 [2024-10-09 11:18:46.388606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.467 [2024-10-09 11:18:46.388617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.467 qpair failed and we were unable to recover it. 00:38:26.467 [2024-10-09 11:18:46.388850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.467 [2024-10-09 11:18:46.388860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.467 qpair failed and we were unable to recover it. 00:38:26.467 [2024-10-09 11:18:46.389156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.467 [2024-10-09 11:18:46.389167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.467 qpair failed and we were unable to recover it. 00:38:26.467 [2024-10-09 11:18:46.389399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.467 [2024-10-09 11:18:46.389409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.467 qpair failed and we were unable to recover it. 00:38:26.467 [2024-10-09 11:18:46.389766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.467 [2024-10-09 11:18:46.389776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.467 qpair failed and we were unable to recover it. 00:38:26.467 [2024-10-09 11:18:46.390095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.467 [2024-10-09 11:18:46.390105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.467 qpair failed and we were unable to recover it. 00:38:26.467 [2024-10-09 11:18:46.390399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.467 [2024-10-09 11:18:46.390409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.467 qpair failed and we were unable to recover it. 
00:38:26.467 [2024-10-09 11:18:46.390712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.467 [2024-10-09 11:18:46.390723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.467 qpair failed and we were unable to recover it. 00:38:26.467 [2024-10-09 11:18:46.390993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.467 [2024-10-09 11:18:46.391003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.467 qpair failed and we were unable to recover it. 00:38:26.467 [2024-10-09 11:18:46.391335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.467 [2024-10-09 11:18:46.391345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.467 qpair failed and we were unable to recover it. 00:38:26.467 [2024-10-09 11:18:46.391509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.467 [2024-10-09 11:18:46.391519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.467 qpair failed and we were unable to recover it. 00:38:26.467 [2024-10-09 11:18:46.391807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.467 [2024-10-09 11:18:46.391816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.467 qpair failed and we were unable to recover it. 00:38:26.467 [2024-10-09 11:18:46.392137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.467 [2024-10-09 11:18:46.392147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.467 qpair failed and we were unable to recover it. 00:38:26.467 [2024-10-09 11:18:46.392434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.467 [2024-10-09 11:18:46.392445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.467 qpair failed and we were unable to recover it. 00:38:26.467 [2024-10-09 11:18:46.392664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.467 [2024-10-09 11:18:46.392674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.467 qpair failed and we were unable to recover it. 00:38:26.467 [2024-10-09 11:18:46.393014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.467 [2024-10-09 11:18:46.393023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.467 qpair failed and we were unable to recover it. 00:38:26.467 [2024-10-09 11:18:46.393219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.467 [2024-10-09 11:18:46.393229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.467 qpair failed and we were unable to recover it. 
00:38:26.467 [2024-10-09 11:18:46.393523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.467 [2024-10-09 11:18:46.393533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.467 qpair failed and we were unable to recover it. 00:38:26.467 [2024-10-09 11:18:46.393757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.467 [2024-10-09 11:18:46.393766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.467 qpair failed and we were unable to recover it. 00:38:26.467 [2024-10-09 11:18:46.394123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.467 [2024-10-09 11:18:46.394132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.467 qpair failed and we were unable to recover it. 00:38:26.467 [2024-10-09 11:18:46.394436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.467 [2024-10-09 11:18:46.394446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.467 qpair failed and we were unable to recover it. 00:38:26.467 [2024-10-09 11:18:46.394727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.467 [2024-10-09 11:18:46.394737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.467 qpair failed and we were unable to recover it. 00:38:26.467 [2024-10-09 11:18:46.394928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.467 [2024-10-09 11:18:46.394938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.467 qpair failed and we were unable to recover it. 00:38:26.467 [2024-10-09 11:18:46.395246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.467 [2024-10-09 11:18:46.395256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.467 qpair failed and we were unable to recover it. 00:38:26.467 [2024-10-09 11:18:46.395540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.467 [2024-10-09 11:18:46.395551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.467 qpair failed and we were unable to recover it. 00:38:26.467 [2024-10-09 11:18:46.395854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.467 [2024-10-09 11:18:46.395863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.467 qpair failed and we were unable to recover it. 00:38:26.467 [2024-10-09 11:18:46.396212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.468 [2024-10-09 11:18:46.396221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.468 qpair failed and we were unable to recover it. 
00:38:26.468 [2024-10-09 11:18:46.396612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.468 [2024-10-09 11:18:46.396623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.468 qpair failed and we were unable to recover it. 00:38:26.468 [2024-10-09 11:18:46.396946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.468 [2024-10-09 11:18:46.396956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.468 qpair failed and we were unable to recover it. 00:38:26.468 [2024-10-09 11:18:46.397268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.468 [2024-10-09 11:18:46.397278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.468 qpair failed and we were unable to recover it. 00:38:26.468 [2024-10-09 11:18:46.397600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.468 [2024-10-09 11:18:46.397610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.468 qpair failed and we were unable to recover it. 00:38:26.468 [2024-10-09 11:18:46.397948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.468 [2024-10-09 11:18:46.397958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.468 qpair failed and we were unable to recover it. 00:38:26.468 [2024-10-09 11:18:46.398264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.468 [2024-10-09 11:18:46.398274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.468 qpair failed and we were unable to recover it. 00:38:26.468 [2024-10-09 11:18:46.398639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.468 [2024-10-09 11:18:46.398649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.468 qpair failed and we were unable to recover it. 00:38:26.468 [2024-10-09 11:18:46.398935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.468 [2024-10-09 11:18:46.398946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.468 qpair failed and we were unable to recover it. 00:38:26.468 [2024-10-09 11:18:46.399250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.468 [2024-10-09 11:18:46.399259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.468 qpair failed and we were unable to recover it. 00:38:26.468 [2024-10-09 11:18:46.399533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.468 [2024-10-09 11:18:46.399543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.468 qpair failed and we were unable to recover it. 
00:38:26.468 [2024-10-09 11:18:46.399839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.468 [2024-10-09 11:18:46.399849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.468 qpair failed and we were unable to recover it. 00:38:26.468 [2024-10-09 11:18:46.400246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.468 [2024-10-09 11:18:46.400255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.468 qpair failed and we were unable to recover it. 00:38:26.468 [2024-10-09 11:18:46.400552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.468 [2024-10-09 11:18:46.400562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.468 qpair failed and we were unable to recover it. 00:38:26.468 [2024-10-09 11:18:46.400885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.468 [2024-10-09 11:18:46.400894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.468 qpair failed and we were unable to recover it. 00:38:26.468 [2024-10-09 11:18:46.401201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.468 [2024-10-09 11:18:46.401211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.468 qpair failed and we were unable to recover it. 00:38:26.468 [2024-10-09 11:18:46.401379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.468 [2024-10-09 11:18:46.401390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.468 qpair failed and we were unable to recover it. 00:38:26.468 [2024-10-09 11:18:46.401673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.468 [2024-10-09 11:18:46.401683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.468 qpair failed and we were unable to recover it. 00:38:26.468 [2024-10-09 11:18:46.402009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.468 [2024-10-09 11:18:46.402021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.468 qpair failed and we were unable to recover it. 00:38:26.468 [2024-10-09 11:18:46.402340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.468 [2024-10-09 11:18:46.402351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.468 qpair failed and we were unable to recover it. 00:38:26.468 [2024-10-09 11:18:46.402627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.468 [2024-10-09 11:18:46.402637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.468 qpair failed and we were unable to recover it. 
00:38:26.468 [2024-10-09 11:18:46.402929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.468 [2024-10-09 11:18:46.402938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.468 qpair failed and we were unable to recover it. 00:38:26.468 [2024-10-09 11:18:46.403242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.468 [2024-10-09 11:18:46.403252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.468 qpair failed and we were unable to recover it. 00:38:26.468 [2024-10-09 11:18:46.403527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.468 [2024-10-09 11:18:46.403537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.468 qpair failed and we were unable to recover it. 00:38:26.468 [2024-10-09 11:18:46.403832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.468 [2024-10-09 11:18:46.403849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.468 qpair failed and we were unable to recover it. 00:38:26.468 [2024-10-09 11:18:46.404162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.468 [2024-10-09 11:18:46.404171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.468 qpair failed and we were unable to recover it. 00:38:26.468 [2024-10-09 11:18:46.404483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.468 [2024-10-09 11:18:46.404494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.468 qpair failed and we were unable to recover it. 00:38:26.468 [2024-10-09 11:18:46.404802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.468 [2024-10-09 11:18:46.404812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.468 qpair failed and we were unable to recover it. 00:38:26.468 [2024-10-09 11:18:46.405126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.468 [2024-10-09 11:18:46.405136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.468 qpair failed and we were unable to recover it. 00:38:26.468 [2024-10-09 11:18:46.405445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.468 [2024-10-09 11:18:46.405455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.468 qpair failed and we were unable to recover it. 00:38:26.468 [2024-10-09 11:18:46.405746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.468 [2024-10-09 11:18:46.405756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.468 qpair failed and we were unable to recover it. 
00:38:26.468 [2024-10-09 11:18:46.406050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.468 [2024-10-09 11:18:46.406060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.468 qpair failed and we were unable to recover it. 00:38:26.468 [2024-10-09 11:18:46.406357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.468 [2024-10-09 11:18:46.406367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.468 qpair failed and we were unable to recover it. 00:38:26.468 [2024-10-09 11:18:46.406675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.468 [2024-10-09 11:18:46.406686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.468 qpair failed and we were unable to recover it. 00:38:26.468 [2024-10-09 11:18:46.406868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.468 [2024-10-09 11:18:46.406878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.468 qpair failed and we were unable to recover it. 00:38:26.468 [2024-10-09 11:18:46.407166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.468 [2024-10-09 11:18:46.407175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.468 qpair failed and we were unable to recover it. 00:38:26.468 [2024-10-09 11:18:46.407474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.468 [2024-10-09 11:18:46.407484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.468 qpair failed and we were unable to recover it. 00:38:26.468 [2024-10-09 11:18:46.407767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.468 [2024-10-09 11:18:46.407776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.468 qpair failed and we were unable to recover it. 00:38:26.468 [2024-10-09 11:18:46.408078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.468 [2024-10-09 11:18:46.408088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.468 qpair failed and we were unable to recover it. 00:38:26.468 [2024-10-09 11:18:46.408392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.468 [2024-10-09 11:18:46.408401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.468 qpair failed and we were unable to recover it. 00:38:26.468 [2024-10-09 11:18:46.408704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.469 [2024-10-09 11:18:46.408714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.469 qpair failed and we were unable to recover it. 
00:38:26.469 [2024-10-09 11:18:46.409033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.469 [2024-10-09 11:18:46.409042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.469 qpair failed and we were unable to recover it. 00:38:26.469 [2024-10-09 11:18:46.409323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.469 [2024-10-09 11:18:46.409334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.469 qpair failed and we were unable to recover it. 00:38:26.469 [2024-10-09 11:18:46.409646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.469 [2024-10-09 11:18:46.409656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.469 qpair failed and we were unable to recover it. 00:38:26.469 [2024-10-09 11:18:46.409937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.469 [2024-10-09 11:18:46.409946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.469 qpair failed and we were unable to recover it. 00:38:26.469 [2024-10-09 11:18:46.410261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.469 [2024-10-09 11:18:46.410274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.469 qpair failed and we were unable to recover it. 00:38:26.469 [2024-10-09 11:18:46.410580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.469 [2024-10-09 11:18:46.410590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.469 qpair failed and we were unable to recover it. 00:38:26.469 [2024-10-09 11:18:46.410874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.469 [2024-10-09 11:18:46.410885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.469 qpair failed and we were unable to recover it. 00:38:26.469 [2024-10-09 11:18:46.411161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.469 [2024-10-09 11:18:46.411171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.469 qpair failed and we were unable to recover it. 00:38:26.469 [2024-10-09 11:18:46.411475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.469 [2024-10-09 11:18:46.411485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.469 qpair failed and we were unable to recover it. 00:38:26.469 [2024-10-09 11:18:46.411828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.469 [2024-10-09 11:18:46.411838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.469 qpair failed and we were unable to recover it. 
00:38:26.469 [2024-10-09 11:18:46.412142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.469 [2024-10-09 11:18:46.412151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.469 qpair failed and we were unable to recover it. 00:38:26.469 [2024-10-09 11:18:46.412437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.469 [2024-10-09 11:18:46.412446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.469 qpair failed and we were unable to recover it. 00:38:26.469 [2024-10-09 11:18:46.412734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.469 [2024-10-09 11:18:46.412753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.469 qpair failed and we were unable to recover it. 00:38:26.469 [2024-10-09 11:18:46.413064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.469 [2024-10-09 11:18:46.413074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.469 qpair failed and we were unable to recover it. 00:38:26.469 [2024-10-09 11:18:46.413450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.469 [2024-10-09 11:18:46.413460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.469 qpair failed and we were unable to recover it. 00:38:26.469 [2024-10-09 11:18:46.413778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.469 [2024-10-09 11:18:46.413788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.469 qpair failed and we were unable to recover it. 00:38:26.469 [2024-10-09 11:18:46.413978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.469 [2024-10-09 11:18:46.413987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.469 qpair failed and we were unable to recover it. 00:38:26.469 [2024-10-09 11:18:46.414323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.469 [2024-10-09 11:18:46.414333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.469 qpair failed and we were unable to recover it. 00:38:26.469 [2024-10-09 11:18:46.414663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.469 [2024-10-09 11:18:46.414673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.469 qpair failed and we were unable to recover it. 00:38:26.469 [2024-10-09 11:18:46.414967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.469 [2024-10-09 11:18:46.414978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.469 qpair failed and we were unable to recover it. 
00:38:26.469 [2024-10-09 11:18:46.415321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.469 [2024-10-09 11:18:46.415332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.469 qpair failed and we were unable to recover it. 00:38:26.469 [2024-10-09 11:18:46.415628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.469 [2024-10-09 11:18:46.415638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.469 qpair failed and we were unable to recover it. 00:38:26.469 [2024-10-09 11:18:46.415951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.469 [2024-10-09 11:18:46.415961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.469 qpair failed and we were unable to recover it. 00:38:26.469 [2024-10-09 11:18:46.416317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.469 [2024-10-09 11:18:46.416327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.469 qpair failed and we were unable to recover it. 00:38:26.469 [2024-10-09 11:18:46.416644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.469 [2024-10-09 11:18:46.416655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.469 qpair failed and we were unable to recover it. 00:38:26.469 [2024-10-09 11:18:46.416964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.469 [2024-10-09 11:18:46.416973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.469 qpair failed and we were unable to recover it. 00:38:26.469 [2024-10-09 11:18:46.417273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.469 [2024-10-09 11:18:46.417283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.469 qpair failed and we were unable to recover it. 00:38:26.469 [2024-10-09 11:18:46.417598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.469 [2024-10-09 11:18:46.417608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.469 qpair failed and we were unable to recover it. 00:38:26.469 [2024-10-09 11:18:46.417917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.469 [2024-10-09 11:18:46.417927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.469 qpair failed and we were unable to recover it. 00:38:26.469 [2024-10-09 11:18:46.418206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.469 [2024-10-09 11:18:46.418217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.469 qpair failed and we were unable to recover it. 
00:38:26.469 [2024-10-09 11:18:46.418519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.469 [2024-10-09 11:18:46.418529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.469 qpair failed and we were unable to recover it. 00:38:26.469 [2024-10-09 11:18:46.418824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.469 [2024-10-09 11:18:46.418834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.469 qpair failed and we were unable to recover it. 00:38:26.469 [2024-10-09 11:18:46.419146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.469 [2024-10-09 11:18:46.419156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.469 qpair failed and we were unable to recover it. 00:38:26.469 [2024-10-09 11:18:46.419464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.469 [2024-10-09 11:18:46.419477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.469 qpair failed and we were unable to recover it. 00:38:26.469 [2024-10-09 11:18:46.419790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.469 [2024-10-09 11:18:46.419799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.469 qpair failed and we were unable to recover it. 00:38:26.469 [2024-10-09 11:18:46.420080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.469 [2024-10-09 11:18:46.420090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.469 qpair failed and we were unable to recover it. 00:38:26.469 [2024-10-09 11:18:46.420393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.469 [2024-10-09 11:18:46.420402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.469 qpair failed and we were unable to recover it. 00:38:26.469 [2024-10-09 11:18:46.420659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.469 [2024-10-09 11:18:46.420670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.469 qpair failed and we were unable to recover it. 00:38:26.469 [2024-10-09 11:18:46.420991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.469 [2024-10-09 11:18:46.421001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.469 qpair failed and we were unable to recover it. 00:38:26.469 [2024-10-09 11:18:46.421282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.469 [2024-10-09 11:18:46.421292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.469 qpair failed and we were unable to recover it. 
00:38:26.469 [2024-10-09 11:18:46.421502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.470 [2024-10-09 11:18:46.421512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420
00:38:26.470 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error / qpair failed sequence repeats for every subsequent reconnect attempt, from 11:18:46.421744 through 11:18:46.483898, all against tqpair=0x1520360 at 10.0.0.2, port=4420 ...]
00:38:26.750 [2024-10-09 11:18:46.484196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.750 [2024-10-09 11:18:46.484206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420
00:38:26.750 qpair failed and we were unable to recover it.
00:38:26.750 [2024-10-09 11:18:46.484516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.750 [2024-10-09 11:18:46.484527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.750 qpair failed and we were unable to recover it. 00:38:26.750 [2024-10-09 11:18:46.484703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.750 [2024-10-09 11:18:46.484714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.750 qpair failed and we were unable to recover it. 00:38:26.750 [2024-10-09 11:18:46.484988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.750 [2024-10-09 11:18:46.484997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.750 qpair failed and we were unable to recover it. 00:38:26.750 [2024-10-09 11:18:46.485310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.750 [2024-10-09 11:18:46.485320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.750 qpair failed and we were unable to recover it. 00:38:26.750 [2024-10-09 11:18:46.485631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.750 [2024-10-09 11:18:46.485641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.750 qpair failed and we were unable to recover it. 00:38:26.750 [2024-10-09 11:18:46.485909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.750 [2024-10-09 11:18:46.485919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.750 qpair failed and we were unable to recover it. 00:38:26.750 [2024-10-09 11:18:46.486123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.750 [2024-10-09 11:18:46.486133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.750 qpair failed and we were unable to recover it. 00:38:26.750 [2024-10-09 11:18:46.486295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.750 [2024-10-09 11:18:46.486305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.750 qpair failed and we were unable to recover it. 00:38:26.750 [2024-10-09 11:18:46.486568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.750 [2024-10-09 11:18:46.486578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.750 qpair failed and we were unable to recover it. 00:38:26.750 [2024-10-09 11:18:46.486867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.750 [2024-10-09 11:18:46.486876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.750 qpair failed and we were unable to recover it. 
00:38:26.750 [2024-10-09 11:18:46.487179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.750 [2024-10-09 11:18:46.487188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.750 qpair failed and we were unable to recover it. 00:38:26.750 [2024-10-09 11:18:46.487495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.750 [2024-10-09 11:18:46.487506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.750 qpair failed and we were unable to recover it. 00:38:26.750 [2024-10-09 11:18:46.487808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.750 [2024-10-09 11:18:46.487824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.751 qpair failed and we were unable to recover it. 00:38:26.751 [2024-10-09 11:18:46.488164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.751 [2024-10-09 11:18:46.488174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.751 qpair failed and we were unable to recover it. 00:38:26.751 [2024-10-09 11:18:46.488490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.751 [2024-10-09 11:18:46.488501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.751 qpair failed and we were unable to recover it. 00:38:26.751 [2024-10-09 11:18:46.488802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.751 [2024-10-09 11:18:46.488813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.751 qpair failed and we were unable to recover it. 00:38:26.751 [2024-10-09 11:18:46.489121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.751 [2024-10-09 11:18:46.489132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.751 qpair failed and we were unable to recover it. 00:38:26.751 [2024-10-09 11:18:46.489426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.751 [2024-10-09 11:18:46.489436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.751 qpair failed and we were unable to recover it. 00:38:26.751 [2024-10-09 11:18:46.489747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.751 [2024-10-09 11:18:46.489757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.751 qpair failed and we were unable to recover it. 00:38:26.751 [2024-10-09 11:18:46.490055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.751 [2024-10-09 11:18:46.490065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.751 qpair failed and we were unable to recover it. 
00:38:26.751 [2024-10-09 11:18:46.490434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.751 [2024-10-09 11:18:46.490446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.751 qpair failed and we were unable to recover it. 00:38:26.751 [2024-10-09 11:18:46.490705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.751 [2024-10-09 11:18:46.490716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.751 qpair failed and we were unable to recover it. 00:38:26.751 [2024-10-09 11:18:46.491019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.751 [2024-10-09 11:18:46.491029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.751 qpair failed and we were unable to recover it. 00:38:26.751 [2024-10-09 11:18:46.491309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.751 [2024-10-09 11:18:46.491320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.751 qpair failed and we were unable to recover it. 00:38:26.751 [2024-10-09 11:18:46.491508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.751 [2024-10-09 11:18:46.491519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.751 qpair failed and we were unable to recover it. 00:38:26.751 [2024-10-09 11:18:46.491894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.751 [2024-10-09 11:18:46.491904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.751 qpair failed and we were unable to recover it. 00:38:26.751 [2024-10-09 11:18:46.492114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.751 [2024-10-09 11:18:46.492123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.751 qpair failed and we were unable to recover it. 00:38:26.751 [2024-10-09 11:18:46.492341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.751 [2024-10-09 11:18:46.492352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.751 qpair failed and we were unable to recover it. 00:38:26.751 [2024-10-09 11:18:46.492564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.751 [2024-10-09 11:18:46.492575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.751 qpair failed and we were unable to recover it. 00:38:26.751 [2024-10-09 11:18:46.492880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.751 [2024-10-09 11:18:46.492890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.751 qpair failed and we were unable to recover it. 
00:38:26.751 [2024-10-09 11:18:46.493210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.751 [2024-10-09 11:18:46.493220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.751 qpair failed and we were unable to recover it. 00:38:26.751 [2024-10-09 11:18:46.493408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.751 [2024-10-09 11:18:46.493419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.751 qpair failed and we were unable to recover it. 00:38:26.751 [2024-10-09 11:18:46.493697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.751 [2024-10-09 11:18:46.493707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.751 qpair failed and we were unable to recover it. 00:38:26.751 [2024-10-09 11:18:46.494004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.751 [2024-10-09 11:18:46.494014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.751 qpair failed and we were unable to recover it. 00:38:26.751 [2024-10-09 11:18:46.494318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.751 [2024-10-09 11:18:46.494330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.751 qpair failed and we were unable to recover it. 00:38:26.751 [2024-10-09 11:18:46.494616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.751 [2024-10-09 11:18:46.494627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.751 qpair failed and we were unable to recover it. 00:38:26.751 [2024-10-09 11:18:46.494941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.751 [2024-10-09 11:18:46.494951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.751 qpair failed and we were unable to recover it. 00:38:26.751 [2024-10-09 11:18:46.495308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.751 [2024-10-09 11:18:46.495319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.751 qpair failed and we were unable to recover it. 00:38:26.751 [2024-10-09 11:18:46.495649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.751 [2024-10-09 11:18:46.495660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.751 qpair failed and we were unable to recover it. 00:38:26.751 [2024-10-09 11:18:46.495977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.751 [2024-10-09 11:18:46.495989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.751 qpair failed and we were unable to recover it. 
00:38:26.751 [2024-10-09 11:18:46.496290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.751 [2024-10-09 11:18:46.496300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.751 qpair failed and we were unable to recover it. 00:38:26.751 [2024-10-09 11:18:46.496590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.751 [2024-10-09 11:18:46.496600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.751 qpair failed and we were unable to recover it. 00:38:26.751 [2024-10-09 11:18:46.496905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.751 [2024-10-09 11:18:46.496916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.751 qpair failed and we were unable to recover it. 00:38:26.751 [2024-10-09 11:18:46.497235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.751 [2024-10-09 11:18:46.497245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.751 qpair failed and we were unable to recover it. 00:38:26.751 [2024-10-09 11:18:46.497354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.751 [2024-10-09 11:18:46.497364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.751 qpair failed and we were unable to recover it. 00:38:26.751 [2024-10-09 11:18:46.497629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.751 [2024-10-09 11:18:46.497640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.751 qpair failed and we were unable to recover it. 00:38:26.751 [2024-10-09 11:18:46.497922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.751 [2024-10-09 11:18:46.497931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.751 qpair failed and we were unable to recover it. 00:38:26.751 [2024-10-09 11:18:46.498308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.751 [2024-10-09 11:18:46.498318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.751 qpair failed and we were unable to recover it. 00:38:26.751 [2024-10-09 11:18:46.498615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.751 [2024-10-09 11:18:46.498626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.751 qpair failed and we were unable to recover it. 00:38:26.751 [2024-10-09 11:18:46.498977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.751 [2024-10-09 11:18:46.498987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.751 qpair failed and we were unable to recover it. 
00:38:26.751 [2024-10-09 11:18:46.499268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.751 [2024-10-09 11:18:46.499278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.751 qpair failed and we were unable to recover it. 00:38:26.751 [2024-10-09 11:18:46.499483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.751 [2024-10-09 11:18:46.499495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.751 qpair failed and we were unable to recover it. 00:38:26.751 [2024-10-09 11:18:46.499714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.751 [2024-10-09 11:18:46.499724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.751 qpair failed and we were unable to recover it. 00:38:26.751 [2024-10-09 11:18:46.500038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.752 [2024-10-09 11:18:46.500048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.752 qpair failed and we were unable to recover it. 00:38:26.752 [2024-10-09 11:18:46.500378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.752 [2024-10-09 11:18:46.500388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.752 qpair failed and we were unable to recover it. 00:38:26.752 [2024-10-09 11:18:46.500689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.752 [2024-10-09 11:18:46.500701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.752 qpair failed and we were unable to recover it. 00:38:26.752 [2024-10-09 11:18:46.501004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.752 [2024-10-09 11:18:46.501015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.752 qpair failed and we were unable to recover it. 00:38:26.752 [2024-10-09 11:18:46.501203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.752 [2024-10-09 11:18:46.501214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.752 qpair failed and we were unable to recover it. 00:38:26.752 [2024-10-09 11:18:46.501416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.752 [2024-10-09 11:18:46.501426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.752 qpair failed and we were unable to recover it. 00:38:26.752 [2024-10-09 11:18:46.501741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.752 [2024-10-09 11:18:46.501752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.752 qpair failed and we were unable to recover it. 
00:38:26.752 [2024-10-09 11:18:46.502077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.752 [2024-10-09 11:18:46.502088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.752 qpair failed and we were unable to recover it. 00:38:26.752 [2024-10-09 11:18:46.502293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.752 [2024-10-09 11:18:46.502303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.752 qpair failed and we were unable to recover it. 00:38:26.752 [2024-10-09 11:18:46.502528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.752 [2024-10-09 11:18:46.502539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.752 qpair failed and we were unable to recover it. 00:38:26.752 [2024-10-09 11:18:46.502877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.752 [2024-10-09 11:18:46.502889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.752 qpair failed and we were unable to recover it. 00:38:26.752 [2024-10-09 11:18:46.503202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.752 [2024-10-09 11:18:46.503212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.752 qpair failed and we were unable to recover it. 00:38:26.752 [2024-10-09 11:18:46.503519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.752 [2024-10-09 11:18:46.503531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.752 qpair failed and we were unable to recover it. 00:38:26.752 [2024-10-09 11:18:46.503860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.752 [2024-10-09 11:18:46.503874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.752 qpair failed and we were unable to recover it. 00:38:26.752 [2024-10-09 11:18:46.504212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.752 [2024-10-09 11:18:46.504224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.752 qpair failed and we were unable to recover it. 00:38:26.752 [2024-10-09 11:18:46.504531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.752 [2024-10-09 11:18:46.504542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.752 qpair failed and we were unable to recover it. 00:38:26.752 [2024-10-09 11:18:46.504842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.752 [2024-10-09 11:18:46.504853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.752 qpair failed and we were unable to recover it. 
00:38:26.752 [2024-10-09 11:18:46.505174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.752 [2024-10-09 11:18:46.505184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.752 qpair failed and we were unable to recover it. 00:38:26.752 [2024-10-09 11:18:46.505490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.752 [2024-10-09 11:18:46.505500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.752 qpair failed and we were unable to recover it. 00:38:26.752 [2024-10-09 11:18:46.505754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.752 [2024-10-09 11:18:46.505765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.752 qpair failed and we were unable to recover it. 00:38:26.752 [2024-10-09 11:18:46.506074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.752 [2024-10-09 11:18:46.506085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.752 qpair failed and we were unable to recover it. 00:38:26.752 [2024-10-09 11:18:46.506294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.752 [2024-10-09 11:18:46.506305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.752 qpair failed and we were unable to recover it. 00:38:26.752 [2024-10-09 11:18:46.506693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.752 [2024-10-09 11:18:46.506704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.752 qpair failed and we were unable to recover it. 00:38:26.752 [2024-10-09 11:18:46.506980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.752 [2024-10-09 11:18:46.506991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.752 qpair failed and we were unable to recover it. 00:38:26.752 [2024-10-09 11:18:46.507314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.752 [2024-10-09 11:18:46.507325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.752 qpair failed and we were unable to recover it. 00:38:26.752 [2024-10-09 11:18:46.507689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.752 [2024-10-09 11:18:46.507699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.752 qpair failed and we were unable to recover it. 00:38:26.752 [2024-10-09 11:18:46.508020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.752 [2024-10-09 11:18:46.508030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.752 qpair failed and we were unable to recover it. 
00:38:26.752 [2024-10-09 11:18:46.508334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.752 [2024-10-09 11:18:46.508344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.752 qpair failed and we were unable to recover it. 00:38:26.752 [2024-10-09 11:18:46.508667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.752 [2024-10-09 11:18:46.508678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.752 qpair failed and we were unable to recover it. 00:38:26.752 [2024-10-09 11:18:46.508977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.752 [2024-10-09 11:18:46.508988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.752 qpair failed and we were unable to recover it. 00:38:26.752 [2024-10-09 11:18:46.509294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.752 [2024-10-09 11:18:46.509304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.752 qpair failed and we were unable to recover it. 00:38:26.752 [2024-10-09 11:18:46.509610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.752 [2024-10-09 11:18:46.509621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.752 qpair failed and we were unable to recover it. 00:38:26.752 [2024-10-09 11:18:46.509925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.752 [2024-10-09 11:18:46.509936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.752 qpair failed and we were unable to recover it. 00:38:26.752 [2024-10-09 11:18:46.510215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.752 [2024-10-09 11:18:46.510226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.752 qpair failed and we were unable to recover it. 00:38:26.752 [2024-10-09 11:18:46.510529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.752 [2024-10-09 11:18:46.510540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.752 qpair failed and we were unable to recover it. 00:38:26.752 [2024-10-09 11:18:46.510866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.752 [2024-10-09 11:18:46.510877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.752 qpair failed and we were unable to recover it. 00:38:26.752 [2024-10-09 11:18:46.511182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.752 [2024-10-09 11:18:46.511193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.752 qpair failed and we were unable to recover it. 
00:38:26.752 [2024-10-09 11:18:46.511512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.752 [2024-10-09 11:18:46.511523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.752 qpair failed and we were unable to recover it. 00:38:26.752 [2024-10-09 11:18:46.511706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.752 [2024-10-09 11:18:46.511716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.752 qpair failed and we were unable to recover it. 00:38:26.752 [2024-10-09 11:18:46.512021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.752 [2024-10-09 11:18:46.512031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.752 qpair failed and we were unable to recover it. 00:38:26.752 [2024-10-09 11:18:46.512326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.752 [2024-10-09 11:18:46.512338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.752 qpair failed and we were unable to recover it. 00:38:26.753 [2024-10-09 11:18:46.512707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.753 [2024-10-09 11:18:46.512718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.753 qpair failed and we were unable to recover it. 00:38:26.753 [2024-10-09 11:18:46.513034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.753 [2024-10-09 11:18:46.513045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.753 qpair failed and we were unable to recover it. 00:38:26.753 [2024-10-09 11:18:46.513379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.753 [2024-10-09 11:18:46.513390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.753 qpair failed and we were unable to recover it. 00:38:26.753 [2024-10-09 11:18:46.513604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.753 [2024-10-09 11:18:46.513615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.753 qpair failed and we were unable to recover it. 00:38:26.753 [2024-10-09 11:18:46.513933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.753 [2024-10-09 11:18:46.513943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.753 qpair failed and we were unable to recover it. 00:38:26.753 [2024-10-09 11:18:46.514247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.753 [2024-10-09 11:18:46.514258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.753 qpair failed and we were unable to recover it. 
00:38:26.753 [2024-10-09 11:18:46.514580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.753 [2024-10-09 11:18:46.514591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.753 qpair failed and we were unable to recover it. 00:38:26.753 [2024-10-09 11:18:46.514908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.753 [2024-10-09 11:18:46.514918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.753 qpair failed and we were unable to recover it. 00:38:26.753 [2024-10-09 11:18:46.515198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.753 [2024-10-09 11:18:46.515209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.753 qpair failed and we were unable to recover it. 00:38:26.753 [2024-10-09 11:18:46.515395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.753 [2024-10-09 11:18:46.515406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.753 qpair failed and we were unable to recover it. 00:38:26.753 [2024-10-09 11:18:46.515758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.753 [2024-10-09 11:18:46.515769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.753 qpair failed and we were unable to recover it. 00:38:26.753 [2024-10-09 11:18:46.516078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.753 [2024-10-09 11:18:46.516088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.753 qpair failed and we were unable to recover it. 00:38:26.753 [2024-10-09 11:18:46.516282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.753 [2024-10-09 11:18:46.516292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.753 qpair failed and we were unable to recover it. 00:38:26.753 [2024-10-09 11:18:46.516515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.753 [2024-10-09 11:18:46.516526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.753 qpair failed and we were unable to recover it. 00:38:26.753 [2024-10-09 11:18:46.516853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.753 [2024-10-09 11:18:46.516864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.753 qpair failed and we were unable to recover it. 00:38:26.753 [2024-10-09 11:18:46.517157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.753 [2024-10-09 11:18:46.517167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.753 qpair failed and we were unable to recover it. 
00:38:26.753 [2024-10-09 11:18:46.517521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.753 [2024-10-09 11:18:46.517532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.753 qpair failed and we were unable to recover it. 00:38:26.753 [2024-10-09 11:18:46.517712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.753 [2024-10-09 11:18:46.517724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.753 qpair failed and we were unable to recover it. 00:38:26.753 [2024-10-09 11:18:46.518048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.753 [2024-10-09 11:18:46.518059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.753 qpair failed and we were unable to recover it. 00:38:26.753 [2024-10-09 11:18:46.518361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.753 [2024-10-09 11:18:46.518371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.753 qpair failed and we were unable to recover it. 00:38:26.753 [2024-10-09 11:18:46.518734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.753 [2024-10-09 11:18:46.518745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.753 qpair failed and we were unable to recover it. 00:38:26.753 [2024-10-09 11:18:46.519047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.753 [2024-10-09 11:18:46.519057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.753 qpair failed and we were unable to recover it. 00:38:26.753 [2024-10-09 11:18:46.519449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.753 [2024-10-09 11:18:46.519460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.753 qpair failed and we were unable to recover it. 00:38:26.753 [2024-10-09 11:18:46.519680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.753 [2024-10-09 11:18:46.519690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.753 qpair failed and we were unable to recover it. 00:38:26.753 [2024-10-09 11:18:46.520011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.753 [2024-10-09 11:18:46.520021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.753 qpair failed and we were unable to recover it. 00:38:26.753 [2024-10-09 11:18:46.520204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.753 [2024-10-09 11:18:46.520215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.753 qpair failed and we were unable to recover it. 
00:38:26.753 [2024-10-09 11:18:46.520519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.753 [2024-10-09 11:18:46.520530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.753 qpair failed and we were unable to recover it. 00:38:26.753 [2024-10-09 11:18:46.520829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.753 [2024-10-09 11:18:46.520839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.753 qpair failed and we were unable to recover it. 00:38:26.753 [2024-10-09 11:18:46.521146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.753 [2024-10-09 11:18:46.521156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.753 qpair failed and we were unable to recover it. 00:38:26.753 [2024-10-09 11:18:46.521464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.753 [2024-10-09 11:18:46.521478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.753 qpair failed and we were unable to recover it. 00:38:26.753 [2024-10-09 11:18:46.521784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.753 [2024-10-09 11:18:46.521793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.753 qpair failed and we were unable to recover it. 00:38:26.753 [2024-10-09 11:18:46.522098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.753 [2024-10-09 11:18:46.522108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.753 qpair failed and we were unable to recover it. 00:38:26.753 [2024-10-09 11:18:46.522395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.753 [2024-10-09 11:18:46.522405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.753 qpair failed and we were unable to recover it. 00:38:26.753 [2024-10-09 11:18:46.522577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.753 [2024-10-09 11:18:46.522588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.753 qpair failed and we were unable to recover it. 00:38:26.753 [2024-10-09 11:18:46.522788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.753 [2024-10-09 11:18:46.522798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.753 qpair failed and we were unable to recover it. 00:38:26.753 [2024-10-09 11:18:46.523077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.753 [2024-10-09 11:18:46.523097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.753 qpair failed and we were unable to recover it. 
00:38:26.753 [2024-10-09 11:18:46.523422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.753 [2024-10-09 11:18:46.523433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.753 qpair failed and we were unable to recover it.
[... the same error triplet (posix.c:1055:posix_sock_create: connect() failed, errno = 111 / nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats for each successive reconnect attempt with timestamps from 11:18:46.523747 through 11:18:46.584591 ...]
00:38:26.759 [2024-10-09 11:18:46.584868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.759 [2024-10-09 11:18:46.584877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.759 qpair failed and we were unable to recover it.
00:38:26.759 [2024-10-09 11:18:46.585175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.759 [2024-10-09 11:18:46.585184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.759 qpair failed and we were unable to recover it. 00:38:26.759 [2024-10-09 11:18:46.585481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.759 [2024-10-09 11:18:46.585491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.759 qpair failed and we were unable to recover it. 00:38:26.759 [2024-10-09 11:18:46.585804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.759 [2024-10-09 11:18:46.585813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.759 qpair failed and we were unable to recover it. 00:38:26.759 [2024-10-09 11:18:46.586097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.759 [2024-10-09 11:18:46.586107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.759 qpair failed and we were unable to recover it. 00:38:26.759 [2024-10-09 11:18:46.586417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.759 [2024-10-09 11:18:46.586427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.759 qpair failed and we were unable to recover it. 00:38:26.759 [2024-10-09 11:18:46.586736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.759 [2024-10-09 11:18:46.586746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.759 qpair failed and we were unable to recover it. 00:38:26.759 [2024-10-09 11:18:46.587061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.759 [2024-10-09 11:18:46.587070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.759 qpair failed and we were unable to recover it. 00:38:26.759 [2024-10-09 11:18:46.587375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.759 [2024-10-09 11:18:46.587385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.759 qpair failed and we were unable to recover it. 00:38:26.759 [2024-10-09 11:18:46.587683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.759 [2024-10-09 11:18:46.587693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.759 qpair failed and we were unable to recover it. 00:38:26.759 [2024-10-09 11:18:46.587997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.759 [2024-10-09 11:18:46.588007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.759 qpair failed and we were unable to recover it. 
00:38:26.759 [2024-10-09 11:18:46.588302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.759 [2024-10-09 11:18:46.588312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.759 qpair failed and we were unable to recover it. 00:38:26.759 [2024-10-09 11:18:46.588593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.759 [2024-10-09 11:18:46.588603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.759 qpair failed and we were unable to recover it. 00:38:26.759 [2024-10-09 11:18:46.588883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.759 [2024-10-09 11:18:46.588894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.759 qpair failed and we were unable to recover it. 00:38:26.759 [2024-10-09 11:18:46.589204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.759 [2024-10-09 11:18:46.589213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.759 qpair failed and we were unable to recover it. 00:38:26.759 [2024-10-09 11:18:46.589518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.759 [2024-10-09 11:18:46.589528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.759 qpair failed and we were unable to recover it. 00:38:26.759 [2024-10-09 11:18:46.589609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.759 [2024-10-09 11:18:46.589620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.759 qpair failed and we were unable to recover it. 00:38:26.759 [2024-10-09 11:18:46.589923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.759 [2024-10-09 11:18:46.589933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.759 qpair failed and we were unable to recover it. 00:38:26.759 [2024-10-09 11:18:46.590225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.759 [2024-10-09 11:18:46.590235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.759 qpair failed and we were unable to recover it. 00:38:26.759 [2024-10-09 11:18:46.590528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.759 [2024-10-09 11:18:46.590539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.759 qpair failed and we were unable to recover it. 00:38:26.759 [2024-10-09 11:18:46.590848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.759 [2024-10-09 11:18:46.590857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.759 qpair failed and we were unable to recover it. 
00:38:26.759 [2024-10-09 11:18:46.591142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.759 [2024-10-09 11:18:46.591152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.759 qpair failed and we were unable to recover it. 00:38:26.759 [2024-10-09 11:18:46.591468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.759 [2024-10-09 11:18:46.591478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.759 qpair failed and we were unable to recover it. 00:38:26.759 [2024-10-09 11:18:46.591783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.759 [2024-10-09 11:18:46.591793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.759 qpair failed and we were unable to recover it. 00:38:26.759 [2024-10-09 11:18:46.592114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.759 [2024-10-09 11:18:46.592124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.759 qpair failed and we were unable to recover it. 00:38:26.759 [2024-10-09 11:18:46.592398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.759 [2024-10-09 11:18:46.592407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.759 qpair failed and we were unable to recover it. 00:38:26.759 [2024-10-09 11:18:46.592733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.759 [2024-10-09 11:18:46.592743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.759 qpair failed and we were unable to recover it. 00:38:26.759 [2024-10-09 11:18:46.593049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.759 [2024-10-09 11:18:46.593059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.759 qpair failed and we were unable to recover it. 00:38:26.759 [2024-10-09 11:18:46.593368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.759 [2024-10-09 11:18:46.593378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.759 qpair failed and we were unable to recover it. 00:38:26.759 [2024-10-09 11:18:46.593676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.759 [2024-10-09 11:18:46.593686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.759 qpair failed and we were unable to recover it. 00:38:26.759 [2024-10-09 11:18:46.593996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.760 [2024-10-09 11:18:46.594006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.760 qpair failed and we were unable to recover it. 
00:38:26.760 [2024-10-09 11:18:46.594304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.760 [2024-10-09 11:18:46.594314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.760 qpair failed and we were unable to recover it. 00:38:26.760 [2024-10-09 11:18:46.594509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.760 [2024-10-09 11:18:46.594519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.760 qpair failed and we were unable to recover it. 00:38:26.760 [2024-10-09 11:18:46.594840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.760 [2024-10-09 11:18:46.594850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.760 qpair failed and we were unable to recover it. 00:38:26.760 [2024-10-09 11:18:46.595157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.760 [2024-10-09 11:18:46.595167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.760 qpair failed and we were unable to recover it. 00:38:26.760 [2024-10-09 11:18:46.595476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.760 [2024-10-09 11:18:46.595486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.760 qpair failed and we were unable to recover it. 00:38:26.760 [2024-10-09 11:18:46.595805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.760 [2024-10-09 11:18:46.595816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.760 qpair failed and we were unable to recover it. 00:38:26.760 [2024-10-09 11:18:46.596094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.760 [2024-10-09 11:18:46.596104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.760 qpair failed and we were unable to recover it. 00:38:26.760 [2024-10-09 11:18:46.596321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.760 [2024-10-09 11:18:46.596331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.760 qpair failed and we were unable to recover it. 00:38:26.760 [2024-10-09 11:18:46.596636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.760 [2024-10-09 11:18:46.596648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.760 qpair failed and we were unable to recover it. 00:38:26.760 [2024-10-09 11:18:46.596940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.760 [2024-10-09 11:18:46.596949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.760 qpair failed and we were unable to recover it. 
00:38:26.760 [2024-10-09 11:18:46.597264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.760 [2024-10-09 11:18:46.597273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.760 qpair failed and we were unable to recover it. 00:38:26.760 [2024-10-09 11:18:46.597583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.760 [2024-10-09 11:18:46.597593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.760 qpair failed and we were unable to recover it. 00:38:26.760 [2024-10-09 11:18:46.597906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.760 [2024-10-09 11:18:46.597916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.760 qpair failed and we were unable to recover it. 00:38:26.760 [2024-10-09 11:18:46.598305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.760 [2024-10-09 11:18:46.598315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.760 qpair failed and we were unable to recover it. 00:38:26.760 [2024-10-09 11:18:46.598619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.760 [2024-10-09 11:18:46.598629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.760 qpair failed and we were unable to recover it. 00:38:26.760 [2024-10-09 11:18:46.598951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.760 [2024-10-09 11:18:46.598961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.760 qpair failed and we were unable to recover it. 00:38:26.760 [2024-10-09 11:18:46.599250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.760 [2024-10-09 11:18:46.599261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.760 qpair failed and we were unable to recover it. 00:38:26.760 [2024-10-09 11:18:46.599565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.760 [2024-10-09 11:18:46.599575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.760 qpair failed and we were unable to recover it. 00:38:26.760 [2024-10-09 11:18:46.599871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.760 [2024-10-09 11:18:46.599881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.760 qpair failed and we were unable to recover it. 00:38:26.760 [2024-10-09 11:18:46.600066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.760 [2024-10-09 11:18:46.600077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.760 qpair failed and we were unable to recover it. 
00:38:26.760 [2024-10-09 11:18:46.600405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.760 [2024-10-09 11:18:46.600415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.760 qpair failed and we were unable to recover it. 00:38:26.760 [2024-10-09 11:18:46.600749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.760 [2024-10-09 11:18:46.600759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.760 qpair failed and we were unable to recover it. 00:38:26.760 [2024-10-09 11:18:46.601086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.760 [2024-10-09 11:18:46.601097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.760 qpair failed and we were unable to recover it. 00:38:26.760 [2024-10-09 11:18:46.601400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.760 [2024-10-09 11:18:46.601409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.760 qpair failed and we were unable to recover it. 00:38:26.760 [2024-10-09 11:18:46.601692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.760 [2024-10-09 11:18:46.601703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.760 qpair failed and we were unable to recover it. 00:38:26.760 [2024-10-09 11:18:46.602011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.760 [2024-10-09 11:18:46.602021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.760 qpair failed and we were unable to recover it. 00:38:26.760 [2024-10-09 11:18:46.602372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.760 [2024-10-09 11:18:46.602382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.760 qpair failed and we were unable to recover it. 00:38:26.760 [2024-10-09 11:18:46.602712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.760 [2024-10-09 11:18:46.602722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.760 qpair failed and we were unable to recover it. 00:38:26.760 [2024-10-09 11:18:46.603030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.760 [2024-10-09 11:18:46.603040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.760 qpair failed and we were unable to recover it. 00:38:26.760 [2024-10-09 11:18:46.603198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.760 [2024-10-09 11:18:46.603209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.760 qpair failed and we were unable to recover it. 
00:38:26.760 [2024-10-09 11:18:46.603500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.760 [2024-10-09 11:18:46.603510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.760 qpair failed and we were unable to recover it. 00:38:26.760 [2024-10-09 11:18:46.603810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.760 [2024-10-09 11:18:46.603820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.760 qpair failed and we were unable to recover it. 00:38:26.760 [2024-10-09 11:18:46.604138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.760 [2024-10-09 11:18:46.604149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.760 qpair failed and we were unable to recover it. 00:38:26.760 [2024-10-09 11:18:46.604456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.760 [2024-10-09 11:18:46.604469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.760 qpair failed and we were unable to recover it. 00:38:26.760 [2024-10-09 11:18:46.604763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.760 [2024-10-09 11:18:46.604773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.760 qpair failed and we were unable to recover it. 00:38:26.760 [2024-10-09 11:18:46.604974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.760 [2024-10-09 11:18:46.604987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.760 qpair failed and we were unable to recover it. 00:38:26.760 [2024-10-09 11:18:46.605301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.760 [2024-10-09 11:18:46.605311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.760 qpair failed and we were unable to recover it. 00:38:26.760 [2024-10-09 11:18:46.605609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.760 [2024-10-09 11:18:46.605620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.760 qpair failed and we were unable to recover it. 00:38:26.760 [2024-10-09 11:18:46.605935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.760 [2024-10-09 11:18:46.605945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.760 qpair failed and we were unable to recover it. 00:38:26.760 [2024-10-09 11:18:46.606250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.760 [2024-10-09 11:18:46.606260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.760 qpair failed and we were unable to recover it. 
00:38:26.761 [2024-10-09 11:18:46.606533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.761 [2024-10-09 11:18:46.606543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.761 qpair failed and we were unable to recover it. 00:38:26.761 [2024-10-09 11:18:46.606741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.761 [2024-10-09 11:18:46.606751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.761 qpair failed and we were unable to recover it. 00:38:26.761 [2024-10-09 11:18:46.606964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.761 [2024-10-09 11:18:46.606974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.761 qpair failed and we were unable to recover it. 00:38:26.761 [2024-10-09 11:18:46.607280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.761 [2024-10-09 11:18:46.607290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.761 qpair failed and we were unable to recover it. 00:38:26.761 [2024-10-09 11:18:46.607603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.761 [2024-10-09 11:18:46.607614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.761 qpair failed and we were unable to recover it. 00:38:26.761 [2024-10-09 11:18:46.607911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.761 [2024-10-09 11:18:46.607921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.761 qpair failed and we were unable to recover it. 00:38:26.761 [2024-10-09 11:18:46.608196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.761 [2024-10-09 11:18:46.608206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.761 qpair failed and we were unable to recover it. 00:38:26.761 [2024-10-09 11:18:46.608516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.761 [2024-10-09 11:18:46.608527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.761 qpair failed and we were unable to recover it. 00:38:26.761 [2024-10-09 11:18:46.608817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.761 [2024-10-09 11:18:46.608826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.761 qpair failed and we were unable to recover it. 00:38:26.761 [2024-10-09 11:18:46.609129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.761 [2024-10-09 11:18:46.609138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.761 qpair failed and we were unable to recover it. 
00:38:26.761 [2024-10-09 11:18:46.609422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.761 [2024-10-09 11:18:46.609432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.761 qpair failed and we were unable to recover it. 00:38:26.761 [2024-10-09 11:18:46.609745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.761 [2024-10-09 11:18:46.609755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.761 qpair failed and we were unable to recover it. 00:38:26.761 [2024-10-09 11:18:46.610067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.761 [2024-10-09 11:18:46.610077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.761 qpair failed and we were unable to recover it. 00:38:26.761 [2024-10-09 11:18:46.610259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.761 [2024-10-09 11:18:46.610270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.761 qpair failed and we were unable to recover it. 00:38:26.761 [2024-10-09 11:18:46.610563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.761 [2024-10-09 11:18:46.610573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.761 qpair failed and we were unable to recover it. 00:38:26.761 [2024-10-09 11:18:46.610800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.761 [2024-10-09 11:18:46.610811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.761 qpair failed and we were unable to recover it. 00:38:26.761 [2024-10-09 11:18:46.611108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.761 [2024-10-09 11:18:46.611118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.761 qpair failed and we were unable to recover it. 00:38:26.761 [2024-10-09 11:18:46.611506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.761 [2024-10-09 11:18:46.611516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.761 qpair failed and we were unable to recover it. 00:38:26.761 [2024-10-09 11:18:46.611805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.761 [2024-10-09 11:18:46.611814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.761 qpair failed and we were unable to recover it. 00:38:26.761 [2024-10-09 11:18:46.612118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.761 [2024-10-09 11:18:46.612128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.761 qpair failed and we were unable to recover it. 
00:38:26.761 [2024-10-09 11:18:46.612426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.761 [2024-10-09 11:18:46.612436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.761 qpair failed and we were unable to recover it. 00:38:26.761 [2024-10-09 11:18:46.612742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.761 [2024-10-09 11:18:46.612753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.761 qpair failed and we were unable to recover it. 00:38:26.761 [2024-10-09 11:18:46.613029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.761 [2024-10-09 11:18:46.613038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.761 qpair failed and we were unable to recover it. 00:38:26.761 [2024-10-09 11:18:46.613343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.761 [2024-10-09 11:18:46.613353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.761 qpair failed and we were unable to recover it. 00:38:26.761 [2024-10-09 11:18:46.613679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.761 [2024-10-09 11:18:46.613689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.761 qpair failed and we were unable to recover it. 00:38:26.761 [2024-10-09 11:18:46.614077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.761 [2024-10-09 11:18:46.614087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.761 qpair failed and we were unable to recover it. 00:38:26.761 [2024-10-09 11:18:46.614398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.761 [2024-10-09 11:18:46.614408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.761 qpair failed and we were unable to recover it. 00:38:26.761 [2024-10-09 11:18:46.614746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.761 [2024-10-09 11:18:46.614756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.761 qpair failed and we were unable to recover it. 00:38:26.761 [2024-10-09 11:18:46.615058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.761 [2024-10-09 11:18:46.615068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.761 qpair failed and we were unable to recover it. 00:38:26.761 [2024-10-09 11:18:46.615429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.761 [2024-10-09 11:18:46.615439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.761 qpair failed and we were unable to recover it. 
00:38:26.761 [2024-10-09 11:18:46.615735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.761 [2024-10-09 11:18:46.615746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.761 qpair failed and we were unable to recover it. 00:38:26.761 [2024-10-09 11:18:46.616090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.761 [2024-10-09 11:18:46.616100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.761 qpair failed and we were unable to recover it. 00:38:26.761 [2024-10-09 11:18:46.616286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.761 [2024-10-09 11:18:46.616296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.761 qpair failed and we were unable to recover it. 00:38:26.761 [2024-10-09 11:18:46.616580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.761 [2024-10-09 11:18:46.616590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.761 qpair failed and we were unable to recover it. 00:38:26.761 [2024-10-09 11:18:46.616897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.761 [2024-10-09 11:18:46.616907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.761 qpair failed and we were unable to recover it. 00:38:26.761 [2024-10-09 11:18:46.617179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.761 [2024-10-09 11:18:46.617189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.761 qpair failed and we were unable to recover it. 00:38:26.761 [2024-10-09 11:18:46.617486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.761 [2024-10-09 11:18:46.617503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.761 qpair failed and we were unable to recover it. 00:38:26.761 [2024-10-09 11:18:46.617819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.761 [2024-10-09 11:18:46.617829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.761 qpair failed and we were unable to recover it. 00:38:26.761 [2024-10-09 11:18:46.618110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.761 [2024-10-09 11:18:46.618120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.761 qpair failed and we were unable to recover it. 00:38:26.761 [2024-10-09 11:18:46.618429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.761 [2024-10-09 11:18:46.618439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.761 qpair failed and we were unable to recover it. 
00:38:26.761 [2024-10-09 11:18:46.618763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.761 [2024-10-09 11:18:46.618773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.762 qpair failed and we were unable to recover it. 00:38:26.762 [2024-10-09 11:18:46.619138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.762 [2024-10-09 11:18:46.619148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.762 qpair failed and we were unable to recover it. 00:38:26.762 [2024-10-09 11:18:46.619445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.762 [2024-10-09 11:18:46.619456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.762 qpair failed and we were unable to recover it. 00:38:26.762 [2024-10-09 11:18:46.619665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.762 [2024-10-09 11:18:46.619676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.762 qpair failed and we were unable to recover it. 00:38:26.762 [2024-10-09 11:18:46.619964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.762 [2024-10-09 11:18:46.619974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.762 qpair failed and we were unable to recover it. 00:38:26.762 [2024-10-09 11:18:46.620278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.762 [2024-10-09 11:18:46.620289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.762 qpair failed and we were unable to recover it. 00:38:26.762 [2024-10-09 11:18:46.620615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.762 [2024-10-09 11:18:46.620625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.762 qpair failed and we were unable to recover it. 00:38:26.762 [2024-10-09 11:18:46.620802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.762 [2024-10-09 11:18:46.620812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.762 qpair failed and we were unable to recover it. 00:38:26.762 [2024-10-09 11:18:46.621173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.762 [2024-10-09 11:18:46.621183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.762 qpair failed and we were unable to recover it. 00:38:26.762 [2024-10-09 11:18:46.621455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.762 [2024-10-09 11:18:46.621474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.762 qpair failed and we were unable to recover it. 
00:38:26.762 [2024-10-09 11:18:46.621876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.762 [2024-10-09 11:18:46.621886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.762 qpair failed and we were unable to recover it. 00:38:26.762 [2024-10-09 11:18:46.622169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.762 [2024-10-09 11:18:46.622180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.762 qpair failed and we were unable to recover it. 00:38:26.762 [2024-10-09 11:18:46.622486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.762 [2024-10-09 11:18:46.622496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.762 qpair failed and we were unable to recover it. 00:38:26.762 [2024-10-09 11:18:46.622787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.762 [2024-10-09 11:18:46.622798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.762 qpair failed and we were unable to recover it. 00:38:26.762 [2024-10-09 11:18:46.623107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.762 [2024-10-09 11:18:46.623117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.762 qpair failed and we were unable to recover it. 00:38:26.762 [2024-10-09 11:18:46.623399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.762 [2024-10-09 11:18:46.623409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.762 qpair failed and we were unable to recover it. 00:38:26.762 [2024-10-09 11:18:46.623731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.762 [2024-10-09 11:18:46.623741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.762 qpair failed and we were unable to recover it. 00:38:26.762 [2024-10-09 11:18:46.624036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.762 [2024-10-09 11:18:46.624047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.762 qpair failed and we were unable to recover it. 00:38:26.762 [2024-10-09 11:18:46.624374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.762 [2024-10-09 11:18:46.624384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.762 qpair failed and we were unable to recover it. 00:38:26.762 [2024-10-09 11:18:46.624688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.762 [2024-10-09 11:18:46.624699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.762 qpair failed and we were unable to recover it. 
00:38:26.762 [2024-10-09 11:18:46.625002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.762 [2024-10-09 11:18:46.625012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420
00:38:26.762 qpair failed and we were unable to recover it.
00:38:26.767 [2024-10-09 11:18:46.688529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.767 [2024-10-09 11:18:46.688540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420
00:38:26.767 qpair failed and we were unable to recover it.
00:38:26.767 [2024-10-09 11:18:46.688857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.767 [2024-10-09 11:18:46.688870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.767 qpair failed and we were unable to recover it. 00:38:26.767 [2024-10-09 11:18:46.689065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.767 [2024-10-09 11:18:46.689076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.767 qpair failed and we were unable to recover it. 00:38:26.767 [2024-10-09 11:18:46.689397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.767 [2024-10-09 11:18:46.689408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.767 qpair failed and we were unable to recover it. 00:38:26.767 [2024-10-09 11:18:46.689711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.767 [2024-10-09 11:18:46.689723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.767 qpair failed and we were unable to recover it. 00:38:26.767 [2024-10-09 11:18:46.690016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.767 [2024-10-09 11:18:46.690027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.767 qpair failed and we were unable to recover it. 00:38:26.767 [2024-10-09 11:18:46.690326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.767 [2024-10-09 11:18:46.690337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.767 qpair failed and we were unable to recover it. 00:38:26.767 [2024-10-09 11:18:46.690631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.768 [2024-10-09 11:18:46.690643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.768 qpair failed and we were unable to recover it. 00:38:26.768 [2024-10-09 11:18:46.690958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.768 [2024-10-09 11:18:46.690969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.768 qpair failed and we were unable to recover it. 00:38:26.768 [2024-10-09 11:18:46.691186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.768 [2024-10-09 11:18:46.691197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.768 qpair failed and we were unable to recover it. 00:38:26.768 [2024-10-09 11:18:46.691561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.768 [2024-10-09 11:18:46.691573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.768 qpair failed and we were unable to recover it. 
00:38:26.768 [2024-10-09 11:18:46.691897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.768 [2024-10-09 11:18:46.691911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.768 qpair failed and we were unable to recover it. 00:38:26.768 [2024-10-09 11:18:46.692210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.768 [2024-10-09 11:18:46.692220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.768 qpair failed and we were unable to recover it. 00:38:26.768 [2024-10-09 11:18:46.692555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.768 [2024-10-09 11:18:46.692566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.768 qpair failed and we were unable to recover it. 00:38:26.768 [2024-10-09 11:18:46.692925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.768 [2024-10-09 11:18:46.692936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.768 qpair failed and we were unable to recover it. 00:38:26.768 [2024-10-09 11:18:46.693126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.768 [2024-10-09 11:18:46.693136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.768 qpair failed and we were unable to recover it. 00:38:26.768 [2024-10-09 11:18:46.693440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.768 [2024-10-09 11:18:46.693451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.768 qpair failed and we were unable to recover it. 00:38:26.768 [2024-10-09 11:18:46.693776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.768 [2024-10-09 11:18:46.693788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.768 qpair failed and we were unable to recover it. 00:38:26.768 [2024-10-09 11:18:46.694092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.768 [2024-10-09 11:18:46.694103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.768 qpair failed and we were unable to recover it. 00:38:26.768 [2024-10-09 11:18:46.694404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.768 [2024-10-09 11:18:46.694416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.768 qpair failed and we were unable to recover it. 00:38:26.768 [2024-10-09 11:18:46.694710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.768 [2024-10-09 11:18:46.694721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.768 qpair failed and we were unable to recover it. 
00:38:26.768 [2024-10-09 11:18:46.695044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.768 [2024-10-09 11:18:46.695055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.768 qpair failed and we were unable to recover it. 00:38:26.768 [2024-10-09 11:18:46.695356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.768 [2024-10-09 11:18:46.695368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.768 qpair failed and we were unable to recover it. 00:38:26.768 [2024-10-09 11:18:46.695598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.768 [2024-10-09 11:18:46.695609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.768 qpair failed and we were unable to recover it. 00:38:26.768 [2024-10-09 11:18:46.695794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.768 [2024-10-09 11:18:46.695804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.768 qpair failed and we were unable to recover it. 00:38:26.768 [2024-10-09 11:18:46.696157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.768 [2024-10-09 11:18:46.696168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.768 qpair failed and we were unable to recover it. 00:38:26.768 [2024-10-09 11:18:46.696480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.768 [2024-10-09 11:18:46.696492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.768 qpair failed and we were unable to recover it. 00:38:26.768 [2024-10-09 11:18:46.696817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.768 [2024-10-09 11:18:46.696829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.768 qpair failed and we were unable to recover it. 00:38:26.768 [2024-10-09 11:18:46.697129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.768 [2024-10-09 11:18:46.697141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.768 qpair failed and we were unable to recover it. 00:38:26.768 [2024-10-09 11:18:46.697490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.768 [2024-10-09 11:18:46.697501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.768 qpair failed and we were unable to recover it. 00:38:26.768 [2024-10-09 11:18:46.697722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.768 [2024-10-09 11:18:46.697733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.768 qpair failed and we were unable to recover it. 
00:38:26.768 [2024-10-09 11:18:46.697903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.768 [2024-10-09 11:18:46.697915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.768 qpair failed and we were unable to recover it. 00:38:26.768 [2024-10-09 11:18:46.698236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.768 [2024-10-09 11:18:46.698246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.768 qpair failed and we were unable to recover it. 00:38:26.768 [2024-10-09 11:18:46.698473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.768 [2024-10-09 11:18:46.698484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.768 qpair failed and we were unable to recover it. 00:38:26.768 [2024-10-09 11:18:46.698798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.768 [2024-10-09 11:18:46.698809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.768 qpair failed and we were unable to recover it. 00:38:26.768 [2024-10-09 11:18:46.699112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.768 [2024-10-09 11:18:46.699124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.768 qpair failed and we were unable to recover it. 00:38:26.768 [2024-10-09 11:18:46.699414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.768 [2024-10-09 11:18:46.699424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.768 qpair failed and we were unable to recover it. 00:38:26.768 [2024-10-09 11:18:46.699744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.768 [2024-10-09 11:18:46.699756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.768 qpair failed and we were unable to recover it. 00:38:26.768 [2024-10-09 11:18:46.699941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.768 [2024-10-09 11:18:46.699954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.768 qpair failed and we were unable to recover it. 00:38:26.768 [2024-10-09 11:18:46.700257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.768 [2024-10-09 11:18:46.700276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.768 qpair failed and we were unable to recover it. 00:38:26.768 [2024-10-09 11:18:46.700593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.768 [2024-10-09 11:18:46.700604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.768 qpair failed and we were unable to recover it. 
00:38:26.768 [2024-10-09 11:18:46.700903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.768 [2024-10-09 11:18:46.700914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.768 qpair failed and we were unable to recover it. 00:38:26.768 [2024-10-09 11:18:46.701114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.768 [2024-10-09 11:18:46.701125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.768 qpair failed and we were unable to recover it. 00:38:26.768 [2024-10-09 11:18:46.701490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.768 [2024-10-09 11:18:46.701502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.768 qpair failed and we were unable to recover it. 00:38:26.768 [2024-10-09 11:18:46.701785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.768 [2024-10-09 11:18:46.701795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.768 qpair failed and we were unable to recover it. 00:38:26.768 [2024-10-09 11:18:46.702125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.768 [2024-10-09 11:18:46.702135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.768 qpair failed and we were unable to recover it. 00:38:26.768 [2024-10-09 11:18:46.702308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.768 [2024-10-09 11:18:46.702319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.768 qpair failed and we were unable to recover it. 00:38:26.768 [2024-10-09 11:18:46.702627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.768 [2024-10-09 11:18:46.702638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.769 qpair failed and we were unable to recover it. 00:38:26.769 [2024-10-09 11:18:46.702950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.769 [2024-10-09 11:18:46.702962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.769 qpair failed and we were unable to recover it. 00:38:26.769 [2024-10-09 11:18:46.703243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.769 [2024-10-09 11:18:46.703255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.769 qpair failed and we were unable to recover it. 00:38:26.769 [2024-10-09 11:18:46.703567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.769 [2024-10-09 11:18:46.703578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.769 qpair failed and we were unable to recover it. 
00:38:26.769 [2024-10-09 11:18:46.703892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.769 [2024-10-09 11:18:46.703903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.769 qpair failed and we were unable to recover it. 00:38:26.769 [2024-10-09 11:18:46.704203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.769 [2024-10-09 11:18:46.704215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.769 qpair failed and we were unable to recover it. 00:38:26.769 [2024-10-09 11:18:46.704524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.769 [2024-10-09 11:18:46.704536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.769 qpair failed and we were unable to recover it. 00:38:26.769 [2024-10-09 11:18:46.704714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.769 [2024-10-09 11:18:46.704725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.769 qpair failed and we were unable to recover it. 00:38:26.769 [2024-10-09 11:18:46.704905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.769 [2024-10-09 11:18:46.704917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.769 qpair failed and we were unable to recover it. 00:38:26.769 [2024-10-09 11:18:46.705185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.769 [2024-10-09 11:18:46.705197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.769 qpair failed and we were unable to recover it. 00:38:26.769 [2024-10-09 11:18:46.705476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.769 [2024-10-09 11:18:46.705487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.769 qpair failed and we were unable to recover it. 00:38:26.769 [2024-10-09 11:18:46.705824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.769 [2024-10-09 11:18:46.705835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.769 qpair failed and we were unable to recover it. 00:38:26.769 [2024-10-09 11:18:46.706065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.769 [2024-10-09 11:18:46.706075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.769 qpair failed and we were unable to recover it. 00:38:26.769 [2024-10-09 11:18:46.706372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.769 [2024-10-09 11:18:46.706383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.769 qpair failed and we were unable to recover it. 
00:38:26.769 [2024-10-09 11:18:46.706691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.769 [2024-10-09 11:18:46.706703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.769 qpair failed and we were unable to recover it. 00:38:26.769 [2024-10-09 11:18:46.707006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.769 [2024-10-09 11:18:46.707017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.769 qpair failed and we were unable to recover it. 00:38:26.769 [2024-10-09 11:18:46.707291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.769 [2024-10-09 11:18:46.707301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.769 qpair failed and we were unable to recover it. 00:38:26.769 [2024-10-09 11:18:46.707575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.769 [2024-10-09 11:18:46.707586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.769 qpair failed and we were unable to recover it. 00:38:26.769 [2024-10-09 11:18:46.707767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.769 [2024-10-09 11:18:46.707779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.769 qpair failed and we were unable to recover it. 00:38:26.769 [2024-10-09 11:18:46.708041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.769 [2024-10-09 11:18:46.708051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.769 qpair failed and we were unable to recover it. 00:38:26.769 [2024-10-09 11:18:46.708366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.769 [2024-10-09 11:18:46.708379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.769 qpair failed and we were unable to recover it. 00:38:26.769 [2024-10-09 11:18:46.708688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.769 [2024-10-09 11:18:46.708699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.769 qpair failed and we were unable to recover it. 00:38:26.769 [2024-10-09 11:18:46.709039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.769 [2024-10-09 11:18:46.709051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.769 qpair failed and we were unable to recover it. 00:38:26.769 [2024-10-09 11:18:46.709354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.769 [2024-10-09 11:18:46.709366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.769 qpair failed and we were unable to recover it. 
00:38:26.769 [2024-10-09 11:18:46.709671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.769 [2024-10-09 11:18:46.709683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.769 qpair failed and we were unable to recover it. 00:38:26.769 [2024-10-09 11:18:46.709762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.769 [2024-10-09 11:18:46.709772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.769 qpair failed and we were unable to recover it. 00:38:26.769 [2024-10-09 11:18:46.710034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.769 [2024-10-09 11:18:46.710044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.769 qpair failed and we were unable to recover it. 00:38:26.769 [2024-10-09 11:18:46.710370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.769 [2024-10-09 11:18:46.710382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.769 qpair failed and we were unable to recover it. 00:38:26.769 [2024-10-09 11:18:46.710671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.769 [2024-10-09 11:18:46.710683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.769 qpair failed and we were unable to recover it. 00:38:26.769 [2024-10-09 11:18:46.710990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.769 [2024-10-09 11:18:46.711001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.769 qpair failed and we were unable to recover it. 00:38:26.769 [2024-10-09 11:18:46.711302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.769 [2024-10-09 11:18:46.711313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.769 qpair failed and we were unable to recover it. 00:38:26.769 [2024-10-09 11:18:46.711642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.769 [2024-10-09 11:18:46.711653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.769 qpair failed and we were unable to recover it. 00:38:26.769 [2024-10-09 11:18:46.711851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.769 [2024-10-09 11:18:46.711862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.769 qpair failed and we were unable to recover it. 00:38:26.769 [2024-10-09 11:18:46.712171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.769 [2024-10-09 11:18:46.712182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.769 qpair failed and we were unable to recover it. 
00:38:26.769 [2024-10-09 11:18:46.712481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.769 [2024-10-09 11:18:46.712492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.769 qpair failed and we were unable to recover it. 00:38:26.769 [2024-10-09 11:18:46.712837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.769 [2024-10-09 11:18:46.712847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.769 qpair failed and we were unable to recover it. 00:38:26.769 [2024-10-09 11:18:46.713157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.769 [2024-10-09 11:18:46.713169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.769 qpair failed and we were unable to recover it. 00:38:26.769 [2024-10-09 11:18:46.713474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.769 [2024-10-09 11:18:46.713485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.769 qpair failed and we were unable to recover it. 00:38:26.769 [2024-10-09 11:18:46.713807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.769 [2024-10-09 11:18:46.713818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.769 qpair failed and we were unable to recover it. 00:38:26.769 [2024-10-09 11:18:46.714098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.769 [2024-10-09 11:18:46.714108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.769 qpair failed and we were unable to recover it. 00:38:26.769 [2024-10-09 11:18:46.714409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.769 [2024-10-09 11:18:46.714420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.769 qpair failed and we were unable to recover it. 00:38:26.769 [2024-10-09 11:18:46.714752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.770 [2024-10-09 11:18:46.714763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.770 qpair failed and we were unable to recover it. 00:38:26.770 [2024-10-09 11:18:46.715063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.770 [2024-10-09 11:18:46.715074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.770 qpair failed and we were unable to recover it. 00:38:26.770 [2024-10-09 11:18:46.715259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.770 [2024-10-09 11:18:46.715270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.770 qpair failed and we were unable to recover it. 
00:38:26.770 [2024-10-09 11:18:46.715455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.770 [2024-10-09 11:18:46.715469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.770 qpair failed and we were unable to recover it. 00:38:26.770 [2024-10-09 11:18:46.715746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.770 [2024-10-09 11:18:46.715758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.770 qpair failed and we were unable to recover it. 00:38:26.770 [2024-10-09 11:18:46.716045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.770 [2024-10-09 11:18:46.716057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.770 qpair failed and we were unable to recover it. 00:38:26.770 [2024-10-09 11:18:46.716372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.770 [2024-10-09 11:18:46.716383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.770 qpair failed and we were unable to recover it. 00:38:26.770 [2024-10-09 11:18:46.716679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.770 [2024-10-09 11:18:46.716690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.770 qpair failed and we were unable to recover it. 00:38:26.770 [2024-10-09 11:18:46.716991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.770 [2024-10-09 11:18:46.717003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.770 qpair failed and we were unable to recover it. 00:38:26.770 [2024-10-09 11:18:46.717301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.770 [2024-10-09 11:18:46.717312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.770 qpair failed and we were unable to recover it. 00:38:26.770 [2024-10-09 11:18:46.717672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.770 [2024-10-09 11:18:46.717683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.770 qpair failed and we were unable to recover it. 00:38:26.770 [2024-10-09 11:18:46.717988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.770 [2024-10-09 11:18:46.717999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.770 qpair failed and we were unable to recover it. 00:38:26.770 [2024-10-09 11:18:46.718307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.770 [2024-10-09 11:18:46.718319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.770 qpair failed and we were unable to recover it. 
00:38:26.770 [2024-10-09 11:18:46.718627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.770 [2024-10-09 11:18:46.718638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.770 qpair failed and we were unable to recover it. 00:38:26.770 [2024-10-09 11:18:46.718932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.770 [2024-10-09 11:18:46.718944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.770 qpair failed and we were unable to recover it. 00:38:26.770 [2024-10-09 11:18:46.719253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.770 [2024-10-09 11:18:46.719264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.770 qpair failed and we were unable to recover it. 00:38:26.770 [2024-10-09 11:18:46.719599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.770 [2024-10-09 11:18:46.719610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.770 qpair failed and we were unable to recover it. 00:38:26.770 [2024-10-09 11:18:46.719913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.770 [2024-10-09 11:18:46.719924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.770 qpair failed and we were unable to recover it. 00:38:26.770 [2024-10-09 11:18:46.720227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.770 [2024-10-09 11:18:46.720241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.770 qpair failed and we were unable to recover it. 00:38:26.770 [2024-10-09 11:18:46.720546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.770 [2024-10-09 11:18:46.720560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.770 qpair failed and we were unable to recover it. 00:38:26.770 [2024-10-09 11:18:46.720865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.770 [2024-10-09 11:18:46.720876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.770 qpair failed and we were unable to recover it. 00:38:26.770 [2024-10-09 11:18:46.721184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.770 [2024-10-09 11:18:46.721196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.770 qpair failed and we were unable to recover it. 00:38:26.770 [2024-10-09 11:18:46.721528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.770 [2024-10-09 11:18:46.721539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.770 qpair failed and we were unable to recover it. 
00:38:26.770 [2024-10-09 11:18:46.721840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.770 [2024-10-09 11:18:46.721851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.770 qpair failed and we were unable to recover it. 00:38:26.770 [2024-10-09 11:18:46.722053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.770 [2024-10-09 11:18:46.722064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.770 qpair failed and we were unable to recover it. 00:38:26.770 [2024-10-09 11:18:46.722384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.770 [2024-10-09 11:18:46.722394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.770 qpair failed and we were unable to recover it. 00:38:26.770 [2024-10-09 11:18:46.722691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.770 [2024-10-09 11:18:46.722703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.770 qpair failed and we were unable to recover it. 00:38:26.770 [2024-10-09 11:18:46.722979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.770 [2024-10-09 11:18:46.722990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.770 qpair failed and we were unable to recover it. 00:38:26.770 [2024-10-09 11:18:46.723298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.770 [2024-10-09 11:18:46.723309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.770 qpair failed and we were unable to recover it. 00:38:26.770 [2024-10-09 11:18:46.723602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.770 [2024-10-09 11:18:46.723613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.770 qpair failed and we were unable to recover it. 00:38:26.770 [2024-10-09 11:18:46.723910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.770 [2024-10-09 11:18:46.723922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.770 qpair failed and we were unable to recover it. 00:38:26.770 [2024-10-09 11:18:46.724222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.770 [2024-10-09 11:18:46.724233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.770 qpair failed and we were unable to recover it. 00:38:26.770 [2024-10-09 11:18:46.724416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.770 [2024-10-09 11:18:46.724426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.770 qpair failed and we were unable to recover it. 
00:38:26.770 [2024-10-09 11:18:46.724594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.770 [2024-10-09 11:18:46.724605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.770 qpair failed and we were unable to recover it. 00:38:26.770 [2024-10-09 11:18:46.724872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.770 [2024-10-09 11:18:46.724882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.770 qpair failed and we were unable to recover it. 00:38:26.770 [2024-10-09 11:18:46.725213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.770 [2024-10-09 11:18:46.725224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.770 qpair failed and we were unable to recover it. 00:38:26.770 [2024-10-09 11:18:46.725526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.770 [2024-10-09 11:18:46.725538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.770 qpair failed and we were unable to recover it. 00:38:26.770 [2024-10-09 11:18:46.725839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.770 [2024-10-09 11:18:46.725850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.770 qpair failed and we were unable to recover it. 00:38:26.770 [2024-10-09 11:18:46.726174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.770 [2024-10-09 11:18:46.726188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.770 qpair failed and we were unable to recover it. 00:38:26.770 [2024-10-09 11:18:46.726491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.770 [2024-10-09 11:18:46.726502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.770 qpair failed and we were unable to recover it. 00:38:26.770 [2024-10-09 11:18:46.726816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.770 [2024-10-09 11:18:46.726827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.770 qpair failed and we were unable to recover it. 00:38:26.771 [2024-10-09 11:18:46.727126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.771 [2024-10-09 11:18:46.727138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.771 qpair failed and we were unable to recover it. 00:38:26.771 [2024-10-09 11:18:46.727412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.771 [2024-10-09 11:18:46.727424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:26.771 qpair failed and we were unable to recover it. 
00:38:26.771 [2024-10-09 11:18:46.727715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.771 [2024-10-09 11:18:46.727726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420
00:38:26.771 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats roughly 200 more times between 11:18:46.727 and 11:18:46.790; only the first and last occurrences are shown ...]
00:38:27.068 [2024-10-09 11:18:46.790509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.068 [2024-10-09 11:18:46.790521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420
00:38:27.068 qpair failed and we were unable to recover it.
00:38:27.068 [2024-10-09 11:18:46.790842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.068 [2024-10-09 11:18:46.790853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.068 qpair failed and we were unable to recover it. 00:38:27.068 [2024-10-09 11:18:46.791157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.068 [2024-10-09 11:18:46.791169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.068 qpair failed and we were unable to recover it. 00:38:27.068 [2024-10-09 11:18:46.791473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.068 [2024-10-09 11:18:46.791484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.068 qpair failed and we were unable to recover it. 00:38:27.068 [2024-10-09 11:18:46.791830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.068 [2024-10-09 11:18:46.791841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.068 qpair failed and we were unable to recover it. 00:38:27.068 [2024-10-09 11:18:46.792158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.068 [2024-10-09 11:18:46.792168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.068 qpair failed and we were unable to recover it. 00:38:27.068 [2024-10-09 11:18:46.792486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.068 [2024-10-09 11:18:46.792498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.068 qpair failed and we were unable to recover it. 00:38:27.068 [2024-10-09 11:18:46.792799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.068 [2024-10-09 11:18:46.792810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.068 qpair failed and we were unable to recover it. 00:38:27.068 [2024-10-09 11:18:46.792989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.068 [2024-10-09 11:18:46.793000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.068 qpair failed and we were unable to recover it. 00:38:27.068 [2024-10-09 11:18:46.793423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.068 [2024-10-09 11:18:46.793433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.068 qpair failed and we were unable to recover it. 00:38:27.068 [2024-10-09 11:18:46.793753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.068 [2024-10-09 11:18:46.793768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.068 qpair failed and we were unable to recover it. 
00:38:27.068 [2024-10-09 11:18:46.793990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.068 [2024-10-09 11:18:46.794001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.068 qpair failed and we were unable to recover it. 00:38:27.068 [2024-10-09 11:18:46.794185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.068 [2024-10-09 11:18:46.794195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.068 qpair failed and we were unable to recover it. 00:38:27.068 [2024-10-09 11:18:46.794511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.068 [2024-10-09 11:18:46.794522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.068 qpair failed and we were unable to recover it. 00:38:27.068 [2024-10-09 11:18:46.794838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.068 [2024-10-09 11:18:46.794850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.068 qpair failed and we were unable to recover it. 00:38:27.068 [2024-10-09 11:18:46.794905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.068 [2024-10-09 11:18:46.794915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.068 qpair failed and we were unable to recover it. 00:38:27.068 [2024-10-09 11:18:46.795211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.068 [2024-10-09 11:18:46.795222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.068 qpair failed and we were unable to recover it. 00:38:27.068 [2024-10-09 11:18:46.795531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.068 [2024-10-09 11:18:46.795543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.068 qpair failed and we were unable to recover it. 00:38:27.068 [2024-10-09 11:18:46.795867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.068 [2024-10-09 11:18:46.795879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.068 qpair failed and we were unable to recover it. 00:38:27.068 [2024-10-09 11:18:46.796203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.068 [2024-10-09 11:18:46.796215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.068 qpair failed and we were unable to recover it. 00:38:27.068 [2024-10-09 11:18:46.796522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.068 [2024-10-09 11:18:46.796534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.068 qpair failed and we were unable to recover it. 
00:38:27.068 [2024-10-09 11:18:46.796818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.068 [2024-10-09 11:18:46.796829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.068 qpair failed and we were unable to recover it. 00:38:27.068 [2024-10-09 11:18:46.797135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.068 [2024-10-09 11:18:46.797146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.068 qpair failed and we were unable to recover it. 00:38:27.068 [2024-10-09 11:18:46.797460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.068 [2024-10-09 11:18:46.797475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.068 qpair failed and we were unable to recover it. 00:38:27.068 [2024-10-09 11:18:46.797784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.068 [2024-10-09 11:18:46.797795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.068 qpair failed and we were unable to recover it. 00:38:27.068 [2024-10-09 11:18:46.798099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.068 [2024-10-09 11:18:46.798111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.068 qpair failed and we were unable to recover it. 00:38:27.068 [2024-10-09 11:18:46.798433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.068 [2024-10-09 11:18:46.798444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.068 qpair failed and we were unable to recover it. 00:38:27.068 [2024-10-09 11:18:46.798723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.068 [2024-10-09 11:18:46.798733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.068 qpair failed and we were unable to recover it. 00:38:27.068 [2024-10-09 11:18:46.799043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.068 [2024-10-09 11:18:46.799054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.068 qpair failed and we were unable to recover it. 00:38:27.068 [2024-10-09 11:18:46.799384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.068 [2024-10-09 11:18:46.799395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.068 qpair failed and we were unable to recover it. 00:38:27.068 [2024-10-09 11:18:46.799706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.068 [2024-10-09 11:18:46.799717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.068 qpair failed and we were unable to recover it. 
00:38:27.068 [2024-10-09 11:18:46.800093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.068 [2024-10-09 11:18:46.800103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.068 qpair failed and we were unable to recover it. 00:38:27.068 [2024-10-09 11:18:46.800416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.068 [2024-10-09 11:18:46.800428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.068 qpair failed and we were unable to recover it. 00:38:27.068 [2024-10-09 11:18:46.800699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.068 [2024-10-09 11:18:46.800710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.068 qpair failed and we were unable to recover it. 00:38:27.068 [2024-10-09 11:18:46.801061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.068 [2024-10-09 11:18:46.801072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.068 qpair failed and we were unable to recover it. 00:38:27.068 [2024-10-09 11:18:46.801236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.068 [2024-10-09 11:18:46.801248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.068 qpair failed and we were unable to recover it. 00:38:27.068 [2024-10-09 11:18:46.801560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.068 [2024-10-09 11:18:46.801573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.068 qpair failed and we were unable to recover it. 00:38:27.068 [2024-10-09 11:18:46.801744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.069 [2024-10-09 11:18:46.801759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.069 qpair failed and we were unable to recover it. 00:38:27.069 [2024-10-09 11:18:46.802030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.069 [2024-10-09 11:18:46.802041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.069 qpair failed and we were unable to recover it. 00:38:27.069 [2024-10-09 11:18:46.802344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.069 [2024-10-09 11:18:46.802356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.069 qpair failed and we were unable to recover it. 00:38:27.069 [2024-10-09 11:18:46.802645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.069 [2024-10-09 11:18:46.802656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.069 qpair failed and we were unable to recover it. 
00:38:27.069 [2024-10-09 11:18:46.802987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.069 [2024-10-09 11:18:46.802998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.069 qpair failed and we were unable to recover it. 00:38:27.069 [2024-10-09 11:18:46.803341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.069 [2024-10-09 11:18:46.803352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.069 qpair failed and we were unable to recover it. 00:38:27.069 [2024-10-09 11:18:46.803552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.069 [2024-10-09 11:18:46.803563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.069 qpair failed and we were unable to recover it. 00:38:27.069 [2024-10-09 11:18:46.803860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.069 [2024-10-09 11:18:46.803870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.069 qpair failed and we were unable to recover it. 00:38:27.069 [2024-10-09 11:18:46.804170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.069 [2024-10-09 11:18:46.804181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.069 qpair failed and we were unable to recover it. 00:38:27.069 [2024-10-09 11:18:46.804506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.069 [2024-10-09 11:18:46.804518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.069 qpair failed and we were unable to recover it. 00:38:27.069 [2024-10-09 11:18:46.804721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.069 [2024-10-09 11:18:46.804732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.069 qpair failed and we were unable to recover it. 00:38:27.069 [2024-10-09 11:18:46.804934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.069 [2024-10-09 11:18:46.804945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.069 qpair failed and we were unable to recover it. 00:38:27.069 [2024-10-09 11:18:46.805096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.069 [2024-10-09 11:18:46.805107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.069 qpair failed and we were unable to recover it. 00:38:27.069 [2024-10-09 11:18:46.805510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.069 [2024-10-09 11:18:46.805521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.069 qpair failed and we were unable to recover it. 
00:38:27.069 [2024-10-09 11:18:46.805696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.069 [2024-10-09 11:18:46.805706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.069 qpair failed and we were unable to recover it. 00:38:27.069 [2024-10-09 11:18:46.806014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.069 [2024-10-09 11:18:46.806025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.069 qpair failed and we were unable to recover it. 00:38:27.069 [2024-10-09 11:18:46.806107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.069 [2024-10-09 11:18:46.806117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.069 qpair failed and we were unable to recover it. 00:38:27.069 [2024-10-09 11:18:46.806501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.069 [2024-10-09 11:18:46.806513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.069 qpair failed and we were unable to recover it. 00:38:27.069 [2024-10-09 11:18:46.806833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.069 [2024-10-09 11:18:46.806844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.069 qpair failed and we were unable to recover it. 00:38:27.069 [2024-10-09 11:18:46.807052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.069 [2024-10-09 11:18:46.807063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.069 qpair failed and we were unable to recover it. 00:38:27.069 [2024-10-09 11:18:46.807386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.069 [2024-10-09 11:18:46.807396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.069 qpair failed and we were unable to recover it. 00:38:27.069 [2024-10-09 11:18:46.807618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.069 [2024-10-09 11:18:46.807628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.069 qpair failed and we were unable to recover it. 00:38:27.069 [2024-10-09 11:18:46.807937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.069 [2024-10-09 11:18:46.807948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.069 qpair failed and we were unable to recover it. 00:38:27.069 [2024-10-09 11:18:46.808256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.069 [2024-10-09 11:18:46.808266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.069 qpair failed and we were unable to recover it. 
00:38:27.069 [2024-10-09 11:18:46.808457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.069 [2024-10-09 11:18:46.808470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.069 qpair failed and we were unable to recover it. 00:38:27.069 [2024-10-09 11:18:46.808792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.069 [2024-10-09 11:18:46.808802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.069 qpair failed and we were unable to recover it. 00:38:27.069 [2024-10-09 11:18:46.809082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.069 [2024-10-09 11:18:46.809093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.069 qpair failed and we were unable to recover it. 00:38:27.069 [2024-10-09 11:18:46.809387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.069 [2024-10-09 11:18:46.809397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.069 qpair failed and we were unable to recover it. 00:38:27.069 [2024-10-09 11:18:46.809708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.069 [2024-10-09 11:18:46.809719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.069 qpair failed and we were unable to recover it. 00:38:27.069 [2024-10-09 11:18:46.810029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.069 [2024-10-09 11:18:46.810040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.069 qpair failed and we were unable to recover it. 00:38:27.069 [2024-10-09 11:18:46.810342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.069 [2024-10-09 11:18:46.810353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.069 qpair failed and we were unable to recover it. 00:38:27.069 [2024-10-09 11:18:46.810633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.069 [2024-10-09 11:18:46.810644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.069 qpair failed and we were unable to recover it. 00:38:27.069 [2024-10-09 11:18:46.810952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.069 [2024-10-09 11:18:46.810963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.069 qpair failed and we were unable to recover it. 00:38:27.069 [2024-10-09 11:18:46.811267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.069 [2024-10-09 11:18:46.811278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.069 qpair failed and we were unable to recover it. 
00:38:27.069 [2024-10-09 11:18:46.811591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.069 [2024-10-09 11:18:46.811603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.069 qpair failed and we were unable to recover it. 00:38:27.069 [2024-10-09 11:18:46.811921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.069 [2024-10-09 11:18:46.811933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.069 qpair failed and we were unable to recover it. 00:38:27.069 [2024-10-09 11:18:46.812220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.069 [2024-10-09 11:18:46.812232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.069 qpair failed and we were unable to recover it. 00:38:27.069 [2024-10-09 11:18:46.812437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.069 [2024-10-09 11:18:46.812448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.069 qpair failed and we were unable to recover it. 00:38:27.069 [2024-10-09 11:18:46.812647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.069 [2024-10-09 11:18:46.812659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.069 qpair failed and we were unable to recover it. 00:38:27.069 [2024-10-09 11:18:46.813035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.069 [2024-10-09 11:18:46.813047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.069 qpair failed and we were unable to recover it. 00:38:27.069 [2024-10-09 11:18:46.813359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.069 [2024-10-09 11:18:46.813370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.069 qpair failed and we were unable to recover it. 00:38:27.070 [2024-10-09 11:18:46.813663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.070 [2024-10-09 11:18:46.813676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.070 qpair failed and we were unable to recover it. 00:38:27.070 [2024-10-09 11:18:46.813962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.070 [2024-10-09 11:18:46.813974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.070 qpair failed and we were unable to recover it. 00:38:27.070 [2024-10-09 11:18:46.814165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.070 [2024-10-09 11:18:46.814177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.070 qpair failed and we were unable to recover it. 
00:38:27.070 [2024-10-09 11:18:46.814444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.070 [2024-10-09 11:18:46.814456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.070 qpair failed and we were unable to recover it. 00:38:27.070 [2024-10-09 11:18:46.814786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.070 [2024-10-09 11:18:46.814798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.070 qpair failed and we were unable to recover it. 00:38:27.070 [2024-10-09 11:18:46.815137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.070 [2024-10-09 11:18:46.815149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.070 qpair failed and we were unable to recover it. 00:38:27.070 [2024-10-09 11:18:46.815486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.070 [2024-10-09 11:18:46.815498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.070 qpair failed and we were unable to recover it. 00:38:27.070 [2024-10-09 11:18:46.815687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.070 [2024-10-09 11:18:46.815699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.070 qpair failed and we were unable to recover it. 00:38:27.070 [2024-10-09 11:18:46.815928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.070 [2024-10-09 11:18:46.815939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.070 qpair failed and we were unable to recover it. 00:38:27.070 [2024-10-09 11:18:46.816208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.070 [2024-10-09 11:18:46.816219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.070 qpair failed and we were unable to recover it. 00:38:27.070 [2024-10-09 11:18:46.816549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.070 [2024-10-09 11:18:46.816560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.070 qpair failed and we were unable to recover it. 00:38:27.070 [2024-10-09 11:18:46.816847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.070 [2024-10-09 11:18:46.816859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.070 qpair failed and we were unable to recover it. 00:38:27.070 [2024-10-09 11:18:46.817196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.070 [2024-10-09 11:18:46.817208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.070 qpair failed and we were unable to recover it. 
00:38:27.070 [2024-10-09 11:18:46.817525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.070 [2024-10-09 11:18:46.817538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.070 qpair failed and we were unable to recover it. 00:38:27.070 [2024-10-09 11:18:46.817716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.070 [2024-10-09 11:18:46.817726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.070 qpair failed and we were unable to recover it. 00:38:27.070 [2024-10-09 11:18:46.818061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.070 [2024-10-09 11:18:46.818072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.070 qpair failed and we were unable to recover it. 00:38:27.070 [2024-10-09 11:18:46.818243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.070 [2024-10-09 11:18:46.818253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.070 qpair failed and we were unable to recover it. 00:38:27.070 [2024-10-09 11:18:46.818590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.070 [2024-10-09 11:18:46.818602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.070 qpair failed and we were unable to recover it. 00:38:27.070 [2024-10-09 11:18:46.818888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.070 [2024-10-09 11:18:46.818899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.070 qpair failed and we were unable to recover it. 00:38:27.070 [2024-10-09 11:18:46.819214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.070 [2024-10-09 11:18:46.819225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.070 qpair failed and we were unable to recover it. 00:38:27.070 [2024-10-09 11:18:46.819531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.070 [2024-10-09 11:18:46.819542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.070 qpair failed and we were unable to recover it. 00:38:27.070 [2024-10-09 11:18:46.819940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.070 [2024-10-09 11:18:46.819951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.070 qpair failed and we were unable to recover it. 00:38:27.070 [2024-10-09 11:18:46.820263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.070 [2024-10-09 11:18:46.820275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.070 qpair failed and we were unable to recover it. 
00:38:27.070 [2024-10-09 11:18:46.820468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.070 [2024-10-09 11:18:46.820480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.070 qpair failed and we were unable to recover it. 00:38:27.070 [2024-10-09 11:18:46.820773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.070 [2024-10-09 11:18:46.820785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.070 qpair failed and we were unable to recover it. 00:38:27.070 [2024-10-09 11:18:46.821116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.070 [2024-10-09 11:18:46.821127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.070 qpair failed and we were unable to recover it. 00:38:27.070 [2024-10-09 11:18:46.821424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.070 [2024-10-09 11:18:46.821436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.070 qpair failed and we were unable to recover it. 00:38:27.070 [2024-10-09 11:18:46.821736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.070 [2024-10-09 11:18:46.821750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.070 qpair failed and we were unable to recover it. 00:38:27.070 [2024-10-09 11:18:46.822060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.070 [2024-10-09 11:18:46.822072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.070 qpair failed and we were unable to recover it. 00:38:27.070 [2024-10-09 11:18:46.822375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.070 [2024-10-09 11:18:46.822387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.070 qpair failed and we were unable to recover it. 00:38:27.070 [2024-10-09 11:18:46.822693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.070 [2024-10-09 11:18:46.822706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.070 qpair failed and we were unable to recover it. 00:38:27.070 [2024-10-09 11:18:46.822865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.070 [2024-10-09 11:18:46.822879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.070 qpair failed and we were unable to recover it. 00:38:27.070 [2024-10-09 11:18:46.823091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.070 [2024-10-09 11:18:46.823104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.070 qpair failed and we were unable to recover it. 
00:38:27.070 [2024-10-09 11:18:46.823403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.070 [2024-10-09 11:18:46.823415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.070 qpair failed and we were unable to recover it. 00:38:27.070 [2024-10-09 11:18:46.823712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.070 [2024-10-09 11:18:46.823724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.070 qpair failed and we were unable to recover it. 00:38:27.070 [2024-10-09 11:18:46.824031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.070 [2024-10-09 11:18:46.824042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.070 qpair failed and we were unable to recover it. 00:38:27.070 [2024-10-09 11:18:46.824263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.070 [2024-10-09 11:18:46.824275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.070 qpair failed and we were unable to recover it. 00:38:27.070 [2024-10-09 11:18:46.824664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.070 [2024-10-09 11:18:46.824675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.070 qpair failed and we were unable to recover it. 00:38:27.070 [2024-10-09 11:18:46.824986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.070 [2024-10-09 11:18:46.824998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.070 qpair failed and we were unable to recover it. 00:38:27.070 [2024-10-09 11:18:46.825301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.070 [2024-10-09 11:18:46.825312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.070 qpair failed and we were unable to recover it. 00:38:27.070 [2024-10-09 11:18:46.825618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.070 [2024-10-09 11:18:46.825630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.071 qpair failed and we were unable to recover it. 00:38:27.071 [2024-10-09 11:18:46.825961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.071 [2024-10-09 11:18:46.825972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.071 qpair failed and we were unable to recover it. 00:38:27.071 [2024-10-09 11:18:46.826276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.071 [2024-10-09 11:18:46.826288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.071 qpair failed and we were unable to recover it. 
00:38:27.071 [2024-10-09 11:18:46.826600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.071 [2024-10-09 11:18:46.826611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.071 qpair failed and we were unable to recover it. 00:38:27.071 [2024-10-09 11:18:46.826864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.071 [2024-10-09 11:18:46.826875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.071 qpair failed and we were unable to recover it. 00:38:27.071 [2024-10-09 11:18:46.827168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.071 [2024-10-09 11:18:46.827178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.071 qpair failed and we were unable to recover it. 00:38:27.071 [2024-10-09 11:18:46.827492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.071 [2024-10-09 11:18:46.827504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.071 qpair failed and we were unable to recover it. 00:38:27.071 [2024-10-09 11:18:46.827802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.071 [2024-10-09 11:18:46.827813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.071 qpair failed and we were unable to recover it. 00:38:27.071 [2024-10-09 11:18:46.828112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.071 [2024-10-09 11:18:46.828124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.071 qpair failed and we were unable to recover it. 00:38:27.071 [2024-10-09 11:18:46.828400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.071 [2024-10-09 11:18:46.828411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.071 qpair failed and we were unable to recover it. 00:38:27.071 [2024-10-09 11:18:46.828705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.071 [2024-10-09 11:18:46.828717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.071 qpair failed and we were unable to recover it. 00:38:27.071 [2024-10-09 11:18:46.829023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.071 [2024-10-09 11:18:46.829034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.071 qpair failed and we were unable to recover it. 00:38:27.071 [2024-10-09 11:18:46.829355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.071 [2024-10-09 11:18:46.829366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.071 qpair failed and we were unable to recover it. 
00:38:27.071 [2024-10-09 11:18:46.829682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.071 [2024-10-09 11:18:46.829693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420
00:38:27.071 qpair failed and we were unable to recover it.
00:38:27.076 (the three-line failure sequence above repeats verbatim, microsecond timestamps aside, for roughly 200 further connection attempts from 11:18:46.830 through 11:18:46.894; every connect() to 10.0.0.2 port 4420 on tqpair 0x1520360 failed with errno = 111 and the qpair could not be recovered)
00:38:27.076 [2024-10-09 11:18:46.894543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.076 [2024-10-09 11:18:46.894555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.076 qpair failed and we were unable to recover it. 00:38:27.076 [2024-10-09 11:18:46.894878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.076 [2024-10-09 11:18:46.894890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.076 qpair failed and we were unable to recover it. 00:38:27.076 [2024-10-09 11:18:46.895072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.076 [2024-10-09 11:18:46.895083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.076 qpair failed and we were unable to recover it. 00:38:27.076 [2024-10-09 11:18:46.895385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.076 [2024-10-09 11:18:46.895396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.076 qpair failed and we were unable to recover it. 00:38:27.076 [2024-10-09 11:18:46.895782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.076 [2024-10-09 11:18:46.895793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.076 qpair failed and we were unable to recover it. 00:38:27.076 [2024-10-09 11:18:46.896101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.076 [2024-10-09 11:18:46.896113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.076 qpair failed and we were unable to recover it. 00:38:27.076 [2024-10-09 11:18:46.896392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.076 [2024-10-09 11:18:46.896402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.076 qpair failed and we were unable to recover it. 00:38:27.076 [2024-10-09 11:18:46.896703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.076 [2024-10-09 11:18:46.896715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.076 qpair failed and we were unable to recover it. 00:38:27.076 [2024-10-09 11:18:46.897017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.076 [2024-10-09 11:18:46.897029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.076 qpair failed and we were unable to recover it. 00:38:27.076 [2024-10-09 11:18:46.897334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.076 [2024-10-09 11:18:46.897346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.076 qpair failed and we were unable to recover it. 
00:38:27.076 [2024-10-09 11:18:46.897674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.076 [2024-10-09 11:18:46.897685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.076 qpair failed and we were unable to recover it. 00:38:27.076 [2024-10-09 11:18:46.897994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.076 [2024-10-09 11:18:46.898005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.077 qpair failed and we were unable to recover it. 00:38:27.077 [2024-10-09 11:18:46.898309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.077 [2024-10-09 11:18:46.898321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.077 qpair failed and we were unable to recover it. 00:38:27.077 [2024-10-09 11:18:46.898621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.077 [2024-10-09 11:18:46.898632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.077 qpair failed and we were unable to recover it. 00:38:27.077 [2024-10-09 11:18:46.898915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.077 [2024-10-09 11:18:46.898926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.077 qpair failed and we were unable to recover it. 00:38:27.077 [2024-10-09 11:18:46.899200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.077 [2024-10-09 11:18:46.899210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.077 qpair failed and we were unable to recover it. 00:38:27.077 [2024-10-09 11:18:46.899512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.077 [2024-10-09 11:18:46.899525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.077 qpair failed and we were unable to recover it. 00:38:27.077 [2024-10-09 11:18:46.899838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.077 [2024-10-09 11:18:46.899848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.077 qpair failed and we were unable to recover it. 00:38:27.077 [2024-10-09 11:18:46.900127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.077 [2024-10-09 11:18:46.900140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.077 qpair failed and we were unable to recover it. 00:38:27.077 [2024-10-09 11:18:46.900447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.077 [2024-10-09 11:18:46.900458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.077 qpair failed and we were unable to recover it. 
00:38:27.077 [2024-10-09 11:18:46.900737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.077 [2024-10-09 11:18:46.900749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.077 qpair failed and we were unable to recover it. 00:38:27.077 [2024-10-09 11:18:46.901010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.077 [2024-10-09 11:18:46.901021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.077 qpair failed and we were unable to recover it. 00:38:27.077 [2024-10-09 11:18:46.901334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.077 [2024-10-09 11:18:46.901345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.077 qpair failed and we were unable to recover it. 00:38:27.077 [2024-10-09 11:18:46.901630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.077 [2024-10-09 11:18:46.901641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.077 qpair failed and we were unable to recover it. 00:38:27.077 [2024-10-09 11:18:46.901956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.077 [2024-10-09 11:18:46.901967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.077 qpair failed and we were unable to recover it. 00:38:27.077 [2024-10-09 11:18:46.902266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.077 [2024-10-09 11:18:46.902277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.077 qpair failed and we were unable to recover it. 00:38:27.077 [2024-10-09 11:18:46.902620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.077 [2024-10-09 11:18:46.902631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.077 qpair failed and we were unable to recover it. 00:38:27.077 [2024-10-09 11:18:46.902926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.077 [2024-10-09 11:18:46.902937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.077 qpair failed and we were unable to recover it. 00:38:27.077 [2024-10-09 11:18:46.903247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.077 [2024-10-09 11:18:46.903259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.077 qpair failed and we were unable to recover it. 00:38:27.077 [2024-10-09 11:18:46.903586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.077 [2024-10-09 11:18:46.903598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.077 qpair failed and we were unable to recover it. 
00:38:27.077 [2024-10-09 11:18:46.903890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.077 [2024-10-09 11:18:46.903901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.077 qpair failed and we were unable to recover it. 00:38:27.077 [2024-10-09 11:18:46.904166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.077 [2024-10-09 11:18:46.904177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.077 qpair failed and we were unable to recover it. 00:38:27.077 [2024-10-09 11:18:46.904369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.077 [2024-10-09 11:18:46.904382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.077 qpair failed and we were unable to recover it. 00:38:27.077 [2024-10-09 11:18:46.904782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.077 [2024-10-09 11:18:46.904794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.077 qpair failed and we were unable to recover it. 00:38:27.077 [2024-10-09 11:18:46.905085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.077 [2024-10-09 11:18:46.905097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.077 qpair failed and we were unable to recover it. 00:38:27.077 [2024-10-09 11:18:46.905396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.077 [2024-10-09 11:18:46.905407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.077 qpair failed and we were unable to recover it. 00:38:27.077 [2024-10-09 11:18:46.905716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.077 [2024-10-09 11:18:46.905728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.077 qpair failed and we were unable to recover it. 00:38:27.077 [2024-10-09 11:18:46.906033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.077 [2024-10-09 11:18:46.906045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.077 qpair failed and we were unable to recover it. 00:38:27.077 [2024-10-09 11:18:46.906341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.077 [2024-10-09 11:18:46.906352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.077 qpair failed and we were unable to recover it. 00:38:27.077 [2024-10-09 11:18:46.906632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.077 [2024-10-09 11:18:46.906643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.077 qpair failed and we were unable to recover it. 
00:38:27.077 [2024-10-09 11:18:46.906945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.077 [2024-10-09 11:18:46.906958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.077 qpair failed and we were unable to recover it. 00:38:27.077 [2024-10-09 11:18:46.907290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.077 [2024-10-09 11:18:46.907301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.077 qpair failed and we were unable to recover it. 00:38:27.077 [2024-10-09 11:18:46.907610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.077 [2024-10-09 11:18:46.907622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.077 qpair failed and we were unable to recover it. 00:38:27.077 [2024-10-09 11:18:46.907953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.077 [2024-10-09 11:18:46.907964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.077 qpair failed and we were unable to recover it. 00:38:27.077 [2024-10-09 11:18:46.908266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.077 [2024-10-09 11:18:46.908278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.077 qpair failed and we were unable to recover it. 00:38:27.077 [2024-10-09 11:18:46.908580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.077 [2024-10-09 11:18:46.908592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.077 qpair failed and we were unable to recover it. 00:38:27.077 [2024-10-09 11:18:46.908888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.077 [2024-10-09 11:18:46.908900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.077 qpair failed and we were unable to recover it. 00:38:27.077 [2024-10-09 11:18:46.909203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.077 [2024-10-09 11:18:46.909214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.077 qpair failed and we were unable to recover it. 00:38:27.077 [2024-10-09 11:18:46.909511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.077 [2024-10-09 11:18:46.909522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.077 qpair failed and we were unable to recover it. 00:38:27.077 [2024-10-09 11:18:46.909855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.077 [2024-10-09 11:18:46.909866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.077 qpair failed and we were unable to recover it. 
00:38:27.077 [2024-10-09 11:18:46.910197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.077 [2024-10-09 11:18:46.910208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.077 qpair failed and we were unable to recover it. 00:38:27.077 [2024-10-09 11:18:46.910594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.077 [2024-10-09 11:18:46.910606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.077 qpair failed and we were unable to recover it. 00:38:27.077 [2024-10-09 11:18:46.910911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.078 [2024-10-09 11:18:46.910922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.078 qpair failed and we were unable to recover it. 00:38:27.078 [2024-10-09 11:18:46.911107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.078 [2024-10-09 11:18:46.911119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.078 qpair failed and we were unable to recover it. 00:38:27.078 [2024-10-09 11:18:46.911404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.078 [2024-10-09 11:18:46.911415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.078 qpair failed and we were unable to recover it. 00:38:27.078 [2024-10-09 11:18:46.911757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.078 [2024-10-09 11:18:46.911768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.078 qpair failed and we were unable to recover it. 00:38:27.078 [2024-10-09 11:18:46.912038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.078 [2024-10-09 11:18:46.912049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.078 qpair failed and we were unable to recover it. 00:38:27.078 [2024-10-09 11:18:46.912320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.078 [2024-10-09 11:18:46.912331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.078 qpair failed and we were unable to recover it. 00:38:27.078 [2024-10-09 11:18:46.912636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.078 [2024-10-09 11:18:46.912647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.078 qpair failed and we were unable to recover it. 00:38:27.078 [2024-10-09 11:18:46.912951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.078 [2024-10-09 11:18:46.912962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.078 qpair failed and we were unable to recover it. 
00:38:27.078 [2024-10-09 11:18:46.913263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.078 [2024-10-09 11:18:46.913275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.078 qpair failed and we were unable to recover it. 00:38:27.078 [2024-10-09 11:18:46.913614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.078 [2024-10-09 11:18:46.913625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.078 qpair failed and we were unable to recover it. 00:38:27.078 [2024-10-09 11:18:46.913952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.078 [2024-10-09 11:18:46.913963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.078 qpair failed and we were unable to recover it. 00:38:27.078 [2024-10-09 11:18:46.914274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.078 [2024-10-09 11:18:46.914285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.078 qpair failed and we were unable to recover it. 00:38:27.078 [2024-10-09 11:18:46.914585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.078 [2024-10-09 11:18:46.914596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.078 qpair failed and we were unable to recover it. 00:38:27.078 [2024-10-09 11:18:46.914899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.078 [2024-10-09 11:18:46.914910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.078 qpair failed and we were unable to recover it. 00:38:27.078 [2024-10-09 11:18:46.915191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.078 [2024-10-09 11:18:46.915203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.078 qpair failed and we were unable to recover it. 00:38:27.078 [2024-10-09 11:18:46.915504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.078 [2024-10-09 11:18:46.915516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.078 qpair failed and we were unable to recover it. 00:38:27.078 [2024-10-09 11:18:46.915824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.078 [2024-10-09 11:18:46.915835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.078 qpair failed and we were unable to recover it. 00:38:27.078 [2024-10-09 11:18:46.916140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.078 [2024-10-09 11:18:46.916151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.078 qpair failed and we were unable to recover it. 
00:38:27.078 [2024-10-09 11:18:46.916484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.078 [2024-10-09 11:18:46.916495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.078 qpair failed and we were unable to recover it. 00:38:27.078 [2024-10-09 11:18:46.916824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.078 [2024-10-09 11:18:46.916836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.078 qpair failed and we were unable to recover it. 00:38:27.078 [2024-10-09 11:18:46.917139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.078 [2024-10-09 11:18:46.917151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.078 qpair failed and we were unable to recover it. 00:38:27.078 [2024-10-09 11:18:46.917447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.078 [2024-10-09 11:18:46.917459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.078 qpair failed and we were unable to recover it. 00:38:27.078 [2024-10-09 11:18:46.917789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.078 [2024-10-09 11:18:46.917801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.078 qpair failed and we were unable to recover it. 00:38:27.078 [2024-10-09 11:18:46.918102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.078 [2024-10-09 11:18:46.918113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.078 qpair failed and we were unable to recover it. 00:38:27.078 [2024-10-09 11:18:46.918421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.078 [2024-10-09 11:18:46.918435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.078 qpair failed and we were unable to recover it. 00:38:27.078 [2024-10-09 11:18:46.918746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.078 [2024-10-09 11:18:46.918758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.078 qpair failed and we were unable to recover it. 00:38:27.078 [2024-10-09 11:18:46.919064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.078 [2024-10-09 11:18:46.919076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.078 qpair failed and we were unable to recover it. 00:38:27.078 [2024-10-09 11:18:46.919350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.078 [2024-10-09 11:18:46.919361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.078 qpair failed and we were unable to recover it. 
00:38:27.078 [2024-10-09 11:18:46.919654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.078 [2024-10-09 11:18:46.919667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.078 qpair failed and we were unable to recover it. 00:38:27.078 [2024-10-09 11:18:46.919860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.078 [2024-10-09 11:18:46.919872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.078 qpair failed and we were unable to recover it. 00:38:27.078 [2024-10-09 11:18:46.920187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.078 [2024-10-09 11:18:46.920200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.078 qpair failed and we were unable to recover it. 00:38:27.078 [2024-10-09 11:18:46.920499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.078 [2024-10-09 11:18:46.920512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.078 qpair failed and we were unable to recover it. 00:38:27.078 [2024-10-09 11:18:46.920871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.078 [2024-10-09 11:18:46.920883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.078 qpair failed and we were unable to recover it. 00:38:27.078 [2024-10-09 11:18:46.921188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.078 [2024-10-09 11:18:46.921200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.078 qpair failed and we were unable to recover it. 00:38:27.078 [2024-10-09 11:18:46.921485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.078 [2024-10-09 11:18:46.921497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.078 qpair failed and we were unable to recover it. 00:38:27.078 [2024-10-09 11:18:46.921826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.078 [2024-10-09 11:18:46.921838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.078 qpair failed and we were unable to recover it. 00:38:27.078 [2024-10-09 11:18:46.922136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.078 [2024-10-09 11:18:46.922148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.078 qpair failed and we were unable to recover it. 00:38:27.078 [2024-10-09 11:18:46.922467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.078 [2024-10-09 11:18:46.922480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.078 qpair failed and we were unable to recover it. 
00:38:27.078 [2024-10-09 11:18:46.922807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.078 [2024-10-09 11:18:46.922820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.078 qpair failed and we were unable to recover it. 00:38:27.078 [2024-10-09 11:18:46.923153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.078 [2024-10-09 11:18:46.923165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.078 qpair failed and we were unable to recover it. 00:38:27.078 [2024-10-09 11:18:46.923472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.078 [2024-10-09 11:18:46.923485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.078 qpair failed and we were unable to recover it. 00:38:27.079 [2024-10-09 11:18:46.923757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.079 [2024-10-09 11:18:46.923769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.079 qpair failed and we were unable to recover it. 00:38:27.079 [2024-10-09 11:18:46.924070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.079 [2024-10-09 11:18:46.924082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.079 qpair failed and we were unable to recover it. 00:38:27.079 [2024-10-09 11:18:46.924395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.079 [2024-10-09 11:18:46.924407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.079 qpair failed and we were unable to recover it. 00:38:27.079 [2024-10-09 11:18:46.924740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.079 [2024-10-09 11:18:46.924752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.079 qpair failed and we were unable to recover it. 00:38:27.079 [2024-10-09 11:18:46.924939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.079 [2024-10-09 11:18:46.924951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.079 qpair failed and we were unable to recover it. 00:38:27.079 [2024-10-09 11:18:46.925255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.079 [2024-10-09 11:18:46.925267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.079 qpair failed and we were unable to recover it. 00:38:27.079 [2024-10-09 11:18:46.925588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.079 [2024-10-09 11:18:46.925600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.079 qpair failed and we were unable to recover it. 
00:38:27.079 [2024-10-09 11:18:46.925897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.079 [2024-10-09 11:18:46.925909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.079 qpair failed and we were unable to recover it. 00:38:27.079 [2024-10-09 11:18:46.926235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.079 [2024-10-09 11:18:46.926246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.079 qpair failed and we were unable to recover it. 00:38:27.079 [2024-10-09 11:18:46.926513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.079 [2024-10-09 11:18:46.926526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.079 qpair failed and we were unable to recover it. 00:38:27.079 [2024-10-09 11:18:46.926873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.079 [2024-10-09 11:18:46.926888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.079 qpair failed and we were unable to recover it. 00:38:27.079 [2024-10-09 11:18:46.927204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.079 [2024-10-09 11:18:46.927216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.079 qpair failed and we were unable to recover it. 00:38:27.079 [2024-10-09 11:18:46.927518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.079 [2024-10-09 11:18:46.927529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.079 qpair failed and we were unable to recover it. 00:38:27.079 [2024-10-09 11:18:46.927843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.079 [2024-10-09 11:18:46.927856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.079 qpair failed and we were unable to recover it. 00:38:27.079 [2024-10-09 11:18:46.928147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.079 [2024-10-09 11:18:46.928158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.079 qpair failed and we were unable to recover it. 00:38:27.079 [2024-10-09 11:18:46.928475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.079 [2024-10-09 11:18:46.928488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.079 qpair failed and we were unable to recover it. 00:38:27.079 [2024-10-09 11:18:46.928811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.079 [2024-10-09 11:18:46.928823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.079 qpair failed and we were unable to recover it. 
00:38:27.079 [2024-10-09 11:18:46.929049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.079 [2024-10-09 11:18:46.929059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.079 qpair failed and we were unable to recover it. 00:38:27.079 [2024-10-09 11:18:46.929389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.079 [2024-10-09 11:18:46.929399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.079 qpair failed and we were unable to recover it. 00:38:27.079 [2024-10-09 11:18:46.929701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.079 [2024-10-09 11:18:46.929714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.079 qpair failed and we were unable to recover it. 00:38:27.079 [2024-10-09 11:18:46.929911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.079 [2024-10-09 11:18:46.929922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.079 qpair failed and we were unable to recover it. 00:38:27.079 [2024-10-09 11:18:46.930261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.079 [2024-10-09 11:18:46.930272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.079 qpair failed and we were unable to recover it. 00:38:27.079 [2024-10-09 11:18:46.930580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.079 [2024-10-09 11:18:46.930592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.079 qpair failed and we were unable to recover it. 00:38:27.079 [2024-10-09 11:18:46.930859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.079 [2024-10-09 11:18:46.930869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.079 qpair failed and we were unable to recover it. 00:38:27.079 [2024-10-09 11:18:46.931173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.079 [2024-10-09 11:18:46.931184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.079 qpair failed and we were unable to recover it. 00:38:27.079 [2024-10-09 11:18:46.931510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.079 [2024-10-09 11:18:46.931523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.079 qpair failed and we were unable to recover it. 00:38:27.079 [2024-10-09 11:18:46.931842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.079 [2024-10-09 11:18:46.931853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.079 qpair failed and we were unable to recover it. 
00:38:27.079 [2024-10-09 11:18:46.932044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.079 [2024-10-09 11:18:46.932054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.079 qpair failed and we were unable to recover it. 00:38:27.079 [2024-10-09 11:18:46.932349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.079 [2024-10-09 11:18:46.932360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.079 qpair failed and we were unable to recover it. 00:38:27.079 [2024-10-09 11:18:46.932712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.079 [2024-10-09 11:18:46.932723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.079 qpair failed and we were unable to recover it. 00:38:27.079 [2024-10-09 11:18:46.933019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.079 [2024-10-09 11:18:46.933030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.079 qpair failed and we were unable to recover it. 00:38:27.079 [2024-10-09 11:18:46.933334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.079 [2024-10-09 11:18:46.933346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.079 qpair failed and we were unable to recover it. 00:38:27.079 [2024-10-09 11:18:46.933647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.079 [2024-10-09 11:18:46.933658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.079 qpair failed and we were unable to recover it. 00:38:27.079 [2024-10-09 11:18:46.933949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.079 [2024-10-09 11:18:46.933962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.079 qpair failed and we were unable to recover it. 00:38:27.079 [2024-10-09 11:18:46.934263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.079 [2024-10-09 11:18:46.934274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.079 qpair failed and we were unable to recover it. 00:38:27.079 [2024-10-09 11:18:46.934473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.079 [2024-10-09 11:18:46.934485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.079 qpair failed and we were unable to recover it. 00:38:27.079 [2024-10-09 11:18:46.934785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.079 [2024-10-09 11:18:46.934796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.079 qpair failed and we were unable to recover it. 
00:38:27.079 [2024-10-09 11:18:46.935080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.079 [2024-10-09 11:18:46.935092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.079 qpair failed and we were unable to recover it. 00:38:27.079 [2024-10-09 11:18:46.935413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.079 [2024-10-09 11:18:46.935423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.079 qpair failed and we were unable to recover it. 00:38:27.079 [2024-10-09 11:18:46.935711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.079 [2024-10-09 11:18:46.935723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.079 qpair failed and we were unable to recover it. 00:38:27.079 [2024-10-09 11:18:46.936018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.080 [2024-10-09 11:18:46.936029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.080 qpair failed and we were unable to recover it. 00:38:27.080 [2024-10-09 11:18:46.936353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.080 [2024-10-09 11:18:46.936365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.080 qpair failed and we were unable to recover it. 00:38:27.080 [2024-10-09 11:18:46.936635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.080 [2024-10-09 11:18:46.936646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.080 qpair failed and we were unable to recover it. 00:38:27.080 [2024-10-09 11:18:46.936960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.080 [2024-10-09 11:18:46.936972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.080 qpair failed and we were unable to recover it. 00:38:27.080 [2024-10-09 11:18:46.937152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.080 [2024-10-09 11:18:46.937164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.080 qpair failed and we were unable to recover it. 00:38:27.080 [2024-10-09 11:18:46.937463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.080 [2024-10-09 11:18:46.937478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.080 qpair failed and we were unable to recover it. 00:38:27.080 [2024-10-09 11:18:46.937799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.080 [2024-10-09 11:18:46.937810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.080 qpair failed and we were unable to recover it. 
00:38:27.085 [2024-10-09 11:18:47.000033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.085 [2024-10-09 11:18:47.000045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.085 qpair failed and we were unable to recover it. 00:38:27.085 [2024-10-09 11:18:47.000350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.085 [2024-10-09 11:18:47.000362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.085 qpair failed and we were unable to recover it. 00:38:27.085 [2024-10-09 11:18:47.000649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.085 [2024-10-09 11:18:47.000661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.085 qpair failed and we were unable to recover it. 00:38:27.085 [2024-10-09 11:18:47.000979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.085 [2024-10-09 11:18:47.000991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.085 qpair failed and we were unable to recover it. 00:38:27.085 [2024-10-09 11:18:47.001321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.085 [2024-10-09 11:18:47.001332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.085 qpair failed and we were unable to recover it. 00:38:27.085 [2024-10-09 11:18:47.001667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.085 [2024-10-09 11:18:47.001680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.085 qpair failed and we were unable to recover it. 00:38:27.085 [2024-10-09 11:18:47.001967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.085 [2024-10-09 11:18:47.001978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.085 qpair failed and we were unable to recover it. 00:38:27.085 [2024-10-09 11:18:47.002279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.085 [2024-10-09 11:18:47.002293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.085 qpair failed and we were unable to recover it. 00:38:27.085 [2024-10-09 11:18:47.002621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.085 [2024-10-09 11:18:47.002632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.085 qpair failed and we were unable to recover it. 00:38:27.085 [2024-10-09 11:18:47.002932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.085 [2024-10-09 11:18:47.002942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.085 qpair failed and we were unable to recover it. 
00:38:27.085 [2024-10-09 11:18:47.003229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.085 [2024-10-09 11:18:47.003240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.085 qpair failed and we were unable to recover it. 00:38:27.085 [2024-10-09 11:18:47.003390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.085 [2024-10-09 11:18:47.003401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.085 qpair failed and we were unable to recover it. 00:38:27.085 [2024-10-09 11:18:47.003666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.085 [2024-10-09 11:18:47.003677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.085 qpair failed and we were unable to recover it. 00:38:27.085 [2024-10-09 11:18:47.003982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.085 [2024-10-09 11:18:47.003993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.085 qpair failed and we were unable to recover it. 00:38:27.085 [2024-10-09 11:18:47.004280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.085 [2024-10-09 11:18:47.004300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.085 qpair failed and we were unable to recover it. 00:38:27.085 [2024-10-09 11:18:47.004517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.085 [2024-10-09 11:18:47.004529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.085 qpair failed and we were unable to recover it. 00:38:27.085 [2024-10-09 11:18:47.004819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.085 [2024-10-09 11:18:47.004829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.085 qpair failed and we were unable to recover it. 00:38:27.085 [2024-10-09 11:18:47.005134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.085 [2024-10-09 11:18:47.005145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.085 qpair failed and we were unable to recover it. 00:38:27.085 [2024-10-09 11:18:47.005433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.085 [2024-10-09 11:18:47.005443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.085 qpair failed and we were unable to recover it. 00:38:27.085 [2024-10-09 11:18:47.005752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.085 [2024-10-09 11:18:47.005763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.085 qpair failed and we were unable to recover it. 
00:38:27.085 [2024-10-09 11:18:47.006057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.085 [2024-10-09 11:18:47.006067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.085 qpair failed and we were unable to recover it. 00:38:27.085 [2024-10-09 11:18:47.006370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.085 [2024-10-09 11:18:47.006381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.085 qpair failed and we were unable to recover it. 00:38:27.085 [2024-10-09 11:18:47.006685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.085 [2024-10-09 11:18:47.006696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.085 qpair failed and we were unable to recover it. 00:38:27.085 [2024-10-09 11:18:47.007001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.085 [2024-10-09 11:18:47.007013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.085 qpair failed and we were unable to recover it. 00:38:27.085 [2024-10-09 11:18:47.007316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.085 [2024-10-09 11:18:47.007329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.085 qpair failed and we were unable to recover it. 00:38:27.085 [2024-10-09 11:18:47.007674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.085 [2024-10-09 11:18:47.007685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.085 qpair failed and we were unable to recover it. 00:38:27.085 [2024-10-09 11:18:47.007997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.085 [2024-10-09 11:18:47.008008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.085 qpair failed and we were unable to recover it. 00:38:27.085 [2024-10-09 11:18:47.008326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.085 [2024-10-09 11:18:47.008337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.085 qpair failed and we were unable to recover it. 00:38:27.086 [2024-10-09 11:18:47.008585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.086 [2024-10-09 11:18:47.008596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.086 qpair failed and we were unable to recover it. 00:38:27.086 [2024-10-09 11:18:47.008886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.086 [2024-10-09 11:18:47.008898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.086 qpair failed and we were unable to recover it. 
00:38:27.086 [2024-10-09 11:18:47.009182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.086 [2024-10-09 11:18:47.009194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.086 qpair failed and we were unable to recover it. 00:38:27.086 [2024-10-09 11:18:47.009508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.086 [2024-10-09 11:18:47.009520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.086 qpair failed and we were unable to recover it. 00:38:27.086 [2024-10-09 11:18:47.009819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.086 [2024-10-09 11:18:47.009831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.086 qpair failed and we were unable to recover it. 00:38:27.086 [2024-10-09 11:18:47.010145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.086 [2024-10-09 11:18:47.010156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.086 qpair failed and we were unable to recover it. 00:38:27.086 [2024-10-09 11:18:47.010356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.086 [2024-10-09 11:18:47.010366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.086 qpair failed and we were unable to recover it. 00:38:27.086 [2024-10-09 11:18:47.010652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.086 [2024-10-09 11:18:47.010663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.086 qpair failed and we were unable to recover it. 00:38:27.086 [2024-10-09 11:18:47.010936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.086 [2024-10-09 11:18:47.010946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.086 qpair failed and we were unable to recover it. 00:38:27.086 [2024-10-09 11:18:47.011265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.086 [2024-10-09 11:18:47.011276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.086 qpair failed and we were unable to recover it. 00:38:27.086 [2024-10-09 11:18:47.011544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.086 [2024-10-09 11:18:47.011556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.086 qpair failed and we were unable to recover it. 00:38:27.086 [2024-10-09 11:18:47.011765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.086 [2024-10-09 11:18:47.011776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.086 qpair failed and we were unable to recover it. 
00:38:27.086 [2024-10-09 11:18:47.012034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.086 [2024-10-09 11:18:47.012045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.086 qpair failed and we were unable to recover it. 00:38:27.086 [2024-10-09 11:18:47.012367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.086 [2024-10-09 11:18:47.012378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.086 qpair failed and we were unable to recover it. 00:38:27.086 [2024-10-09 11:18:47.012641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.086 [2024-10-09 11:18:47.012652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.086 qpair failed and we were unable to recover it. 00:38:27.086 [2024-10-09 11:18:47.012978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.086 [2024-10-09 11:18:47.012989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.086 qpair failed and we were unable to recover it. 00:38:27.086 [2024-10-09 11:18:47.013233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.086 [2024-10-09 11:18:47.013243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.086 qpair failed and we were unable to recover it. 00:38:27.086 [2024-10-09 11:18:47.013561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.086 [2024-10-09 11:18:47.013572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.086 qpair failed and we were unable to recover it. 00:38:27.086 [2024-10-09 11:18:47.013852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.086 [2024-10-09 11:18:47.013863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.086 qpair failed and we were unable to recover it. 00:38:27.086 [2024-10-09 11:18:47.014235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.086 [2024-10-09 11:18:47.014246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.086 qpair failed and we were unable to recover it. 00:38:27.086 [2024-10-09 11:18:47.014552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.086 [2024-10-09 11:18:47.014564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.086 qpair failed and we were unable to recover it. 00:38:27.086 [2024-10-09 11:18:47.014904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.086 [2024-10-09 11:18:47.014916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.086 qpair failed and we were unable to recover it. 
00:38:27.086 [2024-10-09 11:18:47.015211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.086 [2024-10-09 11:18:47.015222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.086 qpair failed and we were unable to recover it. 00:38:27.086 [2024-10-09 11:18:47.015526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.086 [2024-10-09 11:18:47.015537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.086 qpair failed and we were unable to recover it. 00:38:27.086 [2024-10-09 11:18:47.015742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.086 [2024-10-09 11:18:47.015753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.086 qpair failed and we were unable to recover it. 00:38:27.086 [2024-10-09 11:18:47.016060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.086 [2024-10-09 11:18:47.016071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.086 qpair failed and we were unable to recover it. 00:38:27.086 [2024-10-09 11:18:47.016399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.086 [2024-10-09 11:18:47.016412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.086 qpair failed and we were unable to recover it. 00:38:27.086 [2024-10-09 11:18:47.016726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.086 [2024-10-09 11:18:47.016737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.086 qpair failed and we were unable to recover it. 00:38:27.086 [2024-10-09 11:18:47.017042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.086 [2024-10-09 11:18:47.017054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.086 qpair failed and we were unable to recover it. 00:38:27.086 [2024-10-09 11:18:47.017361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.086 [2024-10-09 11:18:47.017372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.086 qpair failed and we were unable to recover it. 00:38:27.086 [2024-10-09 11:18:47.017679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.086 [2024-10-09 11:18:47.017690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.086 qpair failed and we were unable to recover it. 00:38:27.086 [2024-10-09 11:18:47.018063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.086 [2024-10-09 11:18:47.018075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.086 qpair failed and we were unable to recover it. 
00:38:27.086 [2024-10-09 11:18:47.018267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.086 [2024-10-09 11:18:47.018279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.086 qpair failed and we were unable to recover it. 00:38:27.086 [2024-10-09 11:18:47.018598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.086 [2024-10-09 11:18:47.018610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.086 qpair failed and we were unable to recover it. 00:38:27.086 [2024-10-09 11:18:47.018810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.086 [2024-10-09 11:18:47.018821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.086 qpair failed and we were unable to recover it. 00:38:27.086 [2024-10-09 11:18:47.019143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.086 [2024-10-09 11:18:47.019154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.086 qpair failed and we were unable to recover it. 00:38:27.086 [2024-10-09 11:18:47.019459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.086 [2024-10-09 11:18:47.019474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.086 qpair failed and we were unable to recover it. 00:38:27.086 [2024-10-09 11:18:47.019773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.086 [2024-10-09 11:18:47.019784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.086 qpair failed and we were unable to recover it. 00:38:27.086 [2024-10-09 11:18:47.020070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.086 [2024-10-09 11:18:47.020082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.086 qpair failed and we were unable to recover it. 00:38:27.086 [2024-10-09 11:18:47.020387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.086 [2024-10-09 11:18:47.020398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.086 qpair failed and we were unable to recover it. 00:38:27.086 [2024-10-09 11:18:47.020702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.086 [2024-10-09 11:18:47.020714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.087 qpair failed and we were unable to recover it. 00:38:27.087 [2024-10-09 11:18:47.021033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.087 [2024-10-09 11:18:47.021043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.087 qpair failed and we were unable to recover it. 
00:38:27.087 [2024-10-09 11:18:47.021345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.087 [2024-10-09 11:18:47.021357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.087 qpair failed and we were unable to recover it. 00:38:27.087 [2024-10-09 11:18:47.021654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.087 [2024-10-09 11:18:47.021665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.087 qpair failed and we were unable to recover it. 00:38:27.087 [2024-10-09 11:18:47.021983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.087 [2024-10-09 11:18:47.021995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.087 qpair failed and we were unable to recover it. 00:38:27.087 [2024-10-09 11:18:47.022294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.087 [2024-10-09 11:18:47.022306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.087 qpair failed and we were unable to recover it. 00:38:27.087 [2024-10-09 11:18:47.022485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.087 [2024-10-09 11:18:47.022496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.087 qpair failed and we were unable to recover it. 00:38:27.087 [2024-10-09 11:18:47.022798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.087 [2024-10-09 11:18:47.022812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.087 qpair failed and we were unable to recover it. 00:38:27.087 [2024-10-09 11:18:47.023124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.087 [2024-10-09 11:18:47.023137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.087 qpair failed and we were unable to recover it. 00:38:27.087 [2024-10-09 11:18:47.023442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.087 [2024-10-09 11:18:47.023453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.087 qpair failed and we were unable to recover it. 00:38:27.087 [2024-10-09 11:18:47.023642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.087 [2024-10-09 11:18:47.023653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.087 qpair failed and we were unable to recover it. 00:38:27.087 [2024-10-09 11:18:47.023832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.087 [2024-10-09 11:18:47.023843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.087 qpair failed and we were unable to recover it. 
00:38:27.087 [2024-10-09 11:18:47.024105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.087 [2024-10-09 11:18:47.024115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.087 qpair failed and we were unable to recover it. 00:38:27.087 [2024-10-09 11:18:47.024476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.087 [2024-10-09 11:18:47.024487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.087 qpair failed and we were unable to recover it. 00:38:27.087 [2024-10-09 11:18:47.024830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.087 [2024-10-09 11:18:47.024841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.087 qpair failed and we were unable to recover it. 00:38:27.087 [2024-10-09 11:18:47.025200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.087 [2024-10-09 11:18:47.025211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.087 qpair failed and we were unable to recover it. 00:38:27.087 [2024-10-09 11:18:47.025509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.087 [2024-10-09 11:18:47.025520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.087 qpair failed and we were unable to recover it. 00:38:27.087 [2024-10-09 11:18:47.025845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.087 [2024-10-09 11:18:47.025855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.087 qpair failed and we were unable to recover it. 00:38:27.087 [2024-10-09 11:18:47.026147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.087 [2024-10-09 11:18:47.026157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.087 qpair failed and we were unable to recover it. 00:38:27.087 [2024-10-09 11:18:47.026435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.087 [2024-10-09 11:18:47.026446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.087 qpair failed and we were unable to recover it. 00:38:27.087 [2024-10-09 11:18:47.026755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.087 [2024-10-09 11:18:47.026767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.087 qpair failed and we were unable to recover it. 00:38:27.087 [2024-10-09 11:18:47.027087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.087 [2024-10-09 11:18:47.027098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.087 qpair failed and we were unable to recover it. 
00:38:27.087 [2024-10-09 11:18:47.027277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.087 [2024-10-09 11:18:47.027289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.087 qpair failed and we were unable to recover it. 00:38:27.087 [2024-10-09 11:18:47.027620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.087 [2024-10-09 11:18:47.027632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.087 qpair failed and we were unable to recover it. 00:38:27.087 [2024-10-09 11:18:47.027943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.087 [2024-10-09 11:18:47.027953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.087 qpair failed and we were unable to recover it. 00:38:27.087 [2024-10-09 11:18:47.028228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.087 [2024-10-09 11:18:47.028239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.087 qpair failed and we were unable to recover it. 00:38:27.087 [2024-10-09 11:18:47.028417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.087 [2024-10-09 11:18:47.028427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.087 qpair failed and we were unable to recover it. 00:38:27.087 [2024-10-09 11:18:47.028743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.087 [2024-10-09 11:18:47.028754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.087 qpair failed and we were unable to recover it. 00:38:27.087 [2024-10-09 11:18:47.028934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.087 [2024-10-09 11:18:47.028945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.087 qpair failed and we were unable to recover it. 00:38:27.087 [2024-10-09 11:18:47.029249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.087 [2024-10-09 11:18:47.029261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.087 qpair failed and we were unable to recover it. 00:38:27.087 [2024-10-09 11:18:47.029560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.087 [2024-10-09 11:18:47.029572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.087 qpair failed and we were unable to recover it. 00:38:27.087 [2024-10-09 11:18:47.029760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.087 [2024-10-09 11:18:47.029771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.087 qpair failed and we were unable to recover it. 
00:38:27.087 [2024-10-09 11:18:47.030076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.087 [2024-10-09 11:18:47.030087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.087 qpair failed and we were unable to recover it. 00:38:27.087 [2024-10-09 11:18:47.030271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.087 [2024-10-09 11:18:47.030282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.087 qpair failed and we were unable to recover it. 00:38:27.087 [2024-10-09 11:18:47.030660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.087 [2024-10-09 11:18:47.030673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.087 qpair failed and we were unable to recover it. 00:38:27.087 [2024-10-09 11:18:47.030887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.087 [2024-10-09 11:18:47.030897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.087 qpair failed and we were unable to recover it. 00:38:27.087 [2024-10-09 11:18:47.031181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.087 [2024-10-09 11:18:47.031192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.087 qpair failed and we were unable to recover it. 00:38:27.087 [2024-10-09 11:18:47.031531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.087 [2024-10-09 11:18:47.031542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.087 qpair failed and we were unable to recover it. 00:38:27.087 [2024-10-09 11:18:47.031710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.087 [2024-10-09 11:18:47.031720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.087 qpair failed and we were unable to recover it. 00:38:27.087 [2024-10-09 11:18:47.031910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.087 [2024-10-09 11:18:47.031921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.087 qpair failed and we were unable to recover it. 00:38:27.087 [2024-10-09 11:18:47.032250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.087 [2024-10-09 11:18:47.032262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.087 qpair failed and we were unable to recover it. 00:38:27.087 [2024-10-09 11:18:47.032569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.088 [2024-10-09 11:18:47.032581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.088 qpair failed and we were unable to recover it. 
00:38:27.088 [2024-10-09 11:18:47.032878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.088 [2024-10-09 11:18:47.032889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.088 qpair failed and we were unable to recover it. 00:38:27.088 [2024-10-09 11:18:47.033202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.088 [2024-10-09 11:18:47.033213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.088 qpair failed and we were unable to recover it. 00:38:27.088 [2024-10-09 11:18:47.033387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.088 [2024-10-09 11:18:47.033399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.088 qpair failed and we were unable to recover it. 00:38:27.088 [2024-10-09 11:18:47.033671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.088 [2024-10-09 11:18:47.033683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.088 qpair failed and we were unable to recover it. 00:38:27.088 [2024-10-09 11:18:47.033983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.088 [2024-10-09 11:18:47.033993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.088 qpair failed and we were unable to recover it. 00:38:27.088 [2024-10-09 11:18:47.034268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.088 [2024-10-09 11:18:47.034278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.088 qpair failed and we were unable to recover it. 00:38:27.088 [2024-10-09 11:18:47.034460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.088 [2024-10-09 11:18:47.034480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.088 qpair failed and we were unable to recover it. 00:38:27.437 [2024-10-09 11:18:47.034686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.437 [2024-10-09 11:18:47.034697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.437 qpair failed and we were unable to recover it. 00:38:27.437 [2024-10-09 11:18:47.035013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.437 [2024-10-09 11:18:47.035025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.437 qpair failed and we were unable to recover it. 00:38:27.437 [2024-10-09 11:18:47.035334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.437 [2024-10-09 11:18:47.035346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.437 qpair failed and we were unable to recover it. 
00:38:27.437 [2024-10-09 11:18:47.035541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.437 [2024-10-09 11:18:47.035553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.437 qpair failed and we were unable to recover it. 00:38:27.437 [2024-10-09 11:18:47.035870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.437 [2024-10-09 11:18:47.035881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.437 qpair failed and we were unable to recover it. 00:38:27.437 [2024-10-09 11:18:47.036075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.437 [2024-10-09 11:18:47.036086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.437 qpair failed and we were unable to recover it. 00:38:27.437 [2024-10-09 11:18:47.036388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.437 [2024-10-09 11:18:47.036399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.437 qpair failed and we were unable to recover it. 00:38:27.437 [2024-10-09 11:18:47.036655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.437 [2024-10-09 11:18:47.036665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.437 qpair failed and we were unable to recover it. 00:38:27.437 [2024-10-09 11:18:47.036993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.437 [2024-10-09 11:18:47.037009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.437 qpair failed and we were unable to recover it. 00:38:27.437 [2024-10-09 11:18:47.037282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.437 [2024-10-09 11:18:47.037293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.437 qpair failed and we were unable to recover it. 00:38:27.437 [2024-10-09 11:18:47.037625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.437 [2024-10-09 11:18:47.037637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.437 qpair failed and we were unable to recover it. 00:38:27.437 [2024-10-09 11:18:47.037708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.437 [2024-10-09 11:18:47.037719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.437 qpair failed and we were unable to recover it. 00:38:27.437 [2024-10-09 11:18:47.038057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.437 [2024-10-09 11:18:47.038071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.437 qpair failed and we were unable to recover it. 
00:38:27.437 [2024-10-09 11:18:47.038368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.437 [2024-10-09 11:18:47.038380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.437 qpair failed and we were unable to recover it.
[the identical triplet — posix_sock_create "connect() failed, errno = 111", nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420", "qpair failed and we were unable to recover it." — repeats for every retry from 11:18:47.038654 through 11:18:47.101106, all against the same tqpair and target]
00:38:27.442 [2024-10-09 11:18:47.101414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.442 [2024-10-09 11:18:47.101425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.442 qpair failed and we were unable to recover it. 00:38:27.442 [2024-10-09 11:18:47.101736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.442 [2024-10-09 11:18:47.101747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.442 qpair failed and we were unable to recover it. 00:38:27.442 [2024-10-09 11:18:47.102046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.442 [2024-10-09 11:18:47.102057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.442 qpair failed and we were unable to recover it. 00:38:27.442 [2024-10-09 11:18:47.102338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.442 [2024-10-09 11:18:47.102350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.442 qpair failed and we were unable to recover it. 00:38:27.442 [2024-10-09 11:18:47.102634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.442 [2024-10-09 11:18:47.102645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.442 qpair failed and we were unable to recover it. 00:38:27.442 [2024-10-09 11:18:47.102805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.442 [2024-10-09 11:18:47.102816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.442 qpair failed and we were unable to recover it. 00:38:27.442 [2024-10-09 11:18:47.103008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.442 [2024-10-09 11:18:47.103020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.442 qpair failed and we were unable to recover it. 00:38:27.442 [2024-10-09 11:18:47.103312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.442 [2024-10-09 11:18:47.103324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.442 qpair failed and we were unable to recover it. 00:38:27.442 [2024-10-09 11:18:47.103622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.442 [2024-10-09 11:18:47.103634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.442 qpair failed and we were unable to recover it. 00:38:27.443 [2024-10-09 11:18:47.103844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.443 [2024-10-09 11:18:47.103854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.443 qpair failed and we were unable to recover it. 
00:38:27.443 [2024-10-09 11:18:47.104178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.443 [2024-10-09 11:18:47.104188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.443 qpair failed and we were unable to recover it. 00:38:27.443 [2024-10-09 11:18:47.104481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.443 [2024-10-09 11:18:47.104493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.443 qpair failed and we were unable to recover it. 00:38:27.443 [2024-10-09 11:18:47.104817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.443 [2024-10-09 11:18:47.104829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.443 qpair failed and we were unable to recover it. 00:38:27.443 [2024-10-09 11:18:47.105134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.443 [2024-10-09 11:18:47.105146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.443 qpair failed and we were unable to recover it. 00:38:27.443 [2024-10-09 11:18:47.105425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.443 [2024-10-09 11:18:47.105436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.443 qpair failed and we were unable to recover it. 00:38:27.443 [2024-10-09 11:18:47.105766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.443 [2024-10-09 11:18:47.105778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.443 qpair failed and we were unable to recover it. 00:38:27.443 [2024-10-09 11:18:47.106075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.443 [2024-10-09 11:18:47.106086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.443 qpair failed and we were unable to recover it. 00:38:27.443 [2024-10-09 11:18:47.106386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.443 [2024-10-09 11:18:47.106397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.443 qpair failed and we were unable to recover it. 00:38:27.443 [2024-10-09 11:18:47.106695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.443 [2024-10-09 11:18:47.106706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.443 qpair failed and we were unable to recover it. 00:38:27.443 [2024-10-09 11:18:47.107059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.443 [2024-10-09 11:18:47.107070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.443 qpair failed and we were unable to recover it. 
00:38:27.443 [2024-10-09 11:18:47.107368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.443 [2024-10-09 11:18:47.107379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.443 qpair failed and we were unable to recover it. 00:38:27.443 [2024-10-09 11:18:47.107683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.443 [2024-10-09 11:18:47.107694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.443 qpair failed and we were unable to recover it. 00:38:27.443 [2024-10-09 11:18:47.108001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.443 [2024-10-09 11:18:47.108012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.443 qpair failed and we were unable to recover it. 00:38:27.443 [2024-10-09 11:18:47.108300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.443 [2024-10-09 11:18:47.108312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.443 qpair failed and we were unable to recover it. 00:38:27.443 [2024-10-09 11:18:47.108614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.443 [2024-10-09 11:18:47.108625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.443 qpair failed and we were unable to recover it. 00:38:27.443 [2024-10-09 11:18:47.108808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.443 [2024-10-09 11:18:47.108819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.443 qpair failed and we were unable to recover it. 00:38:27.443 [2024-10-09 11:18:47.109124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.443 [2024-10-09 11:18:47.109135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.443 qpair failed and we were unable to recover it. 00:38:27.443 [2024-10-09 11:18:47.109479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.443 [2024-10-09 11:18:47.109490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.443 qpair failed and we were unable to recover it. 00:38:27.443 [2024-10-09 11:18:47.109814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.443 [2024-10-09 11:18:47.109824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.443 qpair failed and we were unable to recover it. 00:38:27.443 [2024-10-09 11:18:47.110054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.443 [2024-10-09 11:18:47.110065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.443 qpair failed and we were unable to recover it. 
00:38:27.443 [2024-10-09 11:18:47.110372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.443 [2024-10-09 11:18:47.110383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.443 qpair failed and we were unable to recover it. 00:38:27.443 [2024-10-09 11:18:47.110682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.443 [2024-10-09 11:18:47.110694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.443 qpair failed and we were unable to recover it. 00:38:27.443 [2024-10-09 11:18:47.111002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.443 [2024-10-09 11:18:47.111013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.443 qpair failed and we were unable to recover it. 00:38:27.443 [2024-10-09 11:18:47.111307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.443 [2024-10-09 11:18:47.111319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.443 qpair failed and we were unable to recover it. 00:38:27.443 [2024-10-09 11:18:47.111643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.443 [2024-10-09 11:18:47.111654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.443 qpair failed and we were unable to recover it. 00:38:27.443 [2024-10-09 11:18:47.111932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.443 [2024-10-09 11:18:47.111944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.443 qpair failed and we were unable to recover it. 00:38:27.443 [2024-10-09 11:18:47.112245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.443 [2024-10-09 11:18:47.112255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.443 qpair failed and we were unable to recover it. 00:38:27.443 [2024-10-09 11:18:47.112552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.443 [2024-10-09 11:18:47.112563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.443 qpair failed and we were unable to recover it. 00:38:27.443 [2024-10-09 11:18:47.112797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.443 [2024-10-09 11:18:47.112808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.443 qpair failed and we were unable to recover it. 00:38:27.443 [2024-10-09 11:18:47.113130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.443 [2024-10-09 11:18:47.113142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.443 qpair failed and we were unable to recover it. 
00:38:27.443 [2024-10-09 11:18:47.113445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.443 [2024-10-09 11:18:47.113456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.443 qpair failed and we were unable to recover it. 00:38:27.443 [2024-10-09 11:18:47.113758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.443 [2024-10-09 11:18:47.113770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.443 qpair failed and we were unable to recover it. 00:38:27.443 [2024-10-09 11:18:47.114038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.443 [2024-10-09 11:18:47.114049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.443 qpair failed and we were unable to recover it. 00:38:27.443 [2024-10-09 11:18:47.114327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.443 [2024-10-09 11:18:47.114339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.443 qpair failed and we were unable to recover it. 00:38:27.443 [2024-10-09 11:18:47.114625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.443 [2024-10-09 11:18:47.114635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.443 qpair failed and we were unable to recover it. 00:38:27.443 [2024-10-09 11:18:47.114961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.443 [2024-10-09 11:18:47.114972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.443 qpair failed and we were unable to recover it. 00:38:27.443 [2024-10-09 11:18:47.115273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.443 [2024-10-09 11:18:47.115284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.443 qpair failed and we were unable to recover it. 00:38:27.443 [2024-10-09 11:18:47.115593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.443 [2024-10-09 11:18:47.115607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.443 qpair failed and we were unable to recover it. 00:38:27.443 [2024-10-09 11:18:47.115938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.443 [2024-10-09 11:18:47.115949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.443 qpair failed and we were unable to recover it. 00:38:27.443 [2024-10-09 11:18:47.116246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.443 [2024-10-09 11:18:47.116258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.444 qpair failed and we were unable to recover it. 
00:38:27.444 [2024-10-09 11:18:47.116562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.444 [2024-10-09 11:18:47.116572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.444 qpair failed and we were unable to recover it. 00:38:27.444 [2024-10-09 11:18:47.116851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.444 [2024-10-09 11:18:47.116870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.444 qpair failed and we were unable to recover it. 00:38:27.444 [2024-10-09 11:18:47.117192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.444 [2024-10-09 11:18:47.117203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.444 qpair failed and we were unable to recover it. 00:38:27.444 [2024-10-09 11:18:47.117508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.444 [2024-10-09 11:18:47.117519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.444 qpair failed and we were unable to recover it. 00:38:27.444 [2024-10-09 11:18:47.117822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.444 [2024-10-09 11:18:47.117833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.444 qpair failed and we were unable to recover it. 00:38:27.444 [2024-10-09 11:18:47.118131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.444 [2024-10-09 11:18:47.118142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.444 qpair failed and we were unable to recover it. 00:38:27.444 [2024-10-09 11:18:47.118444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.444 [2024-10-09 11:18:47.118456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.444 qpair failed and we were unable to recover it. 00:38:27.444 [2024-10-09 11:18:47.118781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.444 [2024-10-09 11:18:47.118793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.444 qpair failed and we were unable to recover it. 00:38:27.444 [2024-10-09 11:18:47.119089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.444 [2024-10-09 11:18:47.119101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.444 qpair failed and we were unable to recover it. 00:38:27.444 [2024-10-09 11:18:47.119382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.444 [2024-10-09 11:18:47.119394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.444 qpair failed and we were unable to recover it. 
00:38:27.444 [2024-10-09 11:18:47.119729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.444 [2024-10-09 11:18:47.119741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.444 qpair failed and we were unable to recover it. 00:38:27.444 [2024-10-09 11:18:47.120045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.444 [2024-10-09 11:18:47.120057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.444 qpair failed and we were unable to recover it. 00:38:27.444 [2024-10-09 11:18:47.120365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.444 [2024-10-09 11:18:47.120377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.444 qpair failed and we were unable to recover it. 00:38:27.444 [2024-10-09 11:18:47.120627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.444 [2024-10-09 11:18:47.120639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.444 qpair failed and we were unable to recover it. 00:38:27.444 [2024-10-09 11:18:47.120953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.444 [2024-10-09 11:18:47.120965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.444 qpair failed and we were unable to recover it. 00:38:27.444 [2024-10-09 11:18:47.121263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.444 [2024-10-09 11:18:47.121275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.444 qpair failed and we were unable to recover it. 00:38:27.444 [2024-10-09 11:18:47.121594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.444 [2024-10-09 11:18:47.121606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.444 qpair failed and we were unable to recover it. 00:38:27.444 [2024-10-09 11:18:47.122492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.444 [2024-10-09 11:18:47.122515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.444 qpair failed and we were unable to recover it. 00:38:27.444 [2024-10-09 11:18:47.122824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.444 [2024-10-09 11:18:47.122836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.444 qpair failed and we were unable to recover it. 00:38:27.444 [2024-10-09 11:18:47.123204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.444 [2024-10-09 11:18:47.123216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.444 qpair failed and we were unable to recover it. 
00:38:27.444 [2024-10-09 11:18:47.123383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.444 [2024-10-09 11:18:47.123395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.444 qpair failed and we were unable to recover it. 00:38:27.444 [2024-10-09 11:18:47.123611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.444 [2024-10-09 11:18:47.123622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.444 qpair failed and we were unable to recover it. 00:38:27.444 [2024-10-09 11:18:47.123836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.444 [2024-10-09 11:18:47.123847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.444 qpair failed and we were unable to recover it. 00:38:27.444 [2024-10-09 11:18:47.124173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.444 [2024-10-09 11:18:47.124184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.444 qpair failed and we were unable to recover it. 00:38:27.444 [2024-10-09 11:18:47.124486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.444 [2024-10-09 11:18:47.124499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.444 qpair failed and we were unable to recover it. 00:38:27.444 [2024-10-09 11:18:47.124874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.444 [2024-10-09 11:18:47.124886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.444 qpair failed and we were unable to recover it. 00:38:27.444 [2024-10-09 11:18:47.125183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.444 [2024-10-09 11:18:47.125195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.444 qpair failed and we were unable to recover it. 00:38:27.444 [2024-10-09 11:18:47.125514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.444 [2024-10-09 11:18:47.125526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.444 qpair failed and we were unable to recover it. 00:38:27.444 [2024-10-09 11:18:47.125815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.444 [2024-10-09 11:18:47.125826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.444 qpair failed and we were unable to recover it. 00:38:27.444 [2024-10-09 11:18:47.125923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.444 [2024-10-09 11:18:47.125933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.444 qpair failed and we were unable to recover it. 
00:38:27.444 [2024-10-09 11:18:47.126210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.444 [2024-10-09 11:18:47.126221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.444 qpair failed and we were unable to recover it. 00:38:27.444 [2024-10-09 11:18:47.126518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.444 [2024-10-09 11:18:47.126530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.444 qpair failed and we were unable to recover it. 00:38:27.444 [2024-10-09 11:18:47.126856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.444 [2024-10-09 11:18:47.126867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.444 qpair failed and we were unable to recover it. 00:38:27.444 [2024-10-09 11:18:47.127194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.444 [2024-10-09 11:18:47.127205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.444 qpair failed and we were unable to recover it. 00:38:27.444 [2024-10-09 11:18:47.127531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.444 [2024-10-09 11:18:47.127542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.444 qpair failed and we were unable to recover it. 00:38:27.444 [2024-10-09 11:18:47.127903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.444 [2024-10-09 11:18:47.127914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.444 qpair failed and we were unable to recover it. 00:38:27.444 [2024-10-09 11:18:47.128222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.444 [2024-10-09 11:18:47.128233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.444 qpair failed and we were unable to recover it. 00:38:27.444 [2024-10-09 11:18:47.128530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.444 [2024-10-09 11:18:47.128541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.444 qpair failed and we were unable to recover it. 00:38:27.444 [2024-10-09 11:18:47.128891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.444 [2024-10-09 11:18:47.128903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.444 qpair failed and we were unable to recover it. 00:38:27.444 [2024-10-09 11:18:47.129199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.444 [2024-10-09 11:18:47.129209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.445 qpair failed and we were unable to recover it. 
00:38:27.445 [2024-10-09 11:18:47.129521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.445 [2024-10-09 11:18:47.129532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.445 qpair failed and we were unable to recover it. 00:38:27.445 [2024-10-09 11:18:47.129847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.445 [2024-10-09 11:18:47.129857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.445 qpair failed and we were unable to recover it. 00:38:27.445 [2024-10-09 11:18:47.130166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.445 [2024-10-09 11:18:47.130176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.445 qpair failed and we were unable to recover it. 00:38:27.445 [2024-10-09 11:18:47.130363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.445 [2024-10-09 11:18:47.130374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.445 qpair failed and we were unable to recover it. 00:38:27.445 [2024-10-09 11:18:47.130580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.445 [2024-10-09 11:18:47.130592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.445 qpair failed and we were unable to recover it. 00:38:27.445 [2024-10-09 11:18:47.130885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.445 [2024-10-09 11:18:47.130896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.445 qpair failed and we were unable to recover it. 00:38:27.445 [2024-10-09 11:18:47.131201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.445 [2024-10-09 11:18:47.131213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.445 qpair failed and we were unable to recover it. 00:38:27.445 [2024-10-09 11:18:47.131531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.445 [2024-10-09 11:18:47.131543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.445 qpair failed and we were unable to recover it. 00:38:27.445 [2024-10-09 11:18:47.131853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.445 [2024-10-09 11:18:47.131865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.445 qpair failed and we were unable to recover it. 00:38:27.445 [2024-10-09 11:18:47.132144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.445 [2024-10-09 11:18:47.132154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.445 qpair failed and we were unable to recover it. 
00:38:27.445 [2024-10-09 11:18:47.132458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.445 [2024-10-09 11:18:47.132473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.445 qpair failed and we were unable to recover it. 00:38:27.445 [2024-10-09 11:18:47.132814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.445 [2024-10-09 11:18:47.132828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.445 qpair failed and we were unable to recover it. 00:38:27.445 [2024-10-09 11:18:47.133056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.445 [2024-10-09 11:18:47.133066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.445 qpair failed and we were unable to recover it. 00:38:27.445 [2024-10-09 11:18:47.133373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.445 [2024-10-09 11:18:47.133384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.445 qpair failed and we were unable to recover it. 00:38:27.445 [2024-10-09 11:18:47.133704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.445 [2024-10-09 11:18:47.133716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.445 qpair failed and we were unable to recover it. 00:38:27.445 [2024-10-09 11:18:47.134019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.445 [2024-10-09 11:18:47.134029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.445 qpair failed and we were unable to recover it. 00:38:27.445 [2024-10-09 11:18:47.134341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.445 [2024-10-09 11:18:47.134352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.445 qpair failed and we were unable to recover it. 00:38:27.445 [2024-10-09 11:18:47.134634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.445 [2024-10-09 11:18:47.134644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.445 qpair failed and we were unable to recover it. 00:38:27.445 [2024-10-09 11:18:47.134959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.445 [2024-10-09 11:18:47.134970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.445 qpair failed and we were unable to recover it. 00:38:27.445 [2024-10-09 11:18:47.135265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.445 [2024-10-09 11:18:47.135276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.445 qpair failed and we were unable to recover it. 
00:38:27.445 [2024-10-09 11:18:47.135574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.445 [2024-10-09 11:18:47.135586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.445 qpair failed and we were unable to recover it. 00:38:27.445 [2024-10-09 11:18:47.135782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.445 [2024-10-09 11:18:47.135793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.445 qpair failed and we were unable to recover it. 00:38:27.445 [2024-10-09 11:18:47.136123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.445 [2024-10-09 11:18:47.136135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.445 qpair failed and we were unable to recover it. 00:38:27.445 [2024-10-09 11:18:47.136430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.445 [2024-10-09 11:18:47.136442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.445 qpair failed and we were unable to recover it. 00:38:27.445 [2024-10-09 11:18:47.136718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.445 [2024-10-09 11:18:47.136729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.445 qpair failed and we were unable to recover it. 00:38:27.445 [2024-10-09 11:18:47.137020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.445 [2024-10-09 11:18:47.137032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.445 qpair failed and we were unable to recover it. 00:38:27.445 [2024-10-09 11:18:47.137345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.445 [2024-10-09 11:18:47.137356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.445 qpair failed and we were unable to recover it. 00:38:27.445 [2024-10-09 11:18:47.137678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.445 [2024-10-09 11:18:47.137690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.445 qpair failed and we were unable to recover it. 00:38:27.445 [2024-10-09 11:18:47.137885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.445 [2024-10-09 11:18:47.137896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.445 qpair failed and we were unable to recover it. 00:38:27.445 [2024-10-09 11:18:47.138059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.445 [2024-10-09 11:18:47.138071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.445 qpair failed and we were unable to recover it. 
00:38:27.445 [2024-10-09 11:18:47.138383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.445 [2024-10-09 11:18:47.138395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.445 qpair failed and we were unable to recover it. 00:38:27.445 [2024-10-09 11:18:47.138748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.446 [2024-10-09 11:18:47.138760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.446 qpair failed and we were unable to recover it. 00:38:27.446 [2024-10-09 11:18:47.139068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.446 [2024-10-09 11:18:47.139079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.446 qpair failed and we were unable to recover it. 00:38:27.446 [2024-10-09 11:18:47.139362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.446 [2024-10-09 11:18:47.139374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.446 qpair failed and we were unable to recover it. 00:38:27.446 [2024-10-09 11:18:47.139744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.446 [2024-10-09 11:18:47.139756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.446 qpair failed and we were unable to recover it. 00:38:27.446 [2024-10-09 11:18:47.140064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.446 [2024-10-09 11:18:47.140076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.446 qpair failed and we were unable to recover it. 00:38:27.446 [2024-10-09 11:18:47.140339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.446 [2024-10-09 11:18:47.140351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.446 qpair failed and we were unable to recover it. 00:38:27.446 [2024-10-09 11:18:47.140694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.446 [2024-10-09 11:18:47.140705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.446 qpair failed and we were unable to recover it. 00:38:27.446 [2024-10-09 11:18:47.141023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.446 [2024-10-09 11:18:47.141034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.446 qpair failed and we were unable to recover it. 00:38:27.446 [2024-10-09 11:18:47.141333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.446 [2024-10-09 11:18:47.141344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.446 qpair failed and we were unable to recover it. 
00:38:27.446 [2024-10-09 11:18:47.141541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.446 [2024-10-09 11:18:47.141552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420
00:38:27.446 qpair failed and we were unable to recover it.
00:38:27.446 [... the three messages above repeat for roughly 200 further connect attempts between 11:18:47.141893 and 11:18:47.207134, every one failing with errno = 111 ...]
00:38:27.452 [2024-10-09 11:18:47.207134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.452 [2024-10-09 11:18:47.207146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420
00:38:27.452 qpair failed and we were unable to recover it.
00:38:27.452 [2024-10-09 11:18:47.207533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.452 [2024-10-09 11:18:47.207544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.452 qpair failed and we were unable to recover it. 00:38:27.452 [2024-10-09 11:18:47.207835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.452 [2024-10-09 11:18:47.207846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.452 qpair failed and we were unable to recover it. 00:38:27.452 [2024-10-09 11:18:47.208165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.452 [2024-10-09 11:18:47.208175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.452 qpair failed and we were unable to recover it. 00:38:27.452 [2024-10-09 11:18:47.208454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.452 [2024-10-09 11:18:47.208469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.452 qpair failed and we were unable to recover it. 00:38:27.452 [2024-10-09 11:18:47.208679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.452 [2024-10-09 11:18:47.208690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.452 qpair failed and we were unable to recover it. 00:38:27.452 [2024-10-09 11:18:47.209010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.452 [2024-10-09 11:18:47.209021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.452 qpair failed and we were unable to recover it. 00:38:27.452 [2024-10-09 11:18:47.209333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.452 [2024-10-09 11:18:47.209345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.452 qpair failed and we were unable to recover it. 00:38:27.452 [2024-10-09 11:18:47.209642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.452 [2024-10-09 11:18:47.209654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.452 qpair failed and we were unable to recover it. 00:38:27.452 [2024-10-09 11:18:47.209853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.452 [2024-10-09 11:18:47.209863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.452 qpair failed and we were unable to recover it. 00:38:27.452 [2024-10-09 11:18:47.210143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.452 [2024-10-09 11:18:47.210154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.452 qpair failed and we were unable to recover it. 
00:38:27.453 [2024-10-09 11:18:47.210362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.453 [2024-10-09 11:18:47.210373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.453 qpair failed and we were unable to recover it. 00:38:27.453 [2024-10-09 11:18:47.210693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.453 [2024-10-09 11:18:47.210704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.453 qpair failed and we were unable to recover it. 00:38:27.453 [2024-10-09 11:18:47.211001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.453 [2024-10-09 11:18:47.211012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.453 qpair failed and we were unable to recover it. 00:38:27.453 [2024-10-09 11:18:47.211339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.453 [2024-10-09 11:18:47.211349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.453 qpair failed and we were unable to recover it. 00:38:27.453 [2024-10-09 11:18:47.211645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.453 [2024-10-09 11:18:47.211656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.453 qpair failed and we were unable to recover it. 00:38:27.453 [2024-10-09 11:18:47.211973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.453 [2024-10-09 11:18:47.211984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.453 qpair failed and we were unable to recover it. 00:38:27.453 [2024-10-09 11:18:47.212282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.453 [2024-10-09 11:18:47.212293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.453 qpair failed and we were unable to recover it. 00:38:27.453 [2024-10-09 11:18:47.212617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.453 [2024-10-09 11:18:47.212628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.453 qpair failed and we were unable to recover it. 00:38:27.453 [2024-10-09 11:18:47.212979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.453 [2024-10-09 11:18:47.212990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.453 qpair failed and we were unable to recover it. 00:38:27.453 [2024-10-09 11:18:47.213315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.453 [2024-10-09 11:18:47.213327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.453 qpair failed and we were unable to recover it. 
00:38:27.453 [2024-10-09 11:18:47.213703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.453 [2024-10-09 11:18:47.213715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.453 qpair failed and we were unable to recover it. 00:38:27.453 [2024-10-09 11:18:47.214022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.453 [2024-10-09 11:18:47.214034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.453 qpair failed and we were unable to recover it. 00:38:27.453 [2024-10-09 11:18:47.214345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.453 [2024-10-09 11:18:47.214357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.453 qpair failed and we were unable to recover it. 00:38:27.453 [2024-10-09 11:18:47.214546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.453 [2024-10-09 11:18:47.214557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.453 qpair failed and we were unable to recover it. 00:38:27.453 [2024-10-09 11:18:47.214936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.453 [2024-10-09 11:18:47.214948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.453 qpair failed and we were unable to recover it. 00:38:27.453 [2024-10-09 11:18:47.215278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.453 [2024-10-09 11:18:47.215289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.453 qpair failed and we were unable to recover it. 00:38:27.453 [2024-10-09 11:18:47.215496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.453 [2024-10-09 11:18:47.215508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.453 qpair failed and we were unable to recover it. 00:38:27.453 [2024-10-09 11:18:47.215727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.453 [2024-10-09 11:18:47.215738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.453 qpair failed and we were unable to recover it. 00:38:27.453 [2024-10-09 11:18:47.216069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.453 [2024-10-09 11:18:47.216081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.453 qpair failed and we were unable to recover it. 00:38:27.453 [2024-10-09 11:18:47.216357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.453 [2024-10-09 11:18:47.216368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.453 qpair failed and we were unable to recover it. 
00:38:27.453 [2024-10-09 11:18:47.216705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.453 [2024-10-09 11:18:47.216716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.453 qpair failed and we were unable to recover it. 00:38:27.453 [2024-10-09 11:18:47.217044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.453 [2024-10-09 11:18:47.217056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.453 qpair failed and we were unable to recover it. 00:38:27.453 [2024-10-09 11:18:47.217259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.453 [2024-10-09 11:18:47.217269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.453 qpair failed and we were unable to recover it. 00:38:27.453 [2024-10-09 11:18:47.217573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.453 [2024-10-09 11:18:47.217585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.453 qpair failed and we were unable to recover it. 00:38:27.453 [2024-10-09 11:18:47.217781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.453 [2024-10-09 11:18:47.217793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.453 qpair failed and we were unable to recover it. 00:38:27.453 [2024-10-09 11:18:47.218164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.453 [2024-10-09 11:18:47.218175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.453 qpair failed and we were unable to recover it. 00:38:27.453 [2024-10-09 11:18:47.218476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.453 [2024-10-09 11:18:47.218487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.453 qpair failed and we were unable to recover it. 00:38:27.453 [2024-10-09 11:18:47.218853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.453 [2024-10-09 11:18:47.218864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.453 qpair failed and we were unable to recover it. 00:38:27.453 [2024-10-09 11:18:47.219163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.453 [2024-10-09 11:18:47.219173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.453 qpair failed and we were unable to recover it. 00:38:27.453 [2024-10-09 11:18:47.219486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.453 [2024-10-09 11:18:47.219497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.453 qpair failed and we were unable to recover it. 
00:38:27.453 [2024-10-09 11:18:47.219710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.454 [2024-10-09 11:18:47.219721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.454 qpair failed and we were unable to recover it. 00:38:27.454 [2024-10-09 11:18:47.220000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.454 [2024-10-09 11:18:47.220010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.454 qpair failed and we were unable to recover it. 00:38:27.454 [2024-10-09 11:18:47.220327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.454 [2024-10-09 11:18:47.220338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.454 qpair failed and we were unable to recover it. 00:38:27.454 [2024-10-09 11:18:47.220642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.454 [2024-10-09 11:18:47.220653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.454 qpair failed and we were unable to recover it. 00:38:27.454 [2024-10-09 11:18:47.221024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.454 [2024-10-09 11:18:47.221036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.454 qpair failed and we were unable to recover it. 00:38:27.454 [2024-10-09 11:18:47.221314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.454 [2024-10-09 11:18:47.221326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.454 qpair failed and we were unable to recover it. 00:38:27.454 [2024-10-09 11:18:47.221657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.454 [2024-10-09 11:18:47.221670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.454 qpair failed and we were unable to recover it. 00:38:27.454 [2024-10-09 11:18:47.221845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.454 [2024-10-09 11:18:47.221856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.454 qpair failed and we were unable to recover it. 00:38:27.454 [2024-10-09 11:18:47.222060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.454 [2024-10-09 11:18:47.222071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.454 qpair failed and we were unable to recover it. 00:38:27.454 [2024-10-09 11:18:47.222392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.454 [2024-10-09 11:18:47.222403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.454 qpair failed and we were unable to recover it. 
00:38:27.454 [2024-10-09 11:18:47.222791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.454 [2024-10-09 11:18:47.222802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.454 qpair failed and we were unable to recover it. 00:38:27.454 [2024-10-09 11:18:47.223114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.454 [2024-10-09 11:18:47.223125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.454 qpair failed and we were unable to recover it. 00:38:27.454 [2024-10-09 11:18:47.223394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.454 [2024-10-09 11:18:47.223405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.454 qpair failed and we were unable to recover it. 00:38:27.454 [2024-10-09 11:18:47.223731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.454 [2024-10-09 11:18:47.223743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.454 qpair failed and we were unable to recover it. 00:38:27.454 [2024-10-09 11:18:47.224056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.454 [2024-10-09 11:18:47.224067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.454 qpair failed and we were unable to recover it. 00:38:27.454 [2024-10-09 11:18:47.224371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.454 [2024-10-09 11:18:47.224382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.454 qpair failed and we were unable to recover it. 00:38:27.454 [2024-10-09 11:18:47.224587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.454 [2024-10-09 11:18:47.224598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.454 qpair failed and we were unable to recover it. 00:38:27.454 [2024-10-09 11:18:47.224892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.454 [2024-10-09 11:18:47.224903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.454 qpair failed and we were unable to recover it. 00:38:27.454 [2024-10-09 11:18:47.225253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.454 [2024-10-09 11:18:47.225264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.454 qpair failed and we were unable to recover it. 00:38:27.454 [2024-10-09 11:18:47.225570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.454 [2024-10-09 11:18:47.225581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.454 qpair failed and we were unable to recover it. 
00:38:27.454 [2024-10-09 11:18:47.225874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.454 [2024-10-09 11:18:47.225884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.454 qpair failed and we were unable to recover it. 00:38:27.454 [2024-10-09 11:18:47.226182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.454 [2024-10-09 11:18:47.226194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.454 qpair failed and we were unable to recover it. 00:38:27.454 [2024-10-09 11:18:47.226365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.454 [2024-10-09 11:18:47.226377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.454 qpair failed and we were unable to recover it. 00:38:27.454 [2024-10-09 11:18:47.226675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.454 [2024-10-09 11:18:47.226686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.454 qpair failed and we were unable to recover it. 00:38:27.454 [2024-10-09 11:18:47.226920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.454 [2024-10-09 11:18:47.226931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.454 qpair failed and we were unable to recover it. 00:38:27.454 [2024-10-09 11:18:47.227137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.454 [2024-10-09 11:18:47.227147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.454 qpair failed and we were unable to recover it. 00:38:27.454 [2024-10-09 11:18:47.227449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.454 [2024-10-09 11:18:47.227460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.454 qpair failed and we were unable to recover it. 00:38:27.454 [2024-10-09 11:18:47.227670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.454 [2024-10-09 11:18:47.227681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.454 qpair failed and we were unable to recover it. 00:38:27.454 [2024-10-09 11:18:47.227997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.454 [2024-10-09 11:18:47.228009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.454 qpair failed and we were unable to recover it. 00:38:27.454 [2024-10-09 11:18:47.228332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.454 [2024-10-09 11:18:47.228344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.454 qpair failed and we were unable to recover it. 
00:38:27.454 [2024-10-09 11:18:47.228522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.454 [2024-10-09 11:18:47.228532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.454 qpair failed and we were unable to recover it. 00:38:27.454 [2024-10-09 11:18:47.228806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.455 [2024-10-09 11:18:47.228817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.455 qpair failed and we were unable to recover it. 00:38:27.455 [2024-10-09 11:18:47.228875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.455 [2024-10-09 11:18:47.228886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.455 qpair failed and we were unable to recover it. 00:38:27.455 [2024-10-09 11:18:47.229175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.455 [2024-10-09 11:18:47.229188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.455 qpair failed and we were unable to recover it. 00:38:27.455 [2024-10-09 11:18:47.229506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.455 [2024-10-09 11:18:47.229518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.455 qpair failed and we were unable to recover it. 00:38:27.455 [2024-10-09 11:18:47.229811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.455 [2024-10-09 11:18:47.229822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.455 qpair failed and we were unable to recover it. 00:38:27.455 [2024-10-09 11:18:47.230121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.455 [2024-10-09 11:18:47.230132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.455 qpair failed and we were unable to recover it. 00:38:27.455 [2024-10-09 11:18:47.230463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.455 [2024-10-09 11:18:47.230477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.455 qpair failed and we were unable to recover it. 00:38:27.455 [2024-10-09 11:18:47.230819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.455 [2024-10-09 11:18:47.230829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.455 qpair failed and we were unable to recover it. 00:38:27.455 [2024-10-09 11:18:47.231133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.455 [2024-10-09 11:18:47.231145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.455 qpair failed and we were unable to recover it. 
00:38:27.455 [2024-10-09 11:18:47.231443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.455 [2024-10-09 11:18:47.231454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.455 qpair failed and we were unable to recover it. 00:38:27.455 [2024-10-09 11:18:47.231663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.455 [2024-10-09 11:18:47.231673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.455 qpair failed and we were unable to recover it. 00:38:27.455 [2024-10-09 11:18:47.231996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.455 [2024-10-09 11:18:47.232006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.455 qpair failed and we were unable to recover it. 00:38:27.455 [2024-10-09 11:18:47.232313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.455 [2024-10-09 11:18:47.232326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.455 qpair failed and we were unable to recover it. 00:38:27.455 [2024-10-09 11:18:47.232626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.455 [2024-10-09 11:18:47.232638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.455 qpair failed and we were unable to recover it. 00:38:27.455 [2024-10-09 11:18:47.232997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.455 [2024-10-09 11:18:47.233008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.455 qpair failed and we were unable to recover it. 00:38:27.455 [2024-10-09 11:18:47.233234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.455 [2024-10-09 11:18:47.233245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.455 qpair failed and we were unable to recover it. 00:38:27.455 [2024-10-09 11:18:47.233446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.455 [2024-10-09 11:18:47.233457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.455 qpair failed and we were unable to recover it. 00:38:27.455 [2024-10-09 11:18:47.233634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.455 [2024-10-09 11:18:47.233645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.455 qpair failed and we were unable to recover it. 00:38:27.455 [2024-10-09 11:18:47.233932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.455 [2024-10-09 11:18:47.233943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.455 qpair failed and we were unable to recover it. 
00:38:27.455 [2024-10-09 11:18:47.234252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.455 [2024-10-09 11:18:47.234264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.455 qpair failed and we were unable to recover it. 00:38:27.455 [2024-10-09 11:18:47.234620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.455 [2024-10-09 11:18:47.234632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.455 qpair failed and we were unable to recover it. 00:38:27.455 [2024-10-09 11:18:47.234833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.455 [2024-10-09 11:18:47.234844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.455 qpair failed and we were unable to recover it. 00:38:27.455 [2024-10-09 11:18:47.235144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.455 [2024-10-09 11:18:47.235155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.455 qpair failed and we were unable to recover it. 00:38:27.455 [2024-10-09 11:18:47.235423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.455 [2024-10-09 11:18:47.235433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.455 qpair failed and we were unable to recover it. 00:38:27.455 [2024-10-09 11:18:47.235757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.455 [2024-10-09 11:18:47.235768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.455 qpair failed and we were unable to recover it. 00:38:27.455 [2024-10-09 11:18:47.236097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.455 [2024-10-09 11:18:47.236108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.455 qpair failed and we were unable to recover it. 00:38:27.455 [2024-10-09 11:18:47.236415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.455 [2024-10-09 11:18:47.236425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.455 qpair failed and we were unable to recover it. 00:38:27.455 [2024-10-09 11:18:47.236728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.455 [2024-10-09 11:18:47.236740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.455 qpair failed and we were unable to recover it. 00:38:27.455 [2024-10-09 11:18:47.237036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.455 [2024-10-09 11:18:47.237047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.455 qpair failed and we were unable to recover it. 
00:38:27.455 [2024-10-09 11:18:47.237344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.455 [2024-10-09 11:18:47.237355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.455 qpair failed and we were unable to recover it. 00:38:27.455 [2024-10-09 11:18:47.237713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.456 [2024-10-09 11:18:47.237724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.456 qpair failed and we were unable to recover it. 00:38:27.456 [2024-10-09 11:18:47.237997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.456 [2024-10-09 11:18:47.238008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.456 qpair failed and we were unable to recover it. 00:38:27.456 [2024-10-09 11:18:47.238313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.456 [2024-10-09 11:18:47.238323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.456 qpair failed and we were unable to recover it. 00:38:27.456 [2024-10-09 11:18:47.238645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.456 [2024-10-09 11:18:47.238657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.456 qpair failed and we were unable to recover it. 00:38:27.456 [2024-10-09 11:18:47.238961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.456 [2024-10-09 11:18:47.238972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.456 qpair failed and we were unable to recover it. 00:38:27.456 [2024-10-09 11:18:47.239276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.456 [2024-10-09 11:18:47.239288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.456 qpair failed and we were unable to recover it. 00:38:27.456 [2024-10-09 11:18:47.239592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.456 [2024-10-09 11:18:47.239604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.456 qpair failed and we were unable to recover it. 00:38:27.456 [2024-10-09 11:18:47.239912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.456 [2024-10-09 11:18:47.239923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.456 qpair failed and we were unable to recover it. 00:38:27.456 [2024-10-09 11:18:47.240234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.456 [2024-10-09 11:18:47.240245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.456 qpair failed and we were unable to recover it. 
00:38:27.456 [2024-10-09 11:18:47.240519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.456 [2024-10-09 11:18:47.240530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.456 qpair failed and we were unable to recover it. 00:38:27.456 [2024-10-09 11:18:47.240899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.456 [2024-10-09 11:18:47.240910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.456 qpair failed and we were unable to recover it. 00:38:27.456 [2024-10-09 11:18:47.241241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.456 [2024-10-09 11:18:47.241253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.456 qpair failed and we were unable to recover it. 00:38:27.456 [2024-10-09 11:18:47.241552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.456 [2024-10-09 11:18:47.241563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.456 qpair failed and we were unable to recover it. 00:38:27.456 [2024-10-09 11:18:47.241852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.456 [2024-10-09 11:18:47.241864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.456 qpair failed and we were unable to recover it. 00:38:27.456 [2024-10-09 11:18:47.242222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.456 [2024-10-09 11:18:47.242233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.456 qpair failed and we were unable to recover it. 00:38:27.456 [2024-10-09 11:18:47.242548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.456 [2024-10-09 11:18:47.242560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.456 qpair failed and we were unable to recover it. 00:38:27.456 [2024-10-09 11:18:47.242876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.456 [2024-10-09 11:18:47.242887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.456 qpair failed and we were unable to recover it. 00:38:27.456 [2024-10-09 11:18:47.243203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.456 [2024-10-09 11:18:47.243214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.456 qpair failed and we were unable to recover it. 00:38:27.456 [2024-10-09 11:18:47.243544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.456 [2024-10-09 11:18:47.243555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.456 qpair failed and we were unable to recover it. 
00:38:27.456 [2024-10-09 11:18:47.243854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.456 [2024-10-09 11:18:47.243866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.456 qpair failed and we were unable to recover it. 00:38:27.456 [2024-10-09 11:18:47.244170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.456 [2024-10-09 11:18:47.244180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.456 qpair failed and we were unable to recover it. 00:38:27.456 [2024-10-09 11:18:47.244488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.456 [2024-10-09 11:18:47.244508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.456 qpair failed and we were unable to recover it. 00:38:27.456 [2024-10-09 11:18:47.244797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.456 [2024-10-09 11:18:47.244807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.456 qpair failed and we were unable to recover it. 00:38:27.456 [2024-10-09 11:18:47.245146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.456 [2024-10-09 11:18:47.245157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.456 qpair failed and we were unable to recover it. 00:38:27.456 [2024-10-09 11:18:47.245431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.456 [2024-10-09 11:18:47.245442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.456 qpair failed and we were unable to recover it. 00:38:27.456 [2024-10-09 11:18:47.245729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.456 [2024-10-09 11:18:47.245740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.456 qpair failed and we were unable to recover it. 00:38:27.456 [2024-10-09 11:18:47.245947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.456 [2024-10-09 11:18:47.245957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.456 qpair failed and we were unable to recover it. 00:38:27.456 [2024-10-09 11:18:47.246273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.456 [2024-10-09 11:18:47.246284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.456 qpair failed and we were unable to recover it. 00:38:27.456 [2024-10-09 11:18:47.246556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.456 [2024-10-09 11:18:47.246567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.456 qpair failed and we were unable to recover it. 
00:38:27.457 [2024-10-09 11:18:47.246838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.457 [2024-10-09 11:18:47.246849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420
00:38:27.457 qpair failed and we were unable to recover it.
[log condensed: the three-line error cluster above repeats ~210 times in this interval (log timestamps 11:18:47.246838 through 11:18:47.308525, elapsed 00:38:27.457 to 00:38:27.462); every reconnect attempt to tqpair=0x1520360 at 10.0.0.2, port 4420 fails the same way with errno = 111, and the qpair is never recovered]
00:38:27.462 [2024-10-09 11:18:47.308822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.462 [2024-10-09 11:18:47.308832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.462 qpair failed and we were unable to recover it. 00:38:27.462 [2024-10-09 11:18:47.309051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.462 [2024-10-09 11:18:47.309061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.462 qpair failed and we were unable to recover it. 00:38:27.462 [2024-10-09 11:18:47.309371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.462 [2024-10-09 11:18:47.309381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.462 qpair failed and we were unable to recover it. 00:38:27.462 [2024-10-09 11:18:47.309684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.462 [2024-10-09 11:18:47.309694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.462 qpair failed and we were unable to recover it. 00:38:27.462 [2024-10-09 11:18:47.310017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.462 [2024-10-09 11:18:47.310029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.462 qpair failed and we were unable to recover it. 00:38:27.462 [2024-10-09 11:18:47.310344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.462 [2024-10-09 11:18:47.310356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.462 qpair failed and we were unable to recover it. 00:38:27.462 [2024-10-09 11:18:47.310772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.462 [2024-10-09 11:18:47.310784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.462 qpair failed and we were unable to recover it. 00:38:27.462 [2024-10-09 11:18:47.311096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.462 [2024-10-09 11:18:47.311108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.462 qpair failed and we were unable to recover it. 00:38:27.462 [2024-10-09 11:18:47.311424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.462 [2024-10-09 11:18:47.311435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.462 qpair failed and we were unable to recover it. 00:38:27.462 [2024-10-09 11:18:47.311752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.462 [2024-10-09 11:18:47.311764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.462 qpair failed and we were unable to recover it. 
00:38:27.462 [2024-10-09 11:18:47.312030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.462 [2024-10-09 11:18:47.312041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.462 qpair failed and we were unable to recover it. 00:38:27.462 [2024-10-09 11:18:47.312318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.462 [2024-10-09 11:18:47.312328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.462 qpair failed and we were unable to recover it. 00:38:27.462 [2024-10-09 11:18:47.312475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.462 [2024-10-09 11:18:47.312487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.462 qpair failed and we were unable to recover it. 00:38:27.462 [2024-10-09 11:18:47.312790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.462 [2024-10-09 11:18:47.312801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.462 qpair failed and we were unable to recover it. 00:38:27.462 [2024-10-09 11:18:47.313081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.462 [2024-10-09 11:18:47.313093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.462 qpair failed and we were unable to recover it. 00:38:27.462 [2024-10-09 11:18:47.313418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.462 [2024-10-09 11:18:47.313429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.462 qpair failed and we were unable to recover it. 00:38:27.462 [2024-10-09 11:18:47.313735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.462 [2024-10-09 11:18:47.313747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.462 qpair failed and we were unable to recover it. 00:38:27.462 [2024-10-09 11:18:47.314049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.462 [2024-10-09 11:18:47.314060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.462 qpair failed and we were unable to recover it. 00:38:27.462 [2024-10-09 11:18:47.314370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.462 [2024-10-09 11:18:47.314384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.462 qpair failed and we were unable to recover it. 00:38:27.462 [2024-10-09 11:18:47.314698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.462 [2024-10-09 11:18:47.314710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.462 qpair failed and we were unable to recover it. 
00:38:27.462 [2024-10-09 11:18:47.314886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.462 [2024-10-09 11:18:47.314897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.462 qpair failed and we were unable to recover it. 00:38:27.462 [2024-10-09 11:18:47.315205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.462 [2024-10-09 11:18:47.315218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.462 qpair failed and we were unable to recover it. 00:38:27.462 [2024-10-09 11:18:47.315497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.462 [2024-10-09 11:18:47.315510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.462 qpair failed and we were unable to recover it. 00:38:27.462 [2024-10-09 11:18:47.315829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.462 [2024-10-09 11:18:47.315840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.462 qpair failed and we were unable to recover it. 00:38:27.462 [2024-10-09 11:18:47.316143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.462 [2024-10-09 11:18:47.316155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.462 qpair failed and we were unable to recover it. 00:38:27.462 [2024-10-09 11:18:47.316427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.462 [2024-10-09 11:18:47.316438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.462 qpair failed and we were unable to recover it. 00:38:27.462 [2024-10-09 11:18:47.316767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.462 [2024-10-09 11:18:47.316780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.462 qpair failed and we were unable to recover it. 00:38:27.462 [2024-10-09 11:18:47.317106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.462 [2024-10-09 11:18:47.317118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.462 qpair failed and we were unable to recover it. 00:38:27.462 [2024-10-09 11:18:47.317427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.462 [2024-10-09 11:18:47.317439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.462 qpair failed and we were unable to recover it. 00:38:27.463 [2024-10-09 11:18:47.317759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.463 [2024-10-09 11:18:47.317770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.463 qpair failed and we were unable to recover it. 
00:38:27.463 [2024-10-09 11:18:47.318085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.463 [2024-10-09 11:18:47.318096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.463 qpair failed and we were unable to recover it. 00:38:27.463 [2024-10-09 11:18:47.318423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.463 [2024-10-09 11:18:47.318434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.463 qpair failed and we were unable to recover it. 00:38:27.463 [2024-10-09 11:18:47.318779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.463 [2024-10-09 11:18:47.318791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.463 qpair failed and we were unable to recover it. 00:38:27.463 [2024-10-09 11:18:47.319088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.463 [2024-10-09 11:18:47.319099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.463 qpair failed and we were unable to recover it. 00:38:27.463 [2024-10-09 11:18:47.319407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.463 [2024-10-09 11:18:47.319419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.463 qpair failed and we were unable to recover it. 00:38:27.463 [2024-10-09 11:18:47.319738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.463 [2024-10-09 11:18:47.319750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.463 qpair failed and we were unable to recover it. 00:38:27.463 [2024-10-09 11:18:47.320073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.463 [2024-10-09 11:18:47.320085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.463 qpair failed and we were unable to recover it. 00:38:27.463 [2024-10-09 11:18:47.320384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.463 [2024-10-09 11:18:47.320395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.463 qpair failed and we were unable to recover it. 00:38:27.463 [2024-10-09 11:18:47.320588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.463 [2024-10-09 11:18:47.320600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.463 qpair failed and we were unable to recover it. 00:38:27.463 [2024-10-09 11:18:47.320868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.463 [2024-10-09 11:18:47.320879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.463 qpair failed and we were unable to recover it. 
00:38:27.463 [2024-10-09 11:18:47.321201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.463 [2024-10-09 11:18:47.321213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.463 qpair failed and we were unable to recover it. 00:38:27.463 [2024-10-09 11:18:47.321520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.463 [2024-10-09 11:18:47.321532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.463 qpair failed and we were unable to recover it. 00:38:27.463 [2024-10-09 11:18:47.321856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.463 [2024-10-09 11:18:47.321867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.463 qpair failed and we were unable to recover it. 00:38:27.463 [2024-10-09 11:18:47.322158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.463 [2024-10-09 11:18:47.322169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.463 qpair failed and we were unable to recover it. 00:38:27.463 [2024-10-09 11:18:47.322341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.463 [2024-10-09 11:18:47.322351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.463 qpair failed and we were unable to recover it. 00:38:27.463 [2024-10-09 11:18:47.322641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.463 [2024-10-09 11:18:47.322656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.463 qpair failed and we were unable to recover it. 00:38:27.463 [2024-10-09 11:18:47.322965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.463 [2024-10-09 11:18:47.322976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.463 qpair failed and we were unable to recover it. 00:38:27.463 [2024-10-09 11:18:47.323173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.463 [2024-10-09 11:18:47.323184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.463 qpair failed and we were unable to recover it. 00:38:27.463 [2024-10-09 11:18:47.323495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.463 [2024-10-09 11:18:47.323507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.463 qpair failed and we were unable to recover it. 00:38:27.463 [2024-10-09 11:18:47.323806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.463 [2024-10-09 11:18:47.323817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.463 qpair failed and we were unable to recover it. 
00:38:27.463 [2024-10-09 11:18:47.324120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.463 [2024-10-09 11:18:47.324132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.463 qpair failed and we were unable to recover it. 00:38:27.463 [2024-10-09 11:18:47.324455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.463 [2024-10-09 11:18:47.324476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.463 qpair failed and we were unable to recover it. 00:38:27.463 [2024-10-09 11:18:47.324795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.463 [2024-10-09 11:18:47.324806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.463 qpair failed and we were unable to recover it. 00:38:27.463 [2024-10-09 11:18:47.325119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.463 [2024-10-09 11:18:47.325130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.463 qpair failed and we were unable to recover it. 00:38:27.463 [2024-10-09 11:18:47.325422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.463 [2024-10-09 11:18:47.325433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.463 qpair failed and we were unable to recover it. 00:38:27.463 [2024-10-09 11:18:47.325746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.463 [2024-10-09 11:18:47.325758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.463 qpair failed and we were unable to recover it. 00:38:27.463 [2024-10-09 11:18:47.326083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.463 [2024-10-09 11:18:47.326094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.463 qpair failed and we were unable to recover it. 00:38:27.463 [2024-10-09 11:18:47.326394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.463 [2024-10-09 11:18:47.326406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.463 qpair failed and we were unable to recover it. 00:38:27.463 [2024-10-09 11:18:47.326697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.463 [2024-10-09 11:18:47.326708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.463 qpair failed and we were unable to recover it. 00:38:27.463 [2024-10-09 11:18:47.327038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.463 [2024-10-09 11:18:47.327049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.463 qpair failed and we were unable to recover it. 
00:38:27.463 [2024-10-09 11:18:47.327355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.463 [2024-10-09 11:18:47.327365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.463 qpair failed and we were unable to recover it. 00:38:27.463 [2024-10-09 11:18:47.327645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.463 [2024-10-09 11:18:47.327656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.463 qpair failed and we were unable to recover it. 00:38:27.463 [2024-10-09 11:18:47.327967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.463 [2024-10-09 11:18:47.327979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.463 qpair failed and we were unable to recover it. 00:38:27.463 [2024-10-09 11:18:47.328329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.463 [2024-10-09 11:18:47.328341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.463 qpair failed and we were unable to recover it. 00:38:27.463 [2024-10-09 11:18:47.328665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.463 [2024-10-09 11:18:47.328676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.463 qpair failed and we were unable to recover it. 00:38:27.463 [2024-10-09 11:18:47.328872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.463 [2024-10-09 11:18:47.328883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.463 qpair failed and we were unable to recover it. 00:38:27.463 [2024-10-09 11:18:47.329204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.463 [2024-10-09 11:18:47.329215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.463 qpair failed and we were unable to recover it. 00:38:27.463 [2024-10-09 11:18:47.329572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.463 [2024-10-09 11:18:47.329584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.463 qpair failed and we were unable to recover it. 00:38:27.463 [2024-10-09 11:18:47.329747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.463 [2024-10-09 11:18:47.329758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.463 qpair failed and we were unable to recover it. 00:38:27.463 [2024-10-09 11:18:47.330022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.464 [2024-10-09 11:18:47.330035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.464 qpair failed and we were unable to recover it. 
00:38:27.464 [2024-10-09 11:18:47.330380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.464 [2024-10-09 11:18:47.330391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.464 qpair failed and we were unable to recover it. 00:38:27.464 [2024-10-09 11:18:47.330607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.464 [2024-10-09 11:18:47.330618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.464 qpair failed and we were unable to recover it. 00:38:27.464 [2024-10-09 11:18:47.330829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.464 [2024-10-09 11:18:47.330844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.464 qpair failed and we were unable to recover it. 00:38:27.464 [2024-10-09 11:18:47.331148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.464 [2024-10-09 11:18:47.331159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.464 qpair failed and we were unable to recover it. 00:38:27.464 [2024-10-09 11:18:47.331435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.464 [2024-10-09 11:18:47.331446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.464 qpair failed and we were unable to recover it. 00:38:27.464 [2024-10-09 11:18:47.331762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.464 [2024-10-09 11:18:47.331773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.464 qpair failed and we were unable to recover it. 00:38:27.464 [2024-10-09 11:18:47.332075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.464 [2024-10-09 11:18:47.332087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.464 qpair failed and we were unable to recover it. 00:38:27.464 [2024-10-09 11:18:47.332401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.464 [2024-10-09 11:18:47.332412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.464 qpair failed and we were unable to recover it. 00:38:27.464 [2024-10-09 11:18:47.332748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.464 [2024-10-09 11:18:47.332760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.464 qpair failed and we were unable to recover it. 00:38:27.464 [2024-10-09 11:18:47.333084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.464 [2024-10-09 11:18:47.333095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.464 qpair failed and we were unable to recover it. 
00:38:27.464 [2024-10-09 11:18:47.333402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.464 [2024-10-09 11:18:47.333414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.464 qpair failed and we were unable to recover it. 00:38:27.464 [2024-10-09 11:18:47.333809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.464 [2024-10-09 11:18:47.333821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.464 qpair failed and we were unable to recover it. 00:38:27.464 [2024-10-09 11:18:47.334121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.464 [2024-10-09 11:18:47.334132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.464 qpair failed and we were unable to recover it. 00:38:27.464 [2024-10-09 11:18:47.334468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.464 [2024-10-09 11:18:47.334480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.464 qpair failed and we were unable to recover it. 00:38:27.464 [2024-10-09 11:18:47.334756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.464 [2024-10-09 11:18:47.334767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.464 qpair failed and we were unable to recover it. 00:38:27.464 [2024-10-09 11:18:47.335007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.464 [2024-10-09 11:18:47.335018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.464 qpair failed and we were unable to recover it. 00:38:27.464 [2024-10-09 11:18:47.335212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.464 [2024-10-09 11:18:47.335223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.464 qpair failed and we were unable to recover it. 00:38:27.464 [2024-10-09 11:18:47.335541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.464 [2024-10-09 11:18:47.335552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.464 qpair failed and we were unable to recover it. 00:38:27.464 [2024-10-09 11:18:47.335860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.464 [2024-10-09 11:18:47.335874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.464 qpair failed and we were unable to recover it. 00:38:27.464 [2024-10-09 11:18:47.336178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.464 [2024-10-09 11:18:47.336192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.464 qpair failed and we were unable to recover it. 
00:38:27.464 [2024-10-09 11:18:47.336376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.464 [2024-10-09 11:18:47.336388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.464 qpair failed and we were unable to recover it. 00:38:27.464 [2024-10-09 11:18:47.336628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.464 [2024-10-09 11:18:47.336639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.464 qpair failed and we were unable to recover it. 00:38:27.464 [2024-10-09 11:18:47.336976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.464 [2024-10-09 11:18:47.336988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.464 qpair failed and we were unable to recover it. 00:38:27.464 [2024-10-09 11:18:47.337351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.464 [2024-10-09 11:18:47.337363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.464 qpair failed and we were unable to recover it. 00:38:27.464 [2024-10-09 11:18:47.337637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.464 [2024-10-09 11:18:47.337650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.464 qpair failed and we were unable to recover it. 00:38:27.464 [2024-10-09 11:18:47.337933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.464 [2024-10-09 11:18:47.337944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.464 qpair failed and we were unable to recover it. 00:38:27.464 [2024-10-09 11:18:47.338230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.464 [2024-10-09 11:18:47.338242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.464 qpair failed and we were unable to recover it. 00:38:27.464 [2024-10-09 11:18:47.338566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.464 [2024-10-09 11:18:47.338578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.464 qpair failed and we were unable to recover it. 00:38:27.464 [2024-10-09 11:18:47.338907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.464 [2024-10-09 11:18:47.338918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.464 qpair failed and we were unable to recover it. 00:38:27.464 [2024-10-09 11:18:47.339244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.464 [2024-10-09 11:18:47.339256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.464 qpair failed and we were unable to recover it. 
00:38:27.464 [2024-10-09 11:18:47.339539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.464 [2024-10-09 11:18:47.339551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.464 qpair failed and we were unable to recover it. 00:38:27.464 [2024-10-09 11:18:47.339867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.464 [2024-10-09 11:18:47.339878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.464 qpair failed and we were unable to recover it. 00:38:27.464 [2024-10-09 11:18:47.340079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.464 [2024-10-09 11:18:47.340089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.464 qpair failed and we were unable to recover it. 00:38:27.464 [2024-10-09 11:18:47.340415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.464 [2024-10-09 11:18:47.340426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.464 qpair failed and we were unable to recover it. 00:38:27.464 [2024-10-09 11:18:47.340639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.464 [2024-10-09 11:18:47.340650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.464 qpair failed and we were unable to recover it. 00:38:27.464 [2024-10-09 11:18:47.340972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.464 [2024-10-09 11:18:47.340984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.464 qpair failed and we were unable to recover it. 00:38:27.464 [2024-10-09 11:18:47.341284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.464 [2024-10-09 11:18:47.341295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.464 qpair failed and we were unable to recover it. 00:38:27.464 [2024-10-09 11:18:47.341584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.464 [2024-10-09 11:18:47.341596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.464 qpair failed and we were unable to recover it. 00:38:27.464 [2024-10-09 11:18:47.341868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.464 [2024-10-09 11:18:47.341879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.464 qpair failed and we were unable to recover it. 00:38:27.464 [2024-10-09 11:18:47.342188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.465 [2024-10-09 11:18:47.342199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.465 qpair failed and we were unable to recover it. 
00:38:27.465 [2024-10-09 11:18:47.342506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.465 [2024-10-09 11:18:47.342518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.465 qpair failed and we were unable to recover it. 00:38:27.465 [2024-10-09 11:18:47.342812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.465 [2024-10-09 11:18:47.342822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.465 qpair failed and we were unable to recover it. 00:38:27.465 [2024-10-09 11:18:47.343099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.465 [2024-10-09 11:18:47.343110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.465 qpair failed and we were unable to recover it. 00:38:27.465 [2024-10-09 11:18:47.343410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.465 [2024-10-09 11:18:47.343423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.465 qpair failed and we were unable to recover it. 00:38:27.465 [2024-10-09 11:18:47.343726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.465 [2024-10-09 11:18:47.343738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.465 qpair failed and we were unable to recover it. 00:38:27.465 [2024-10-09 11:18:47.344022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.465 [2024-10-09 11:18:47.344034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.465 qpair failed and we were unable to recover it. 00:38:27.465 [2024-10-09 11:18:47.344317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.465 [2024-10-09 11:18:47.344330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.465 qpair failed and we were unable to recover it. 00:38:27.465 [2024-10-09 11:18:47.344632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.465 [2024-10-09 11:18:47.344644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.465 qpair failed and we were unable to recover it. 00:38:27.465 [2024-10-09 11:18:47.344940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.465 [2024-10-09 11:18:47.344952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.465 qpair failed and we were unable to recover it. 00:38:27.465 [2024-10-09 11:18:47.345254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.465 [2024-10-09 11:18:47.345265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.465 qpair failed and we were unable to recover it. 
00:38:27.465 [2024-10-09 11:18:47.345592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.465 [2024-10-09 11:18:47.345604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.465 qpair failed and we were unable to recover it. 00:38:27.465 [2024-10-09 11:18:47.345943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.465 [2024-10-09 11:18:47.345954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.465 qpair failed and we were unable to recover it. 00:38:27.465 [2024-10-09 11:18:47.346251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.465 [2024-10-09 11:18:47.346262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.465 qpair failed and we were unable to recover it. 00:38:27.465 [2024-10-09 11:18:47.346563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.465 [2024-10-09 11:18:47.346574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.465 qpair failed and we were unable to recover it. 00:38:27.465 [2024-10-09 11:18:47.346906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.465 [2024-10-09 11:18:47.346918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.465 qpair failed and we were unable to recover it. 00:38:27.465 [2024-10-09 11:18:47.347218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.465 [2024-10-09 11:18:47.347229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.465 qpair failed and we were unable to recover it. 00:38:27.465 [2024-10-09 11:18:47.347561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.465 [2024-10-09 11:18:47.347572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.465 qpair failed and we were unable to recover it. 00:38:27.465 [2024-10-09 11:18:47.347771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.465 [2024-10-09 11:18:47.347782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.465 qpair failed and we were unable to recover it. 00:38:27.465 [2024-10-09 11:18:47.348119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.465 [2024-10-09 11:18:47.348131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.465 qpair failed and we were unable to recover it. 00:38:27.465 [2024-10-09 11:18:47.348319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.465 [2024-10-09 11:18:47.348331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.465 qpair failed and we were unable to recover it. 
00:38:27.465 [2024-10-09 11:18:47.348615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.465 [2024-10-09 11:18:47.348634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420
00:38:27.465 qpair failed and we were unable to recover it.
[... the same three-record error (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats ~200 more times between 11:18:47.348 and 11:18:47.411 ...]
00:38:27.470 [2024-10-09 11:18:47.411297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.470 [2024-10-09 11:18:47.411308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420
00:38:27.470 qpair failed and we were unable to recover it.
00:38:27.470 [2024-10-09 11:18:47.411592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.470 [2024-10-09 11:18:47.411603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.470 qpair failed and we were unable to recover it. 00:38:27.470 [2024-10-09 11:18:47.411926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.471 [2024-10-09 11:18:47.411937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.471 qpair failed and we were unable to recover it. 00:38:27.748 [2024-10-09 11:18:47.412243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.748 [2024-10-09 11:18:47.412256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.748 qpair failed and we were unable to recover it. 00:38:27.748 [2024-10-09 11:18:47.412462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.748 [2024-10-09 11:18:47.412477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.748 qpair failed and we were unable to recover it. 00:38:27.748 [2024-10-09 11:18:47.412690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.748 [2024-10-09 11:18:47.412700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.748 qpair failed and we were unable to recover it. 00:38:27.748 [2024-10-09 11:18:47.413005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.748 [2024-10-09 11:18:47.413015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.748 qpair failed and we were unable to recover it. 00:38:27.748 [2024-10-09 11:18:47.413298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.748 [2024-10-09 11:18:47.413308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.748 qpair failed and we were unable to recover it. 00:38:27.748 [2024-10-09 11:18:47.413621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.748 [2024-10-09 11:18:47.413633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.748 qpair failed and we were unable to recover it. 00:38:27.748 [2024-10-09 11:18:47.413975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.748 [2024-10-09 11:18:47.413985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.748 qpair failed and we were unable to recover it. 00:38:27.748 [2024-10-09 11:18:47.414346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.748 [2024-10-09 11:18:47.414357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.748 qpair failed and we were unable to recover it. 
00:38:27.748 [2024-10-09 11:18:47.414628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.748 [2024-10-09 11:18:47.414639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.748 qpair failed and we were unable to recover it. 00:38:27.748 [2024-10-09 11:18:47.414943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.748 [2024-10-09 11:18:47.414954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.748 qpair failed and we were unable to recover it. 00:38:27.748 [2024-10-09 11:18:47.415166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.748 [2024-10-09 11:18:47.415177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.748 qpair failed and we were unable to recover it. 00:38:27.748 [2024-10-09 11:18:47.415533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.748 [2024-10-09 11:18:47.415544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.748 qpair failed and we were unable to recover it. 00:38:27.748 [2024-10-09 11:18:47.415899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.748 [2024-10-09 11:18:47.415910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.748 qpair failed and we were unable to recover it. 00:38:27.748 [2024-10-09 11:18:47.416081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.748 [2024-10-09 11:18:47.416092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.748 qpair failed and we were unable to recover it. 00:38:27.748 [2024-10-09 11:18:47.416289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.748 [2024-10-09 11:18:47.416302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.748 qpair failed and we were unable to recover it. 00:38:27.748 [2024-10-09 11:18:47.416715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.748 [2024-10-09 11:18:47.416728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.748 qpair failed and we were unable to recover it. 00:38:27.748 [2024-10-09 11:18:47.416816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.748 [2024-10-09 11:18:47.416827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:27.748 qpair failed and we were unable to recover it. 00:38:27.748 [2024-10-09 11:18:47.417178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.748 [2024-10-09 11:18:47.417207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.748 qpair failed and we were unable to recover it. 
00:38:27.748 [2024-10-09 11:18:47.417691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.748 [2024-10-09 11:18:47.417720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.748 qpair failed and we were unable to recover it. 00:38:27.748 [2024-10-09 11:18:47.418043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.748 [2024-10-09 11:18:47.418052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.748 qpair failed and we were unable to recover it. 00:38:27.748 [2024-10-09 11:18:47.418256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.748 [2024-10-09 11:18:47.418264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.748 qpair failed and we were unable to recover it. 00:38:27.748 [2024-10-09 11:18:47.418686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.748 [2024-10-09 11:18:47.418716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.749 qpair failed and we were unable to recover it. 00:38:27.749 [2024-10-09 11:18:47.418913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.749 [2024-10-09 11:18:47.418922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.749 qpair failed and we were unable to recover it. 00:38:27.749 [2024-10-09 11:18:47.419257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.749 [2024-10-09 11:18:47.419265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.749 qpair failed and we were unable to recover it. 00:38:27.749 [2024-10-09 11:18:47.419476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.749 [2024-10-09 11:18:47.419485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.749 qpair failed and we were unable to recover it. 00:38:27.749 [2024-10-09 11:18:47.419815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.749 [2024-10-09 11:18:47.419823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.749 qpair failed and we were unable to recover it. 00:38:27.749 [2024-10-09 11:18:47.420182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.749 [2024-10-09 11:18:47.420189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.749 qpair failed and we were unable to recover it. 00:38:27.749 [2024-10-09 11:18:47.420506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.749 [2024-10-09 11:18:47.420515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.749 qpair failed and we were unable to recover it. 
00:38:27.749 [2024-10-09 11:18:47.420828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.749 [2024-10-09 11:18:47.420836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.749 qpair failed and we were unable to recover it. 00:38:27.749 [2024-10-09 11:18:47.421136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.749 [2024-10-09 11:18:47.421146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.749 qpair failed and we were unable to recover it. 00:38:27.749 [2024-10-09 11:18:47.421334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.749 [2024-10-09 11:18:47.421343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.749 qpair failed and we were unable to recover it. 00:38:27.749 [2024-10-09 11:18:47.421654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.749 [2024-10-09 11:18:47.421662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.749 qpair failed and we were unable to recover it. 00:38:27.749 [2024-10-09 11:18:47.421956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.749 [2024-10-09 11:18:47.421964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.749 qpair failed and we were unable to recover it. 00:38:27.749 [2024-10-09 11:18:47.422275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.749 [2024-10-09 11:18:47.422283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.749 qpair failed and we were unable to recover it. 00:38:27.749 [2024-10-09 11:18:47.422580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.749 [2024-10-09 11:18:47.422588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.749 qpair failed and we were unable to recover it. 00:38:27.749 [2024-10-09 11:18:47.422920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.749 [2024-10-09 11:18:47.422929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.749 qpair failed and we were unable to recover it. 00:38:27.749 [2024-10-09 11:18:47.423012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.749 [2024-10-09 11:18:47.423020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.749 qpair failed and we were unable to recover it. 00:38:27.749 [2024-10-09 11:18:47.423176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.749 [2024-10-09 11:18:47.423185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.749 qpair failed and we were unable to recover it. 
00:38:27.749 [2024-10-09 11:18:47.423553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.749 [2024-10-09 11:18:47.423561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.749 qpair failed and we were unable to recover it. 00:38:27.749 [2024-10-09 11:18:47.423870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.749 [2024-10-09 11:18:47.423879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.749 qpair failed and we were unable to recover it. 00:38:27.749 [2024-10-09 11:18:47.424156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.749 [2024-10-09 11:18:47.424164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.749 qpair failed and we were unable to recover it. 00:38:27.749 [2024-10-09 11:18:47.424482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.749 [2024-10-09 11:18:47.424490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.749 qpair failed and we were unable to recover it. 00:38:27.749 [2024-10-09 11:18:47.424741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.749 [2024-10-09 11:18:47.424749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.749 qpair failed and we were unable to recover it. 00:38:27.749 [2024-10-09 11:18:47.425055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.749 [2024-10-09 11:18:47.425063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.749 qpair failed and we were unable to recover it. 00:38:27.749 [2024-10-09 11:18:47.425380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.749 [2024-10-09 11:18:47.425388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.749 qpair failed and we were unable to recover it. 00:38:27.749 [2024-10-09 11:18:47.425765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.749 [2024-10-09 11:18:47.425773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.749 qpair failed and we were unable to recover it. 00:38:27.749 [2024-10-09 11:18:47.426103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.749 [2024-10-09 11:18:47.426112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.749 qpair failed and we were unable to recover it. 00:38:27.749 [2024-10-09 11:18:47.426287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.749 [2024-10-09 11:18:47.426294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.749 qpair failed and we were unable to recover it. 
00:38:27.749 [2024-10-09 11:18:47.426395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.749 [2024-10-09 11:18:47.426401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.749 qpair failed and we were unable to recover it. 00:38:27.749 [2024-10-09 11:18:47.426770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.749 [2024-10-09 11:18:47.426778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.749 qpair failed and we were unable to recover it. 00:38:27.749 [2024-10-09 11:18:47.426957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.749 [2024-10-09 11:18:47.426965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.749 qpair failed and we were unable to recover it. 00:38:27.749 [2024-10-09 11:18:47.427261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.749 [2024-10-09 11:18:47.427269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.749 qpair failed and we were unable to recover it. 00:38:27.749 [2024-10-09 11:18:47.427586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.749 [2024-10-09 11:18:47.427594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.749 qpair failed and we were unable to recover it. 00:38:27.749 [2024-10-09 11:18:47.427930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.749 [2024-10-09 11:18:47.427938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.749 qpair failed and we were unable to recover it. 00:38:27.749 [2024-10-09 11:18:47.428243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.749 [2024-10-09 11:18:47.428251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.749 qpair failed and we were unable to recover it. 00:38:27.749 [2024-10-09 11:18:47.428492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.749 [2024-10-09 11:18:47.428501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.749 qpair failed and we were unable to recover it. 00:38:27.749 [2024-10-09 11:18:47.428800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.749 [2024-10-09 11:18:47.428809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.749 qpair failed and we were unable to recover it. 00:38:27.749 [2024-10-09 11:18:47.429159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.749 [2024-10-09 11:18:47.429167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.749 qpair failed and we were unable to recover it. 
00:38:27.749 [2024-10-09 11:18:47.429443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.749 [2024-10-09 11:18:47.429451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.749 qpair failed and we were unable to recover it. 00:38:27.749 [2024-10-09 11:18:47.429770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.749 [2024-10-09 11:18:47.429778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.749 qpair failed and we were unable to recover it. 00:38:27.749 [2024-10-09 11:18:47.430055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.749 [2024-10-09 11:18:47.430063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.749 qpair failed and we were unable to recover it. 00:38:27.749 [2024-10-09 11:18:47.430391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.749 [2024-10-09 11:18:47.430400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.750 qpair failed and we were unable to recover it. 00:38:27.750 [2024-10-09 11:18:47.430710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.750 [2024-10-09 11:18:47.430718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.750 qpair failed and we were unable to recover it. 00:38:27.750 [2024-10-09 11:18:47.430928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.750 [2024-10-09 11:18:47.430936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.750 qpair failed and we were unable to recover it. 00:38:27.750 [2024-10-09 11:18:47.431112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.750 [2024-10-09 11:18:47.431119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.750 qpair failed and we were unable to recover it. 00:38:27.750 [2024-10-09 11:18:47.431429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.750 [2024-10-09 11:18:47.431438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.750 qpair failed and we were unable to recover it. 00:38:27.750 [2024-10-09 11:18:47.431725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.750 [2024-10-09 11:18:47.431733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.750 qpair failed and we were unable to recover it. 00:38:27.750 [2024-10-09 11:18:47.432050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.750 [2024-10-09 11:18:47.432060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.750 qpair failed and we were unable to recover it. 
00:38:27.750 [2024-10-09 11:18:47.432247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.750 [2024-10-09 11:18:47.432256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.750 qpair failed and we were unable to recover it. 00:38:27.750 [2024-10-09 11:18:47.432551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.750 [2024-10-09 11:18:47.432559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.750 qpair failed and we were unable to recover it. 00:38:27.750 [2024-10-09 11:18:47.432748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.750 [2024-10-09 11:18:47.432756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.750 qpair failed and we were unable to recover it. 00:38:27.750 [2024-10-09 11:18:47.433119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.750 [2024-10-09 11:18:47.433127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.750 qpair failed and we were unable to recover it. 00:38:27.750 [2024-10-09 11:18:47.433337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.750 [2024-10-09 11:18:47.433345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.750 qpair failed and we were unable to recover it. 00:38:27.750 [2024-10-09 11:18:47.433544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.750 [2024-10-09 11:18:47.433552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.750 qpair failed and we were unable to recover it. 00:38:27.750 [2024-10-09 11:18:47.433837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.750 [2024-10-09 11:18:47.433846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.750 qpair failed and we were unable to recover it. 00:38:27.750 [2024-10-09 11:18:47.434161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.750 [2024-10-09 11:18:47.434170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.750 qpair failed and we were unable to recover it. 00:38:27.750 [2024-10-09 11:18:47.434469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.750 [2024-10-09 11:18:47.434477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.750 qpair failed and we were unable to recover it. 00:38:27.750 [2024-10-09 11:18:47.434792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.750 [2024-10-09 11:18:47.434800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.750 qpair failed and we were unable to recover it. 
00:38:27.750 [2024-10-09 11:18:47.435029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.750 [2024-10-09 11:18:47.435037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.750 qpair failed and we were unable to recover it. 00:38:27.750 [2024-10-09 11:18:47.435275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.750 [2024-10-09 11:18:47.435283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.750 qpair failed and we were unable to recover it. 00:38:27.750 [2024-10-09 11:18:47.435657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.750 [2024-10-09 11:18:47.435665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.750 qpair failed and we were unable to recover it. 00:38:27.750 [2024-10-09 11:18:47.436001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.750 [2024-10-09 11:18:47.436010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.750 qpair failed and we were unable to recover it. 00:38:27.750 [2024-10-09 11:18:47.436179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.750 [2024-10-09 11:18:47.436187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.750 qpair failed and we were unable to recover it. 00:38:27.750 [2024-10-09 11:18:47.436519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.750 [2024-10-09 11:18:47.436528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.750 qpair failed and we were unable to recover it. 00:38:27.750 [2024-10-09 11:18:47.436863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.750 [2024-10-09 11:18:47.436871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.750 qpair failed and we were unable to recover it. 00:38:27.750 [2024-10-09 11:18:47.437188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.750 [2024-10-09 11:18:47.437196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.750 qpair failed and we were unable to recover it. 00:38:27.750 [2024-10-09 11:18:47.437449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.750 [2024-10-09 11:18:47.437457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.750 qpair failed and we were unable to recover it. 00:38:27.750 [2024-10-09 11:18:47.437775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.750 [2024-10-09 11:18:47.437783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.750 qpair failed and we were unable to recover it. 
00:38:27.750 [2024-10-09 11:18:47.438048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.750 [2024-10-09 11:18:47.438056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.750 qpair failed and we were unable to recover it. 00:38:27.750 [2024-10-09 11:18:47.438260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.750 [2024-10-09 11:18:47.438267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.750 qpair failed and we were unable to recover it. 00:38:27.750 [2024-10-09 11:18:47.438582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.750 [2024-10-09 11:18:47.438590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.750 qpair failed and we were unable to recover it. 00:38:27.750 [2024-10-09 11:18:47.438917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.750 [2024-10-09 11:18:47.438925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.750 qpair failed and we were unable to recover it. 00:38:27.750 [2024-10-09 11:18:47.439222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.750 [2024-10-09 11:18:47.439231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.750 qpair failed and we were unable to recover it. 00:38:27.750 [2024-10-09 11:18:47.439536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.750 [2024-10-09 11:18:47.439544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.750 qpair failed and we were unable to recover it. 00:38:27.750 [2024-10-09 11:18:47.439739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.750 [2024-10-09 11:18:47.439746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.750 qpair failed and we were unable to recover it. 00:38:27.750 [2024-10-09 11:18:47.439852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.750 [2024-10-09 11:18:47.439861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.750 qpair failed and we were unable to recover it. 00:38:27.750 [2024-10-09 11:18:47.440175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.750 [2024-10-09 11:18:47.440184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.750 qpair failed and we were unable to recover it. 00:38:27.750 [2024-10-09 11:18:47.440506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.750 [2024-10-09 11:18:47.440515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.750 qpair failed and we were unable to recover it. 
00:38:27.750 [2024-10-09 11:18:47.440818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.750 [2024-10-09 11:18:47.440826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.750 qpair failed and we were unable to recover it. 00:38:27.750 [2024-10-09 11:18:47.441044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.750 [2024-10-09 11:18:47.441052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.750 qpair failed and we were unable to recover it. 00:38:27.750 [2024-10-09 11:18:47.441223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.750 [2024-10-09 11:18:47.441231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.750 qpair failed and we were unable to recover it. 00:38:27.750 [2024-10-09 11:18:47.441518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.750 [2024-10-09 11:18:47.441526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.750 qpair failed and we were unable to recover it. 00:38:27.751 [2024-10-09 11:18:47.441811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.751 [2024-10-09 11:18:47.441819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.751 qpair failed and we were unable to recover it. 00:38:27.751 [2024-10-09 11:18:47.442072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.751 [2024-10-09 11:18:47.442081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.751 qpair failed and we were unable to recover it. 00:38:27.751 [2024-10-09 11:18:47.442383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.751 [2024-10-09 11:18:47.442391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.751 qpair failed and we were unable to recover it. 00:38:27.751 [2024-10-09 11:18:47.442670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.751 [2024-10-09 11:18:47.442679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.751 qpair failed and we were unable to recover it. 00:38:27.751 [2024-10-09 11:18:47.442988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.751 [2024-10-09 11:18:47.442997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.751 qpair failed and we were unable to recover it. 00:38:27.751 [2024-10-09 11:18:47.443303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.751 [2024-10-09 11:18:47.443311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.751 qpair failed and we were unable to recover it. 
00:38:27.751 [2024-10-09 11:18:47.443634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.751 [2024-10-09 11:18:47.443643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.751 qpair failed and we were unable to recover it. 00:38:27.751 [2024-10-09 11:18:47.443996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.751 [2024-10-09 11:18:47.444005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.751 qpair failed and we were unable to recover it. 00:38:27.751 [2024-10-09 11:18:47.444183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.751 [2024-10-09 11:18:47.444191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.751 qpair failed and we were unable to recover it. 00:38:27.751 [2024-10-09 11:18:47.444581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.751 [2024-10-09 11:18:47.444589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.751 qpair failed and we were unable to recover it. 00:38:27.751 [2024-10-09 11:18:47.444902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.751 [2024-10-09 11:18:47.444912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.751 qpair failed and we were unable to recover it. 00:38:27.751 [2024-10-09 11:18:47.445203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.751 [2024-10-09 11:18:47.445211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.751 qpair failed and we were unable to recover it. 00:38:27.751 [2024-10-09 11:18:47.445527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.751 [2024-10-09 11:18:47.445536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.751 qpair failed and we were unable to recover it. 00:38:27.751 [2024-10-09 11:18:47.445807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.751 [2024-10-09 11:18:47.445816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.751 qpair failed and we were unable to recover it. 00:38:27.751 [2024-10-09 11:18:47.446134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.751 [2024-10-09 11:18:47.446143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.751 qpair failed and we were unable to recover it. 00:38:27.751 [2024-10-09 11:18:47.446431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.751 [2024-10-09 11:18:47.446439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.751 qpair failed and we were unable to recover it. 
00:38:27.751 [2024-10-09 11:18:47.446747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.751 [2024-10-09 11:18:47.446756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.751 qpair failed and we were unable to recover it. 00:38:27.751 [2024-10-09 11:18:47.447104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.751 [2024-10-09 11:18:47.447113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.751 qpair failed and we were unable to recover it. 00:38:27.751 [2024-10-09 11:18:47.447319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.751 [2024-10-09 11:18:47.447327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.751 qpair failed and we were unable to recover it. 00:38:27.751 [2024-10-09 11:18:47.447649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.751 [2024-10-09 11:18:47.447657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.751 qpair failed and we were unable to recover it. 00:38:27.751 [2024-10-09 11:18:47.447983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.751 [2024-10-09 11:18:47.447992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.751 qpair failed and we were unable to recover it. 00:38:27.751 [2024-10-09 11:18:47.448258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.751 [2024-10-09 11:18:47.448266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.751 qpair failed and we were unable to recover it. 00:38:27.751 [2024-10-09 11:18:47.448534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.751 [2024-10-09 11:18:47.448543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.751 qpair failed and we were unable to recover it. 00:38:27.751 [2024-10-09 11:18:47.448831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.751 [2024-10-09 11:18:47.448839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.751 qpair failed and we were unable to recover it. 00:38:27.751 [2024-10-09 11:18:47.449168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.751 [2024-10-09 11:18:47.449175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.751 qpair failed and we were unable to recover it. 00:38:27.751 [2024-10-09 11:18:47.449473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.751 [2024-10-09 11:18:47.449481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.751 qpair failed and we were unable to recover it. 
00:38:27.751 [2024-10-09 11:18:47.449757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.751 [2024-10-09 11:18:47.449765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420
00:38:27.751 qpair failed and we were unable to recover it.
00:38:27.751-00:38:27.755 (the same three-line record repeats for roughly 150 connect() attempts between 11:18:47.449954 and 11:18:47.492738: posix_sock_create reports "connect() failed, errno = 111", nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420, and every attempt ends "qpair failed and we were unable to recover it.")
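errno 111 is ECONNREFUSED: each connect() toward the NVMe/TCP listener at 10.0.0.2:4420 is refused because nothing is accepting on that port while the target process is down (the shell trace below shows it was killed and is being restarted). A minimal bash sketch of the same probe-and-retry pattern, assuming only bash's /dev/tcp support; the address and port come from the log above, everything else is illustrative and not part of the SPDK test suite:

    # Illustrative only: probe the NVMe/TCP port the way the host side does,
    # observing "connection refused" (errno 111) while no target is listening
    # on 10.0.0.2:4420.
    for attempt in 1 2 3 4 5; do
      if timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
        echo "attempt $attempt: connected"          # a target is accepting again
        break
      else
        echo "attempt $attempt: connect() refused"  # maps to errno = 111 in the log
      fi
      sleep 0.1
    done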
00:38:27.755 [2024-10-09 11:18:47.493071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.755 [2024-10-09 11:18:47.493080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.755 qpair failed and we were unable to recover it. 00:38:27.755 [2024-10-09 11:18:47.493424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.755 [2024-10-09 11:18:47.493432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.755 qpair failed and we were unable to recover it. 00:38:27.755 [2024-10-09 11:18:47.493826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.755 [2024-10-09 11:18:47.493835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.755 qpair failed and we were unable to recover it. 00:38:27.755 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2132718 Killed "${NVMF_APP[@]}" "$@" 00:38:27.755 [2024-10-09 11:18:47.494148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.755 [2024-10-09 11:18:47.494157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.755 qpair failed and we were unable to recover it. 00:38:27.755 [2024-10-09 11:18:47.494480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.755 [2024-10-09 11:18:47.494489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.755 qpair failed and we were unable to recover it. 00:38:27.755 11:18:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:38:27.755 [2024-10-09 11:18:47.494793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.755 [2024-10-09 11:18:47.494802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.755 qpair failed and we were unable to recover it. 00:38:27.755 [2024-10-09 11:18:47.494999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.755 [2024-10-09 11:18:47.495008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.755 qpair failed and we were unable to recover it. 00:38:27.755 11:18:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:38:27.755 [2024-10-09 11:18:47.495218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.755 [2024-10-09 11:18:47.495228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.755 qpair failed and we were unable to recover it. 
00:38:27.755 11:18:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:38:27.755 [2024-10-09 11:18:47.495433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.755 [2024-10-09 11:18:47.495441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.755 qpair failed and we were unable to recover it. 00:38:27.755 11:18:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:27.755 [2024-10-09 11:18:47.495728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.755 [2024-10-09 11:18:47.495737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.755 qpair failed and we were unable to recover it. 00:38:27.755 11:18:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:27.755 [2024-10-09 11:18:47.496049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.755 [2024-10-09 11:18:47.496058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.755 qpair failed and we were unable to recover it. 00:38:27.755 [2024-10-09 11:18:47.496372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.755 [2024-10-09 11:18:47.496380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.755 qpair failed and we were unable to recover it. 00:38:27.755 [2024-10-09 11:18:47.496699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.755 [2024-10-09 11:18:47.496707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.755 qpair failed and we were unable to recover it. 00:38:27.755 [2024-10-09 11:18:47.497016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.755 [2024-10-09 11:18:47.497025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.755 qpair failed and we were unable to recover it. 00:38:27.755 [2024-10-09 11:18:47.497222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.755 [2024-10-09 11:18:47.497231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.755 qpair failed and we were unable to recover it. 00:38:27.755 [2024-10-09 11:18:47.497336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.756 [2024-10-09 11:18:47.497345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.756 qpair failed and we were unable to recover it. 00:38:27.756 [2024-10-09 11:18:47.497615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.756 [2024-10-09 11:18:47.497623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.756 qpair failed and we were unable to recover it. 
00:38:27.756 [2024-10-09 11:18:47.497900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.756 [2024-10-09 11:18:47.497907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.756 qpair failed and we were unable to recover it. 00:38:27.756 [2024-10-09 11:18:47.498104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.756 [2024-10-09 11:18:47.498112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.756 qpair failed and we were unable to recover it. 00:38:27.756 [2024-10-09 11:18:47.498426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.756 [2024-10-09 11:18:47.498434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.756 qpair failed and we were unable to recover it. 00:38:27.756 [2024-10-09 11:18:47.498754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.756 [2024-10-09 11:18:47.498762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.756 qpair failed and we were unable to recover it. 00:38:27.756 [2024-10-09 11:18:47.498975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.756 [2024-10-09 11:18:47.498983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.756 qpair failed and we were unable to recover it. 00:38:27.756 [2024-10-09 11:18:47.499156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.756 [2024-10-09 11:18:47.499164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.756 qpair failed and we were unable to recover it. 00:38:27.756 [2024-10-09 11:18:47.499480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.756 [2024-10-09 11:18:47.499488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.756 qpair failed and we were unable to recover it. 00:38:27.756 [2024-10-09 11:18:47.499815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.756 [2024-10-09 11:18:47.499823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.756 qpair failed and we were unable to recover it. 00:38:27.756 [2024-10-09 11:18:47.500131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.756 [2024-10-09 11:18:47.500139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.756 qpair failed and we were unable to recover it. 00:38:27.756 [2024-10-09 11:18:47.500328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.756 [2024-10-09 11:18:47.500336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.756 qpair failed and we were unable to recover it. 
00:38:27.756 [2024-10-09 11:18:47.500652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.756 [2024-10-09 11:18:47.500659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.756 qpair failed and we were unable to recover it. 00:38:27.756 [2024-10-09 11:18:47.500979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.756 [2024-10-09 11:18:47.500990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.756 qpair failed and we were unable to recover it. 00:38:27.756 [2024-10-09 11:18:47.501303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.756 [2024-10-09 11:18:47.501312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.756 qpair failed and we were unable to recover it. 00:38:27.756 [2024-10-09 11:18:47.501600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.756 [2024-10-09 11:18:47.501609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.756 qpair failed and we were unable to recover it. 00:38:27.756 [2024-10-09 11:18:47.501912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.756 [2024-10-09 11:18:47.501921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.756 qpair failed and we were unable to recover it. 00:38:27.756 [2024-10-09 11:18:47.502288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.756 [2024-10-09 11:18:47.502297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.756 qpair failed and we were unable to recover it. 00:38:27.756 [2024-10-09 11:18:47.502535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.756 [2024-10-09 11:18:47.502544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.756 qpair failed and we were unable to recover it. 00:38:27.756 [2024-10-09 11:18:47.502836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.756 [2024-10-09 11:18:47.502845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.756 qpair failed and we were unable to recover it. 00:38:27.756 [2024-10-09 11:18:47.503151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.756 [2024-10-09 11:18:47.503160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.756 qpair failed and we were unable to recover it. 
00:38:27.756 11:18:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # nvmfpid=2133585
00:38:27.756 11:18:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # waitforlisten 2133585
00:38:27.756 11:18:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:38:27.756 11:18:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 2133585 ']'
00:38:27.756 11:18:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:38:27.756 11:18:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100
00:38:27.756 11:18:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:38:27.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:38:27.756 11:18:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable
00:38:27.756 11:18:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:38:27.756 [... interleaved with the trace above, the connect()/qpair error triplet repeats for each connection attempt from 11:18:47.503486 through 11:18:47.507333 ...]
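The trace above shows the tc2 test case starting a fresh nvmf_tgt inside the cvl_0_0_ns_spdk network namespace and then calling waitforlisten, which blocks until the new process (pid 2133585) accepts connections on the RPC socket /var/tmp/spdk.sock, giving up after max_retries attempts. The sketch below is a rough C equivalent of that wait loop; it is an assumption about its shape, not the actual autotest_common.sh implementation, and the one-second retry interval in particular is invented:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

int main(void)
{
    struct sockaddr_un addr = {0};
    addr.sun_family = AF_UNIX;
    /* RPC socket path traced above (rpc_addr=/var/tmp/spdk.sock). */
    strncpy(addr.sun_path, "/var/tmp/spdk.sock", sizeof(addr.sun_path) - 1);

    for (int i = 0; i < 100; i++) {          /* max_retries=100, as traced */
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            close(fd);                       /* target is up and listening */
            printf("spdk.sock is listening\n");
            return 0;
        }
        close(fd);
        sleep(1);                            /* retry interval: an assumption */
    }
    fprintf(stderr, "gave up waiting for /var/tmp/spdk.sock\n");
    return 1;
}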
00:38:27.756 [... the connect()/qpair error triplet keeps repeating for each connection attempt from 11:18:47.507552 through 11:18:47.555147 (log timestamps 00:38:27.756 through 00:38:27.761), always against tqpair=0x7f9f10000b90 at addr=10.0.0.2, port=4420, each attempt ending with "qpair failed and we were unable to recover it." ...]
00:38:27.761 [2024-10-09 11:18:47.555482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.761 [2024-10-09 11:18:47.555490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.761 qpair failed and we were unable to recover it. 00:38:27.761 [2024-10-09 11:18:47.555715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.761 [2024-10-09 11:18:47.555723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.761 qpair failed and we were unable to recover it. 00:38:27.761 [2024-10-09 11:18:47.556036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.761 [2024-10-09 11:18:47.556044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.761 qpair failed and we were unable to recover it. 00:38:27.761 [2024-10-09 11:18:47.556239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.761 [2024-10-09 11:18:47.556247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.761 qpair failed and we were unable to recover it. 00:38:27.761 [2024-10-09 11:18:47.556578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.761 [2024-10-09 11:18:47.556587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.761 qpair failed and we were unable to recover it. 00:38:27.761 [2024-10-09 11:18:47.556913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.761 [2024-10-09 11:18:47.556921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.761 qpair failed and we were unable to recover it. 00:38:27.761 [2024-10-09 11:18:47.557265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.761 [2024-10-09 11:18:47.557273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.761 qpair failed and we were unable to recover it. 00:38:27.761 [2024-10-09 11:18:47.557682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.761 [2024-10-09 11:18:47.557691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.761 qpair failed and we were unable to recover it. 00:38:27.761 [2024-10-09 11:18:47.557897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.761 [2024-10-09 11:18:47.557905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.761 qpair failed and we were unable to recover it. 00:38:27.761 [2024-10-09 11:18:47.558242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.761 [2024-10-09 11:18:47.558250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.761 qpair failed and we were unable to recover it. 
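errno 111 on Linux is ECONNREFUSED: the connect() reached the target host, but nothing was accepting on port 4420 (or the connection was actively rejected). A minimal, self-contained illustration of the failing system call; the address and port come from the log, everything else is illustrative and is not SPDK's posix_sock_create():

/* Same failing syscall the log records, reduced to its core.
 * Illustrative only; not SPDK code. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port   = htons(4420),   /* NVMe/TCP port from the log */
    };

    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);  /* target address from the log */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* With no listener on 10.0.0.2:4420 this prints:
         * connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}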
00:38:27.761 [connect()/qpair failure sequence repeated 8 more times, 11:18:47.558591 through 11:18:47.560836]
00:38:27.761 [2024-10-09 11:18:47.561156] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization...
00:38:27.761 [2024-10-09 11:18:47.561172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.761 [2024-10-09 11:18:47.561183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420
00:38:27.761 qpair failed and we were unable to recover it.
00:38:27.761 [2024-10-09 11:18:47.561202] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:38:27.762 [connect()/qpair failure sequence repeated 9 more times, 11:18:47.561498 through 11:18:47.563684]
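The bracketed line above records the arguments SPDK passed to DPDK's EAL when the nvmf target initialized: -c 0xF0 pins the process to cores 4-7, --file-prefix=spdk0 namespaces its hugepage files, and --base-virtaddr=0x200000000000 fixes the shared-memory mapping address. A hedged sketch of how an SPDK application produces such a line through the public env API; spdk_env_opts_init() and spdk_env_init() are real SPDK functions, but treat the exact option fields as version-dependent assumptions:

/* Sketch: initializing the SPDK/DPDK environment with the core mask
 * seen in the log. The remaining EAL flags in the log are derived
 * internally by SPDK; field names may vary by SPDK version. */
#include "spdk/env.h"
#include <stdio.h>

int main(void)
{
    struct spdk_env_opts opts;

    spdk_env_opts_init(&opts);
    opts.name = "nvmf";       /* shows up as the EAL program name */
    opts.core_mask = "0xF0";  /* cores 4-7, matching "-c 0xF0" in the log */

    if (spdk_env_init(&opts) < 0) {
        fprintf(stderr, "Unable to initialize SPDK env\n");
        return 1;
    }
    /* ... launch the nvmf target from here ... */
    return 0;
}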
00:38:27.762 [connect()/qpair failure sequence repeated 90 more times, 11:18:47.563889 through 11:18:47.589513, all errno = 111 against 10.0.0.2:4420 on tqpair=0x7f9f10000b90]
00:38:27.764 [connect()/qpair failure sequence repeated 2 more times, 11:18:47.589844 and 11:18:47.589977]
00:38:27.764 [32 in-flight I/Os completed with error (sct=0, sc=8), 18 reads and 14 writes, each followed by "starting I/O failed"]
00:38:27.764 [2024-10-09 11:18:47.591109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
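For the aborted I/Os above: sct=0 is the NVMe generic command status type, and within that type status code 8 (0x08) is defined by the NVMe base specification as "Command Aborted due to SQ Deletion", which is consistent with the qpair teardown reported in the CQ transport error. A small, self-contained decoder for the 16-bit status field of a completion queue entry; the sample value is constructed here for illustration:

/* Decode the 16-bit status field of an NVMe completion queue entry
 * (CQE dword 3, bits 31:16), per the NVMe base spec layout. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct nvme_status {
    bool    phase; /* bit 0:     phase tag */
    uint8_t sc;    /* bits 8:1   status code */
    uint8_t sct;   /* bits 11:9  status code type */
    bool    dnr;   /* bit 15:    do not retry */
};

static struct nvme_status decode_status(uint16_t raw)
{
    return (struct nvme_status){
        .phase = raw & 0x1,
        .sc    = (raw >> 1) & 0xff,
        .sct   = (raw >> 9) & 0x7,
        .dnr   = (raw >> 15) & 0x1,
    };
}

int main(void)
{
    /* sct=0, sc=0x08, phase=1: matches the aborted I/Os in the log */
    struct nvme_status s = decode_status((0x0 << 9) | (0x08 << 1) | 0x1);
    printf("sct=%d, sc=%d, dnr=%d\n", s.sct, s.sc, s.dnr);
    return 0;
}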
00:38:27.764 [2024-10-09 11:18:47.591452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.764 [2024-10-09 11:18:47.591516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f0c000b90 with addr=10.0.0.2, port=4420
00:38:27.764 qpair failed and we were unable to recover it.
00:38:27.764 [sequence repeated 2 more times on tqpair=0x7f9f0c000b90 (11:18:47.591859, 11:18:47.592391), then 7 more times on tqpair=0x7f9f10000b90, 11:18:47.592719 through 11:18:47.594921]
00:38:27.765 [connect()/qpair failure sequence repeated 30 more times, 11:18:47.595266 through 11:18:47.603885, all errno = 111 against 10.0.0.2:4420 on tqpair=0x7f9f10000b90]
00:38:27.765 [2024-10-09 11:18:47.604173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.765 [2024-10-09 11:18:47.604181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.765 qpair failed and we were unable to recover it. 00:38:27.765 [2024-10-09 11:18:47.604508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.765 [2024-10-09 11:18:47.604517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.765 qpair failed and we were unable to recover it. 00:38:27.765 [2024-10-09 11:18:47.604708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.765 [2024-10-09 11:18:47.604716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.765 qpair failed and we were unable to recover it. 00:38:27.765 [2024-10-09 11:18:47.605053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.765 [2024-10-09 11:18:47.605062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.765 qpair failed and we were unable to recover it. 00:38:27.765 [2024-10-09 11:18:47.605390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.765 [2024-10-09 11:18:47.605399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.765 qpair failed and we were unable to recover it. 00:38:27.765 [2024-10-09 11:18:47.605709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.765 [2024-10-09 11:18:47.605718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.766 qpair failed and we were unable to recover it. 00:38:27.766 [2024-10-09 11:18:47.606028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.766 [2024-10-09 11:18:47.606036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.766 qpair failed and we were unable to recover it. 00:38:27.766 [2024-10-09 11:18:47.606361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.766 [2024-10-09 11:18:47.606370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.766 qpair failed and we were unable to recover it. 00:38:27.766 [2024-10-09 11:18:47.606681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.766 [2024-10-09 11:18:47.606690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.766 qpair failed and we were unable to recover it. 00:38:27.766 [2024-10-09 11:18:47.606991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.766 [2024-10-09 11:18:47.607000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.766 qpair failed and we were unable to recover it. 
00:38:27.766 [2024-10-09 11:18:47.607323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.766 [2024-10-09 11:18:47.607332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.766 qpair failed and we were unable to recover it. 00:38:27.766 [2024-10-09 11:18:47.607633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.766 [2024-10-09 11:18:47.607642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.766 qpair failed and we were unable to recover it. 00:38:27.766 [2024-10-09 11:18:47.607959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.766 [2024-10-09 11:18:47.607968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.766 qpair failed and we were unable to recover it. 00:38:27.766 [2024-10-09 11:18:47.608299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.766 [2024-10-09 11:18:47.608307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.766 qpair failed and we were unable to recover it. 00:38:27.766 [2024-10-09 11:18:47.608494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.766 [2024-10-09 11:18:47.608503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.766 qpair failed and we were unable to recover it. 00:38:27.766 [2024-10-09 11:18:47.608884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.766 [2024-10-09 11:18:47.608892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.766 qpair failed and we were unable to recover it. 00:38:27.766 [2024-10-09 11:18:47.609203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.766 [2024-10-09 11:18:47.609212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.766 qpair failed and we were unable to recover it. 00:38:27.766 [2024-10-09 11:18:47.609538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.766 [2024-10-09 11:18:47.609546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.766 qpair failed and we were unable to recover it. 00:38:27.766 [2024-10-09 11:18:47.609861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.766 [2024-10-09 11:18:47.609869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.766 qpair failed and we were unable to recover it. 00:38:27.766 [2024-10-09 11:18:47.610175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.766 [2024-10-09 11:18:47.610183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.766 qpair failed and we were unable to recover it. 
00:38:27.766 [2024-10-09 11:18:47.610438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.766 [2024-10-09 11:18:47.610446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.766 qpair failed and we were unable to recover it. 00:38:27.766 [2024-10-09 11:18:47.610755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.766 [2024-10-09 11:18:47.610766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.766 qpair failed and we were unable to recover it. 00:38:27.766 [2024-10-09 11:18:47.611062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.766 [2024-10-09 11:18:47.611070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.766 qpair failed and we were unable to recover it. 00:38:27.766 [2024-10-09 11:18:47.611386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.766 [2024-10-09 11:18:47.611395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.766 qpair failed and we were unable to recover it. 00:38:27.766 [2024-10-09 11:18:47.611584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.766 [2024-10-09 11:18:47.611593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.766 qpair failed and we were unable to recover it. 00:38:27.766 [2024-10-09 11:18:47.612608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.766 [2024-10-09 11:18:47.612627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.766 qpair failed and we were unable to recover it. 00:38:27.766 [2024-10-09 11:18:47.612940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.766 [2024-10-09 11:18:47.612950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.766 qpair failed and we were unable to recover it. 00:38:27.766 [2024-10-09 11:18:47.613255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.766 [2024-10-09 11:18:47.613264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.766 qpair failed and we were unable to recover it. 00:38:27.766 [2024-10-09 11:18:47.613992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.766 [2024-10-09 11:18:47.614009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.766 qpair failed and we were unable to recover it. 00:38:27.766 [2024-10-09 11:18:47.614331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.766 [2024-10-09 11:18:47.614341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.766 qpair failed and we were unable to recover it. 
00:38:27.766 [2024-10-09 11:18:47.615042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.766 [2024-10-09 11:18:47.615057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.766 qpair failed and we were unable to recover it. 00:38:27.766 [2024-10-09 11:18:47.615365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.766 [2024-10-09 11:18:47.615375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.766 qpair failed and we were unable to recover it. 00:38:27.766 [2024-10-09 11:18:47.615657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.766 [2024-10-09 11:18:47.615666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.766 qpair failed and we were unable to recover it. 00:38:27.766 [2024-10-09 11:18:47.615874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.766 [2024-10-09 11:18:47.615883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.766 qpair failed and we were unable to recover it. 00:38:27.766 [2024-10-09 11:18:47.616196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.766 [2024-10-09 11:18:47.616204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.766 qpair failed and we were unable to recover it. 00:38:27.766 [2024-10-09 11:18:47.616513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.766 [2024-10-09 11:18:47.616522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.766 qpair failed and we were unable to recover it. 00:38:27.766 [2024-10-09 11:18:47.616858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.766 [2024-10-09 11:18:47.616866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.766 qpair failed and we were unable to recover it. 00:38:27.766 [2024-10-09 11:18:47.617203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.766 [2024-10-09 11:18:47.617211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.766 qpair failed and we were unable to recover it. 00:38:27.766 [2024-10-09 11:18:47.617527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.766 [2024-10-09 11:18:47.617535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.766 qpair failed and we were unable to recover it. 00:38:27.766 [2024-10-09 11:18:47.617853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.766 [2024-10-09 11:18:47.617861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.766 qpair failed and we were unable to recover it. 
00:38:27.766 [2024-10-09 11:18:47.618031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.766 [2024-10-09 11:18:47.618039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.766 qpair failed and we were unable to recover it. 00:38:27.766 [2024-10-09 11:18:47.618348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.766 [2024-10-09 11:18:47.618356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.766 qpair failed and we were unable to recover it. 00:38:27.766 [2024-10-09 11:18:47.618687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.766 [2024-10-09 11:18:47.618695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.766 qpair failed and we were unable to recover it. 00:38:27.766 [2024-10-09 11:18:47.618999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.766 [2024-10-09 11:18:47.619007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.766 qpair failed and we were unable to recover it. 00:38:27.766 [2024-10-09 11:18:47.619160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.766 [2024-10-09 11:18:47.619168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.766 qpair failed and we were unable to recover it. 00:38:27.766 [2024-10-09 11:18:47.619477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.766 [2024-10-09 11:18:47.619485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.766 qpair failed and we were unable to recover it. 00:38:27.767 [2024-10-09 11:18:47.619764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.767 [2024-10-09 11:18:47.619771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.767 qpair failed and we were unable to recover it. 00:38:27.767 [2024-10-09 11:18:47.620078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.767 [2024-10-09 11:18:47.620087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.767 qpair failed and we were unable to recover it. 00:38:27.767 [2024-10-09 11:18:47.620391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.767 [2024-10-09 11:18:47.620400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.767 qpair failed and we were unable to recover it. 00:38:27.767 [2024-10-09 11:18:47.620714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.767 [2024-10-09 11:18:47.620723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.767 qpair failed and we were unable to recover it. 
00:38:27.767 [2024-10-09 11:18:47.621027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.767 [2024-10-09 11:18:47.621035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.767 qpair failed and we were unable to recover it. 00:38:27.767 [2024-10-09 11:18:47.621338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.767 [2024-10-09 11:18:47.621347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.767 qpair failed and we were unable to recover it. 00:38:27.767 [2024-10-09 11:18:47.621515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.767 [2024-10-09 11:18:47.621523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.767 qpair failed and we were unable to recover it. 00:38:27.767 [2024-10-09 11:18:47.621794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.767 [2024-10-09 11:18:47.621803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.767 qpair failed and we were unable to recover it. 00:38:27.767 [2024-10-09 11:18:47.621957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.767 [2024-10-09 11:18:47.621966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.767 qpair failed and we were unable to recover it. 00:38:27.767 [2024-10-09 11:18:47.622352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.767 [2024-10-09 11:18:47.622360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.767 qpair failed and we were unable to recover it. 00:38:27.767 [2024-10-09 11:18:47.622567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.767 [2024-10-09 11:18:47.622575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.767 qpair failed and we were unable to recover it. 00:38:27.767 [2024-10-09 11:18:47.622849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.767 [2024-10-09 11:18:47.622857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.767 qpair failed and we were unable to recover it. 00:38:27.767 [2024-10-09 11:18:47.623161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.767 [2024-10-09 11:18:47.623169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.767 qpair failed and we were unable to recover it. 00:38:27.767 [2024-10-09 11:18:47.623494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.767 [2024-10-09 11:18:47.623502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.767 qpair failed and we were unable to recover it. 
00:38:27.767 [2024-10-09 11:18:47.623830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.767 [2024-10-09 11:18:47.623838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.767 qpair failed and we were unable to recover it. 00:38:27.767 [2024-10-09 11:18:47.624132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.767 [2024-10-09 11:18:47.624143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.767 qpair failed and we were unable to recover it. 00:38:27.767 [2024-10-09 11:18:47.624314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.767 [2024-10-09 11:18:47.624322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.767 qpair failed and we were unable to recover it. 00:38:27.767 [2024-10-09 11:18:47.624632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.767 [2024-10-09 11:18:47.624640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.767 qpair failed and we were unable to recover it. 00:38:27.767 [2024-10-09 11:18:47.624934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.767 [2024-10-09 11:18:47.624942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.767 qpair failed and we were unable to recover it. 00:38:27.767 [2024-10-09 11:18:47.625269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.767 [2024-10-09 11:18:47.625277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.767 qpair failed and we were unable to recover it. 00:38:27.767 [2024-10-09 11:18:47.625585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.767 [2024-10-09 11:18:47.625595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.767 qpair failed and we were unable to recover it. 00:38:27.767 [2024-10-09 11:18:47.625757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.767 [2024-10-09 11:18:47.625765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.767 qpair failed and we were unable to recover it. 00:38:27.767 [2024-10-09 11:18:47.626054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.767 [2024-10-09 11:18:47.626062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.767 qpair failed and we were unable to recover it. 00:38:27.767 [2024-10-09 11:18:47.626368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.767 [2024-10-09 11:18:47.626376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.767 qpair failed and we were unable to recover it. 
00:38:27.767 [2024-10-09 11:18:47.626679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.767 [2024-10-09 11:18:47.626687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.767 qpair failed and we were unable to recover it. 00:38:27.767 [2024-10-09 11:18:47.627001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.767 [2024-10-09 11:18:47.627009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.767 qpair failed and we were unable to recover it. 00:38:27.767 [2024-10-09 11:18:47.627315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.767 [2024-10-09 11:18:47.627323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.767 qpair failed and we were unable to recover it. 00:38:27.767 [2024-10-09 11:18:47.627616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.767 [2024-10-09 11:18:47.627624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.767 qpair failed and we were unable to recover it. 00:38:27.767 [2024-10-09 11:18:47.627932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.767 [2024-10-09 11:18:47.627941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.767 qpair failed and we were unable to recover it. 00:38:27.767 [2024-10-09 11:18:47.628138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.767 [2024-10-09 11:18:47.628146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.767 qpair failed and we were unable to recover it. 00:38:27.767 [2024-10-09 11:18:47.628455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.767 [2024-10-09 11:18:47.628462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.767 qpair failed and we were unable to recover it. 00:38:27.767 [2024-10-09 11:18:47.628773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.767 [2024-10-09 11:18:47.628781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.767 qpair failed and we were unable to recover it. 00:38:27.767 [2024-10-09 11:18:47.629091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.767 [2024-10-09 11:18:47.629100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.767 qpair failed and we were unable to recover it. 00:38:27.767 [2024-10-09 11:18:47.629369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.767 [2024-10-09 11:18:47.629378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.767 qpair failed and we were unable to recover it. 
00:38:27.767 [2024-10-09 11:18:47.629679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.767 [2024-10-09 11:18:47.629686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.767 qpair failed and we were unable to recover it. 00:38:27.767 [2024-10-09 11:18:47.629984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.767 [2024-10-09 11:18:47.629993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.767 qpair failed and we were unable to recover it. 00:38:27.767 [2024-10-09 11:18:47.630304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.767 [2024-10-09 11:18:47.630312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.767 qpair failed and we were unable to recover it. 00:38:27.767 [2024-10-09 11:18:47.630610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.767 [2024-10-09 11:18:47.630618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.767 qpair failed and we were unable to recover it. 00:38:27.767 [2024-10-09 11:18:47.630920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.767 [2024-10-09 11:18:47.630930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.767 qpair failed and we were unable to recover it. 00:38:27.767 [2024-10-09 11:18:47.631122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.767 [2024-10-09 11:18:47.631130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.767 qpair failed and we were unable to recover it. 00:38:27.768 [2024-10-09 11:18:47.631440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.768 [2024-10-09 11:18:47.631450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.768 qpair failed and we were unable to recover it. 00:38:27.768 [2024-10-09 11:18:47.631612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.768 [2024-10-09 11:18:47.631621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.768 qpair failed and we were unable to recover it. 00:38:27.768 [2024-10-09 11:18:47.631955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.768 [2024-10-09 11:18:47.631963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.768 qpair failed and we were unable to recover it. 00:38:27.768 [2024-10-09 11:18:47.632292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.768 [2024-10-09 11:18:47.632301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.768 qpair failed and we were unable to recover it. 
00:38:27.768 [2024-10-09 11:18:47.632628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.768 [2024-10-09 11:18:47.632636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.768 qpair failed and we were unable to recover it. 00:38:27.768 [2024-10-09 11:18:47.632958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.768 [2024-10-09 11:18:47.632965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.768 qpair failed and we were unable to recover it. 00:38:27.768 [2024-10-09 11:18:47.633273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.768 [2024-10-09 11:18:47.633282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.768 qpair failed and we were unable to recover it. 00:38:27.768 [2024-10-09 11:18:47.633577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.768 [2024-10-09 11:18:47.633586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.768 qpair failed and we were unable to recover it. 00:38:27.768 [2024-10-09 11:18:47.633891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.768 [2024-10-09 11:18:47.633899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.768 qpair failed and we were unable to recover it. 00:38:27.768 [2024-10-09 11:18:47.634092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.768 [2024-10-09 11:18:47.634100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.768 qpair failed and we were unable to recover it. 00:38:27.768 [2024-10-09 11:18:47.634365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.768 [2024-10-09 11:18:47.634372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.768 qpair failed and we were unable to recover it. 00:38:27.768 [2024-10-09 11:18:47.634710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.768 [2024-10-09 11:18:47.634718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.768 qpair failed and we were unable to recover it. 00:38:27.768 [2024-10-09 11:18:47.635017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.768 [2024-10-09 11:18:47.635025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.768 qpair failed and we were unable to recover it. 00:38:27.768 [2024-10-09 11:18:47.635341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.768 [2024-10-09 11:18:47.635349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.768 qpair failed and we were unable to recover it. 
00:38:27.768 [2024-10-09 11:18:47.635629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.768 [2024-10-09 11:18:47.635637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.768 qpair failed and we were unable to recover it. 00:38:27.768 [2024-10-09 11:18:47.635948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.768 [2024-10-09 11:18:47.635957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.768 qpair failed and we were unable to recover it. 00:38:27.768 [2024-10-09 11:18:47.636272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.768 [2024-10-09 11:18:47.636281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.768 qpair failed and we were unable to recover it. 00:38:27.768 [2024-10-09 11:18:47.636588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.768 [2024-10-09 11:18:47.636596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.768 qpair failed and we were unable to recover it. 00:38:27.768 [2024-10-09 11:18:47.636862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.768 [2024-10-09 11:18:47.636869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.768 qpair failed and we were unable to recover it. 00:38:27.768 [2024-10-09 11:18:47.637162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.768 [2024-10-09 11:18:47.637170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.768 qpair failed and we were unable to recover it. 00:38:27.768 [2024-10-09 11:18:47.637468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.768 [2024-10-09 11:18:47.637477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.768 qpair failed and we were unable to recover it. 00:38:27.768 [2024-10-09 11:18:47.637761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.768 [2024-10-09 11:18:47.637768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.768 qpair failed and we were unable to recover it. 00:38:27.768 [2024-10-09 11:18:47.638080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.768 [2024-10-09 11:18:47.638089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.768 qpair failed and we were unable to recover it. 00:38:27.768 [2024-10-09 11:18:47.638384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.768 [2024-10-09 11:18:47.638393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.768 qpair failed and we were unable to recover it. 
00:38:27.768 [2024-10-09 11:18:47.638707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.768 [2024-10-09 11:18:47.638716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.768 qpair failed and we were unable to recover it. 00:38:27.768 [2024-10-09 11:18:47.639029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.768 [2024-10-09 11:18:47.639037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.768 qpair failed and we were unable to recover it. 00:38:27.768 [2024-10-09 11:18:47.639192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.768 [2024-10-09 11:18:47.639200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.768 qpair failed and we were unable to recover it. 00:38:27.768 [2024-10-09 11:18:47.639563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.768 [2024-10-09 11:18:47.639572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.768 qpair failed and we were unable to recover it. 00:38:27.768 [2024-10-09 11:18:47.639758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.768 [2024-10-09 11:18:47.639765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.768 qpair failed and we were unable to recover it. 00:38:27.768 [2024-10-09 11:18:47.640044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.768 [2024-10-09 11:18:47.640052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.768 qpair failed and we were unable to recover it. 00:38:27.768 [2024-10-09 11:18:47.640366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.768 [2024-10-09 11:18:47.640375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.768 qpair failed and we were unable to recover it. 00:38:27.768 [2024-10-09 11:18:47.640570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.768 [2024-10-09 11:18:47.640578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.768 qpair failed and we were unable to recover it. 00:38:27.768 [2024-10-09 11:18:47.640970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.768 [2024-10-09 11:18:47.640979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.768 qpair failed and we were unable to recover it. 00:38:27.768 [2024-10-09 11:18:47.641289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.768 [2024-10-09 11:18:47.641297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.768 qpair failed and we were unable to recover it. 
00:38:27.768 [2024-10-09 11:18:47.641615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.768 [2024-10-09 11:18:47.641624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.768 qpair failed and we were unable to recover it. 00:38:27.768 [2024-10-09 11:18:47.641784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.768 [2024-10-09 11:18:47.641792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.768 qpair failed and we were unable to recover it. 00:38:27.768 [2024-10-09 11:18:47.642068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.768 [2024-10-09 11:18:47.642076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.768 qpair failed and we were unable to recover it. 00:38:27.768 [2024-10-09 11:18:47.642390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.768 [2024-10-09 11:18:47.642398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.768 qpair failed and we were unable to recover it. 00:38:27.768 [2024-10-09 11:18:47.642708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.768 [2024-10-09 11:18:47.642725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.768 qpair failed and we were unable to recover it. 00:38:27.768 [2024-10-09 11:18:47.643011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.768 [2024-10-09 11:18:47.643018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.768 qpair failed and we were unable to recover it. 00:38:27.768 [2024-10-09 11:18:47.643323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.769 [2024-10-09 11:18:47.643331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.769 qpair failed and we were unable to recover it. 00:38:27.769 [2024-10-09 11:18:47.643628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.769 [2024-10-09 11:18:47.643637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.769 qpair failed and we were unable to recover it. 00:38:27.769 [2024-10-09 11:18:47.643847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.769 [2024-10-09 11:18:47.643855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.769 qpair failed and we were unable to recover it. 00:38:27.769 [2024-10-09 11:18:47.644162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.769 [2024-10-09 11:18:47.644171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.769 qpair failed and we were unable to recover it. 
00:38:27.769 [2024-10-09 11:18:47.644485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.769 [2024-10-09 11:18:47.644493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.769 qpair failed and we were unable to recover it. 00:38:27.769 [2024-10-09 11:18:47.644795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.769 [2024-10-09 11:18:47.644803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.769 qpair failed and we were unable to recover it. 00:38:27.769 [2024-10-09 11:18:47.645003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.769 [2024-10-09 11:18:47.645011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.769 qpair failed and we were unable to recover it. 00:38:27.769 [2024-10-09 11:18:47.645292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.769 [2024-10-09 11:18:47.645300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.769 qpair failed and we were unable to recover it. 00:38:27.769 [2024-10-09 11:18:47.645676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.769 [2024-10-09 11:18:47.645684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.769 qpair failed and we were unable to recover it. 00:38:27.769 [2024-10-09 11:18:47.646006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.769 [2024-10-09 11:18:47.646015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.769 qpair failed and we were unable to recover it. 00:38:27.769 [2024-10-09 11:18:47.646334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.769 [2024-10-09 11:18:47.646343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.769 qpair failed and we were unable to recover it. 00:38:27.769 [2024-10-09 11:18:47.646703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.769 [2024-10-09 11:18:47.646711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.769 qpair failed and we were unable to recover it. 00:38:27.769 [2024-10-09 11:18:47.646881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.769 [2024-10-09 11:18:47.646889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.769 qpair failed and we were unable to recover it. 00:38:27.769 [2024-10-09 11:18:47.647245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.769 [2024-10-09 11:18:47.647254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.769 qpair failed and we were unable to recover it. 
00:38:27.769 [2024-10-09 11:18:47.647587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.769 [2024-10-09 11:18:47.647595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.769 qpair failed and we were unable to recover it. 00:38:27.769 [2024-10-09 11:18:47.647915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.769 [2024-10-09 11:18:47.647934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.769 qpair failed and we were unable to recover it. 00:38:27.769 [2024-10-09 11:18:47.648208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.769 [2024-10-09 11:18:47.648216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.769 qpair failed and we were unable to recover it. 00:38:27.769 [2024-10-09 11:18:47.648539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.769 [2024-10-09 11:18:47.648548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.769 qpair failed and we were unable to recover it. 00:38:27.769 [2024-10-09 11:18:47.648903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.769 [2024-10-09 11:18:47.648911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.769 qpair failed and we were unable to recover it. 00:38:27.769 [2024-10-09 11:18:47.649228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.769 [2024-10-09 11:18:47.649237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.769 qpair failed and we were unable to recover it. 00:38:27.769 [2024-10-09 11:18:47.649409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.769 [2024-10-09 11:18:47.649417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.769 qpair failed and we were unable to recover it. 00:38:27.769 [2024-10-09 11:18:47.649720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.769 [2024-10-09 11:18:47.649728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.769 qpair failed and we were unable to recover it. 00:38:27.769 [2024-10-09 11:18:47.649990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.769 [2024-10-09 11:18:47.649998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.769 qpair failed and we were unable to recover it. 00:38:27.769 [2024-10-09 11:18:47.650168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.769 [2024-10-09 11:18:47.650176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.769 qpair failed and we were unable to recover it. 
00:38:27.769 [2024-10-09 11:18:47.650494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.769 [2024-10-09 11:18:47.650502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.769 qpair failed and we were unable to recover it. 00:38:27.769 [2024-10-09 11:18:47.650774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.769 [2024-10-09 11:18:47.650781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.769 qpair failed and we were unable to recover it. 00:38:27.769 [2024-10-09 11:18:47.650984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.769 [2024-10-09 11:18:47.650994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.769 qpair failed and we were unable to recover it. 00:38:27.769 [2024-10-09 11:18:47.651290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.769 [2024-10-09 11:18:47.651298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.769 qpair failed and we were unable to recover it. 00:38:27.769 [2024-10-09 11:18:47.651619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.769 [2024-10-09 11:18:47.651628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.769 qpair failed and we were unable to recover it. 00:38:27.769 [2024-10-09 11:18:47.651916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.769 [2024-10-09 11:18:47.651923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.769 qpair failed and we were unable to recover it. 00:38:27.769 [2024-10-09 11:18:47.652189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.769 [2024-10-09 11:18:47.652197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.769 qpair failed and we were unable to recover it. 00:38:27.769 [2024-10-09 11:18:47.652487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.769 [2024-10-09 11:18:47.652495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.769 qpair failed and we were unable to recover it. 00:38:27.769 [2024-10-09 11:18:47.652830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.769 [2024-10-09 11:18:47.652838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.769 qpair failed and we were unable to recover it. 00:38:27.769 [2024-10-09 11:18:47.653144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.769 [2024-10-09 11:18:47.653152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.769 qpair failed and we were unable to recover it. 
00:38:27.769 [2024-10-09 11:18:47.653475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.769 [2024-10-09 11:18:47.653484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.769 qpair failed and we were unable to recover it. 00:38:27.769 [2024-10-09 11:18:47.653781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.769 [2024-10-09 11:18:47.653790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.769 qpair failed and we were unable to recover it. 00:38:27.769 [2024-10-09 11:18:47.654093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.769 [2024-10-09 11:18:47.654101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.769 qpair failed and we were unable to recover it. 00:38:27.769 [2024-10-09 11:18:47.654410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.769 [2024-10-09 11:18:47.654417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.769 qpair failed and we were unable to recover it. 00:38:27.769 [2024-10-09 11:18:47.654746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.769 [2024-10-09 11:18:47.654754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.769 qpair failed and we were unable to recover it. 00:38:27.769 [2024-10-09 11:18:47.655085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.769 [2024-10-09 11:18:47.655093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.769 qpair failed and we were unable to recover it. 00:38:27.769 [2024-10-09 11:18:47.655427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.770 [2024-10-09 11:18:47.655435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.770 qpair failed and we were unable to recover it. 00:38:27.770 [2024-10-09 11:18:47.655594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.770 [2024-10-09 11:18:47.655603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.770 qpair failed and we were unable to recover it. 00:38:27.770 [2024-10-09 11:18:47.655900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.770 [2024-10-09 11:18:47.655908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.770 qpair failed and we were unable to recover it. 00:38:27.770 [2024-10-09 11:18:47.656212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.770 [2024-10-09 11:18:47.656220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.770 qpair failed and we were unable to recover it. 
00:38:27.770 [2024-10-09 11:18:47.656543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.770 [2024-10-09 11:18:47.656551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.770 qpair failed and we were unable to recover it. 00:38:27.770 [2024-10-09 11:18:47.656859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.770 [2024-10-09 11:18:47.656867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.770 qpair failed and we were unable to recover it. 00:38:27.770 [2024-10-09 11:18:47.657166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.770 [2024-10-09 11:18:47.657174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.770 qpair failed and we were unable to recover it. 00:38:27.770 [2024-10-09 11:18:47.657503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.770 [2024-10-09 11:18:47.657512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.770 qpair failed and we were unable to recover it. 00:38:27.770 [2024-10-09 11:18:47.657819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.770 [2024-10-09 11:18:47.657827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.770 qpair failed and we were unable to recover it. 00:38:27.770 [2024-10-09 11:18:47.658142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.770 [2024-10-09 11:18:47.658151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.770 qpair failed and we were unable to recover it. 00:38:27.770 [2024-10-09 11:18:47.658469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.770 [2024-10-09 11:18:47.658478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.770 qpair failed and we were unable to recover it. 00:38:27.770 [2024-10-09 11:18:47.658669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.770 [2024-10-09 11:18:47.658677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.770 qpair failed and we were unable to recover it. 00:38:27.770 [2024-10-09 11:18:47.658978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.770 [2024-10-09 11:18:47.658986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.770 qpair failed and we were unable to recover it. 00:38:27.770 [2024-10-09 11:18:47.659296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.770 [2024-10-09 11:18:47.659304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.770 qpair failed and we were unable to recover it. 
00:38:27.770 [2024-10-09 11:18:47.659624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.770 [2024-10-09 11:18:47.659632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.770 qpair failed and we were unable to recover it. 00:38:27.770 [2024-10-09 11:18:47.659798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.770 [2024-10-09 11:18:47.659807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.770 qpair failed and we were unable to recover it. 00:38:27.770 [2024-10-09 11:18:47.660120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.770 [2024-10-09 11:18:47.660128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.770 qpair failed and we were unable to recover it. 00:38:27.770 [2024-10-09 11:18:47.660411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.770 [2024-10-09 11:18:47.660419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.770 qpair failed and we were unable to recover it. 00:38:27.770 [2024-10-09 11:18:47.660628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.770 [2024-10-09 11:18:47.660637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.770 qpair failed and we were unable to recover it. 00:38:27.770 [2024-10-09 11:18:47.660809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.770 [2024-10-09 11:18:47.660818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.770 qpair failed and we were unable to recover it. 00:38:27.770 [2024-10-09 11:18:47.660921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.770 [2024-10-09 11:18:47.660928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.770 qpair failed and we were unable to recover it. 00:38:27.770 [2024-10-09 11:18:47.661115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.770 [2024-10-09 11:18:47.661123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.770 qpair failed and we were unable to recover it. 00:38:27.770 [2024-10-09 11:18:47.661411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.770 [2024-10-09 11:18:47.661420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.770 qpair failed and we were unable to recover it. 00:38:27.770 [2024-10-09 11:18:47.661742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.770 [2024-10-09 11:18:47.661750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.770 qpair failed and we were unable to recover it. 
00:38:27.770 [2024-10-09 11:18:47.662085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.770 [2024-10-09 11:18:47.662093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.770 qpair failed and we were unable to recover it. 00:38:27.770 [2024-10-09 11:18:47.662381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.770 [2024-10-09 11:18:47.662390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.770 qpair failed and we were unable to recover it. 00:38:27.770 [2024-10-09 11:18:47.662693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.770 [2024-10-09 11:18:47.662702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.770 qpair failed and we were unable to recover it. 00:38:27.770 [2024-10-09 11:18:47.662897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.770 [2024-10-09 11:18:47.662905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.770 qpair failed and we were unable to recover it. 00:38:27.770 [2024-10-09 11:18:47.663062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.770 [2024-10-09 11:18:47.663070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.770 qpair failed and we were unable to recover it. 00:38:27.770 [2024-10-09 11:18:47.663397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.770 [2024-10-09 11:18:47.663405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.770 qpair failed and we were unable to recover it. 00:38:27.770 [2024-10-09 11:18:47.663717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.770 [2024-10-09 11:18:47.663724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.770 qpair failed and we were unable to recover it. 00:38:27.770 [2024-10-09 11:18:47.664035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.770 [2024-10-09 11:18:47.664043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.770 qpair failed and we were unable to recover it. 00:38:27.770 [2024-10-09 11:18:47.664345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.770 [2024-10-09 11:18:47.664354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.770 qpair failed and we were unable to recover it. 00:38:27.770 [2024-10-09 11:18:47.664632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.770 [2024-10-09 11:18:47.664640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.770 qpair failed and we were unable to recover it. 
00:38:27.770 [2024-10-09 11:18:47.664963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.770 [2024-10-09 11:18:47.664972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.770 qpair failed and we were unable to recover it. 00:38:27.770 [2024-10-09 11:18:47.665290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.770 [2024-10-09 11:18:47.665299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.770 qpair failed and we were unable to recover it. 00:38:27.770 [2024-10-09 11:18:47.665599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.771 [2024-10-09 11:18:47.665607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.771 qpair failed and we were unable to recover it. 00:38:27.771 [2024-10-09 11:18:47.665915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.771 [2024-10-09 11:18:47.665923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.771 qpair failed and we were unable to recover it. 00:38:27.771 [2024-10-09 11:18:47.666229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.771 [2024-10-09 11:18:47.666237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.771 qpair failed and we were unable to recover it. 00:38:27.771 [2024-10-09 11:18:47.666529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.771 [2024-10-09 11:18:47.666538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.771 qpair failed and we were unable to recover it. 00:38:27.771 [2024-10-09 11:18:47.666867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.771 [2024-10-09 11:18:47.666874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.771 qpair failed and we were unable to recover it. 00:38:27.771 [2024-10-09 11:18:47.667203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.771 [2024-10-09 11:18:47.667213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.771 qpair failed and we were unable to recover it. 00:38:27.771 [2024-10-09 11:18:47.667517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.771 [2024-10-09 11:18:47.667525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.771 qpair failed and we were unable to recover it. 00:38:27.771 [2024-10-09 11:18:47.667848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.771 [2024-10-09 11:18:47.667856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.771 qpair failed and we were unable to recover it. 
00:38:27.771 [2024-10-09 11:18:47.668167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.771 [2024-10-09 11:18:47.668175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.771 qpair failed and we were unable to recover it. 00:38:27.771 [2024-10-09 11:18:47.668487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.771 [2024-10-09 11:18:47.668496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.771 qpair failed and we were unable to recover it. 00:38:27.771 [2024-10-09 11:18:47.668803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.771 [2024-10-09 11:18:47.668811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.771 qpair failed and we were unable to recover it. 00:38:27.771 [2024-10-09 11:18:47.669116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.771 [2024-10-09 11:18:47.669125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.771 qpair failed and we were unable to recover it. 00:38:27.771 [2024-10-09 11:18:47.669436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.771 [2024-10-09 11:18:47.669445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.771 qpair failed and we were unable to recover it. 00:38:27.771 [2024-10-09 11:18:47.669534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.771 [2024-10-09 11:18:47.669541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.771 qpair failed and we were unable to recover it. 00:38:27.771 [2024-10-09 11:18:47.669868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.771 [2024-10-09 11:18:47.669877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.771 qpair failed and we were unable to recover it. 00:38:27.771 [2024-10-09 11:18:47.670183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.771 [2024-10-09 11:18:47.670191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.771 qpair failed and we were unable to recover it. 00:38:27.771 [2024-10-09 11:18:47.670397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.771 [2024-10-09 11:18:47.670405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.771 qpair failed and we were unable to recover it. 00:38:27.771 [2024-10-09 11:18:47.670702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.771 [2024-10-09 11:18:47.670710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.771 qpair failed and we were unable to recover it. 
00:38:27.771 [2024-10-09 11:18:47.670826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.771 [2024-10-09 11:18:47.670834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.771 qpair failed and we were unable to recover it. 00:38:27.771 [2024-10-09 11:18:47.671109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.771 [2024-10-09 11:18:47.671120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.771 qpair failed and we were unable to recover it. 00:38:27.771 [2024-10-09 11:18:47.671413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.771 [2024-10-09 11:18:47.671422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.771 qpair failed and we were unable to recover it. 00:38:27.771 [2024-10-09 11:18:47.671694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.771 [2024-10-09 11:18:47.671702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.771 qpair failed and we were unable to recover it. 00:38:27.771 [2024-10-09 11:18:47.672018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.771 [2024-10-09 11:18:47.672026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.771 qpair failed and we were unable to recover it. 00:38:27.771 [2024-10-09 11:18:47.672352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.771 [2024-10-09 11:18:47.672361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.771 qpair failed and we were unable to recover it. 00:38:27.771 [2024-10-09 11:18:47.672684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.771 [2024-10-09 11:18:47.672693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.771 qpair failed and we were unable to recover it. 00:38:27.771 [2024-10-09 11:18:47.673017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.771 [2024-10-09 11:18:47.673026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.771 qpair failed and we were unable to recover it. 00:38:27.771 [2024-10-09 11:18:47.673329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.771 [2024-10-09 11:18:47.673338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.771 qpair failed and we were unable to recover it. 00:38:27.771 [2024-10-09 11:18:47.673643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.771 [2024-10-09 11:18:47.673651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.771 qpair failed and we were unable to recover it. 
00:38:27.771 [2024-10-09 11:18:47.673962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.771 [2024-10-09 11:18:47.673972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.771 qpair failed and we were unable to recover it. 00:38:27.771 [2024-10-09 11:18:47.674163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.771 [2024-10-09 11:18:47.674171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.771 qpair failed and we were unable to recover it. 00:38:27.771 [2024-10-09 11:18:47.674387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.771 [2024-10-09 11:18:47.674395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.771 qpair failed and we were unable to recover it. 00:38:27.771 [2024-10-09 11:18:47.674588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.771 [2024-10-09 11:18:47.674598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.771 qpair failed and we were unable to recover it. 00:38:27.771 [2024-10-09 11:18:47.674778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.771 [2024-10-09 11:18:47.674787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.771 qpair failed and we were unable to recover it. 00:38:27.771 [2024-10-09 11:18:47.675100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.771 [2024-10-09 11:18:47.675108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.771 qpair failed and we were unable to recover it. 00:38:27.771 [2024-10-09 11:18:47.675381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.771 [2024-10-09 11:18:47.675390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.771 qpair failed and we were unable to recover it. 00:38:27.771 [2024-10-09 11:18:47.675705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.771 [2024-10-09 11:18:47.675722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.771 qpair failed and we were unable to recover it. 00:38:27.771 [2024-10-09 11:18:47.676028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.771 [2024-10-09 11:18:47.676035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.771 qpair failed and we were unable to recover it. 00:38:27.771 [2024-10-09 11:18:47.676365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.771 [2024-10-09 11:18:47.676373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.771 qpair failed and we were unable to recover it. 
00:38:27.771 [2024-10-09 11:18:47.676683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.771 [2024-10-09 11:18:47.676691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.771 qpair failed and we were unable to recover it. 00:38:27.771 [2024-10-09 11:18:47.676995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.771 [2024-10-09 11:18:47.677002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.771 qpair failed and we were unable to recover it. 00:38:27.771 [2024-10-09 11:18:47.677316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.772 [2024-10-09 11:18:47.677325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.772 qpair failed and we were unable to recover it. 00:38:27.772 [2024-10-09 11:18:47.677657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.772 [2024-10-09 11:18:47.677665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.772 qpair failed and we were unable to recover it. 00:38:27.772 [2024-10-09 11:18:47.677967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.772 [2024-10-09 11:18:47.677975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.772 qpair failed and we were unable to recover it. 00:38:27.772 [2024-10-09 11:18:47.678286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.772 [2024-10-09 11:18:47.678295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.772 qpair failed and we were unable to recover it. 00:38:27.772 [2024-10-09 11:18:47.678614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.772 [2024-10-09 11:18:47.678623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.772 qpair failed and we were unable to recover it. 00:38:27.772 [2024-10-09 11:18:47.678945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.772 [2024-10-09 11:18:47.678953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.772 qpair failed and we were unable to recover it. 00:38:27.772 [2024-10-09 11:18:47.679259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.772 [2024-10-09 11:18:47.679268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.772 qpair failed and we were unable to recover it. 00:38:27.772 [2024-10-09 11:18:47.679618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.772 [2024-10-09 11:18:47.679627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.772 qpair failed and we were unable to recover it. 
00:38:27.772 [2024-10-09 11:18:47.679788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.772 [2024-10-09 11:18:47.679796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.772 qpair failed and we were unable to recover it. 00:38:27.772 [2024-10-09 11:18:47.680094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.772 [2024-10-09 11:18:47.680103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.772 qpair failed and we were unable to recover it. 00:38:27.772 [2024-10-09 11:18:47.680409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.772 [2024-10-09 11:18:47.680419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.772 qpair failed and we were unable to recover it. 00:38:27.772 [2024-10-09 11:18:47.680578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.772 [2024-10-09 11:18:47.680586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.772 qpair failed and we were unable to recover it. 00:38:27.772 [2024-10-09 11:18:47.680871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.772 [2024-10-09 11:18:47.680878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.772 qpair failed and we were unable to recover it. 00:38:27.772 [2024-10-09 11:18:47.681182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.772 [2024-10-09 11:18:47.681191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.772 qpair failed and we were unable to recover it. 00:38:27.772 [2024-10-09 11:18:47.681487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.772 [2024-10-09 11:18:47.681496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.772 qpair failed and we were unable to recover it. 00:38:27.772 [2024-10-09 11:18:47.681786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.772 [2024-10-09 11:18:47.681794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.772 qpair failed and we were unable to recover it. 00:38:27.772 [2024-10-09 11:18:47.682134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.772 [2024-10-09 11:18:47.682142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.772 qpair failed and we were unable to recover it. 00:38:27.772 [2024-10-09 11:18:47.682177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.772 [2024-10-09 11:18:47.682184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.772 qpair failed and we were unable to recover it. 
00:38:27.772 [2024-10-09 11:18:47.682489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.772 [2024-10-09 11:18:47.682497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.772 qpair failed and we were unable to recover it. 00:38:27.772 [2024-10-09 11:18:47.682852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.772 [2024-10-09 11:18:47.682860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.772 qpair failed and we were unable to recover it. 00:38:27.772 [2024-10-09 11:18:47.683027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.772 [2024-10-09 11:18:47.683035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.772 qpair failed and we were unable to recover it. 00:38:27.772 [2024-10-09 11:18:47.683250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.772 [2024-10-09 11:18:47.683258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.772 qpair failed and we were unable to recover it. 00:38:27.772 [2024-10-09 11:18:47.683534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.772 [2024-10-09 11:18:47.683542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.772 qpair failed and we were unable to recover it. 00:38:27.772 [2024-10-09 11:18:47.683868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.772 [2024-10-09 11:18:47.683876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.772 qpair failed and we were unable to recover it. 00:38:27.772 [2024-10-09 11:18:47.684181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.772 [2024-10-09 11:18:47.684189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.772 qpair failed and we were unable to recover it. 00:38:27.772 [2024-10-09 11:18:47.684338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.772 [2024-10-09 11:18:47.684346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.772 qpair failed and we were unable to recover it. 00:38:27.772 [2024-10-09 11:18:47.684628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.772 [2024-10-09 11:18:47.684637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.772 qpair failed and we were unable to recover it. 00:38:27.772 [2024-10-09 11:18:47.684950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.772 [2024-10-09 11:18:47.684959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.772 qpair failed and we were unable to recover it. 
00:38:27.772 [2024-10-09 11:18:47.685312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.772 [2024-10-09 11:18:47.685320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.772 qpair failed and we were unable to recover it. 00:38:27.772 [2024-10-09 11:18:47.685628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.772 [2024-10-09 11:18:47.685636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.772 qpair failed and we were unable to recover it. 00:38:27.772 [2024-10-09 11:18:47.685962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.772 [2024-10-09 11:18:47.685970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.772 qpair failed and we were unable to recover it. 00:38:27.772 [2024-10-09 11:18:47.686268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.772 [2024-10-09 11:18:47.686276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.772 qpair failed and we were unable to recover it. 00:38:27.772 [2024-10-09 11:18:47.686593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.772 [2024-10-09 11:18:47.686602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.772 qpair failed and we were unable to recover it. 00:38:27.772 [2024-10-09 11:18:47.686876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.772 [2024-10-09 11:18:47.686884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.772 qpair failed and we were unable to recover it. 00:38:27.772 [2024-10-09 11:18:47.687185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.772 [2024-10-09 11:18:47.687193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.772 qpair failed and we were unable to recover it. 00:38:27.772 [2024-10-09 11:18:47.687500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.772 [2024-10-09 11:18:47.687508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.772 qpair failed and we were unable to recover it. 00:38:27.772 [2024-10-09 11:18:47.687774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.772 [2024-10-09 11:18:47.687782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.772 qpair failed and we were unable to recover it. 00:38:27.772 [2024-10-09 11:18:47.688082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.772 [2024-10-09 11:18:47.688090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.772 qpair failed and we were unable to recover it. 
00:38:27.772 [2024-10-09 11:18:47.688397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.772 [2024-10-09 11:18:47.688406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.772 qpair failed and we were unable to recover it. 00:38:27.772 [2024-10-09 11:18:47.688696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.772 [2024-10-09 11:18:47.688704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.772 qpair failed and we were unable to recover it. 00:38:27.772 [2024-10-09 11:18:47.689023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.772 [2024-10-09 11:18:47.689031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.773 qpair failed and we were unable to recover it. 00:38:27.773 [2024-10-09 11:18:47.689332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.773 [2024-10-09 11:18:47.689341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.773 qpair failed and we were unable to recover it. 00:38:27.773 [2024-10-09 11:18:47.689648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.773 [2024-10-09 11:18:47.689656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.773 qpair failed and we were unable to recover it. 00:38:27.773 [2024-10-09 11:18:47.689957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.773 [2024-10-09 11:18:47.689966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.773 qpair failed and we were unable to recover it. 00:38:27.773 [2024-10-09 11:18:47.690269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.773 [2024-10-09 11:18:47.690277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.773 qpair failed and we were unable to recover it. 00:38:27.773 [2024-10-09 11:18:47.690571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.773 [2024-10-09 11:18:47.690579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.773 qpair failed and we were unable to recover it. 00:38:27.773 [2024-10-09 11:18:47.690898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.773 [2024-10-09 11:18:47.690907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.773 qpair failed and we were unable to recover it. 00:38:27.773 [2024-10-09 11:18:47.691062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.773 [2024-10-09 11:18:47.691071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.773 qpair failed and we were unable to recover it. 
00:38:27.773 [2024-10-09 11:18:47.691384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.773 [2024-10-09 11:18:47.691392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.773 qpair failed and we were unable to recover it. 00:38:27.773 [2024-10-09 11:18:47.691688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.773 [2024-10-09 11:18:47.691696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.773 qpair failed and we were unable to recover it. 00:38:27.773 [2024-10-09 11:18:47.692004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.773 [2024-10-09 11:18:47.692013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.773 qpair failed and we were unable to recover it. 00:38:27.773 [2024-10-09 11:18:47.692320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.773 [2024-10-09 11:18:47.692329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.773 qpair failed and we were unable to recover it. 00:38:27.773 [2024-10-09 11:18:47.692628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.773 [2024-10-09 11:18:47.692637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.773 qpair failed and we were unable to recover it. 00:38:27.773 [2024-10-09 11:18:47.692917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.773 [2024-10-09 11:18:47.692925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.773 qpair failed and we were unable to recover it. 00:38:27.773 [2024-10-09 11:18:47.693228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.773 [2024-10-09 11:18:47.693236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.773 qpair failed and we were unable to recover it. 00:38:27.773 [2024-10-09 11:18:47.693535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.773 [2024-10-09 11:18:47.693542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.773 qpair failed and we were unable to recover it. 00:38:27.773 [2024-10-09 11:18:47.693855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.773 [2024-10-09 11:18:47.693863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.773 qpair failed and we were unable to recover it. 00:38:27.773 [2024-10-09 11:18:47.694016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.773 [2024-10-09 11:18:47.694024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.773 qpair failed and we were unable to recover it. 
00:38:27.773 [2024-10-09 11:18:47.694292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.773 [2024-10-09 11:18:47.694300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.773 qpair failed and we were unable to recover it. 00:38:27.773 [2024-10-09 11:18:47.694608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.773 [2024-10-09 11:18:47.694617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.773 qpair failed and we were unable to recover it. 00:38:27.773 [2024-10-09 11:18:47.694962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.773 [2024-10-09 11:18:47.694971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.773 qpair failed and we were unable to recover it. 00:38:27.773 [2024-10-09 11:18:47.695272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.773 [2024-10-09 11:18:47.695280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.773 qpair failed and we were unable to recover it. 00:38:27.773 [2024-10-09 11:18:47.695587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.773 [2024-10-09 11:18:47.695595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.773 qpair failed and we were unable to recover it. 00:38:27.773 [2024-10-09 11:18:47.695918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.773 [2024-10-09 11:18:47.695925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.773 qpair failed and we were unable to recover it. 00:38:27.773 [2024-10-09 11:18:47.696233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.773 [2024-10-09 11:18:47.696241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.773 qpair failed and we were unable to recover it. 00:38:27.773 [2024-10-09 11:18:47.696528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.773 [2024-10-09 11:18:47.696536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.773 qpair failed and we were unable to recover it. 00:38:27.773 [2024-10-09 11:18:47.696863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.773 [2024-10-09 11:18:47.696872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.773 qpair failed and we were unable to recover it. 00:38:27.773 [2024-10-09 11:18:47.697183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.773 [2024-10-09 11:18:47.697192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.773 qpair failed and we were unable to recover it. 
00:38:27.773 [2024-10-09 11:18:47.697372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.773 [2024-10-09 11:18:47.697380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.773 qpair failed and we were unable to recover it. 00:38:27.773 [2024-10-09 11:18:47.697750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.773 [2024-10-09 11:18:47.697759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.773 qpair failed and we were unable to recover it. 00:38:27.773 [2024-10-09 11:18:47.698065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.773 [2024-10-09 11:18:47.698073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.773 qpair failed and we were unable to recover it. 00:38:27.773 [2024-10-09 11:18:47.698391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.773 [2024-10-09 11:18:47.698400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.773 qpair failed and we were unable to recover it. 00:38:27.773 [2024-10-09 11:18:47.698685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.773 [2024-10-09 11:18:47.698694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.773 qpair failed and we were unable to recover it. 00:38:27.773 [2024-10-09 11:18:47.698999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.773 [2024-10-09 11:18:47.699008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.773 qpair failed and we were unable to recover it. 00:38:27.773 [2024-10-09 11:18:47.699159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.773 [2024-10-09 11:18:47.699169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.773 qpair failed and we were unable to recover it. 00:38:27.773 [2024-10-09 11:18:47.699362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.773 [2024-10-09 11:18:47.699370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.773 qpair failed and we were unable to recover it. 00:38:27.773 [2024-10-09 11:18:47.699519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.773 [2024-10-09 11:18:47.699528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:27.773 qpair failed and we were unable to recover it. 00:38:27.773 [2024-10-09 11:18:47.699646] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:38:27.773 [... the qpair connect retries continue uninterrupted: the same connect() failed, errno = 111 / sock connection error / "qpair failed and we were unable to recover it." sequence repeats from 11:18:47.699824 through 11:18:47.748727, the log-clock prefix advancing from 00:38:27.773 to 00:38:28.060 along the way ...]
00:38:28.060 [2024-10-09 11:18:47.750100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:38:28.062 [2024-10-09 11:18:47.768256] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:38:28.062 [2024-10-09 11:18:47.768282] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:38:28.062 [2024-10-09 11:18:47.768290] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:38:28.062 [2024-10-09 11:18:47.768297] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:38:28.062 [2024-10-09 11:18:47.768303] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
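The five app_setup_trace notices describe SPDK's built-in tracing as set up for this test: the application was started with tracepoint group mask 0xFFFF, events are buffered in the shared-memory file /dev/shm/nvmf_trace.0, and a snapshot can be taken while the target is still running with the exact command the log itself suggests:

    spdk_trace -s nvmf -i 0

Copying /dev/shm/nvmf_trace.0 before the application exits preserves the same events for offline analysis.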
00:38:28.062 [2024-10-09 11:18:47.769855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:38:28.062 [2024-10-09 11:18:47.770006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:38:28.062 [2024-10-09 11:18:47.770142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:38:28.062 [2024-10-09 11:18:47.770143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
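The reactor notices show SPDK's event framework bringing up one polled-mode event loop per core, here on cores 4 through 7 (matching "Total cores available: 4" above). A minimal sketch of that pattern, assuming plain pthreads and glibc's pthread_setaffinity_np rather than SPDK's actual reactor code:

    /* reactors.c -- illustration of the one-loop-per-pinned-core pattern,
     * not SPDK's implementation. Build with: cc -pthread reactors.c */
    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>
    #include <stdio.h>

    static void *reactor_loop(void *arg)
    {
        long core = (long)arg;
        cpu_set_t set;

        /* Pin this thread to its core, as the framework pins reactors. */
        CPU_ZERO(&set);
        CPU_SET((int)core, &set);
        pthread_setaffinity_np(pthread_self(), sizeof(set), &set);

        printf("Reactor started on core %ld\n", core);
        /* A real reactor would busy-poll its registered pollers here. */
        return NULL;
    }

    int main(void)
    {
        pthread_t threads[4];
        long cores[4] = { 4, 5, 6, 7 };   /* core set taken from the log */

        for (int i = 0; i < 4; i++)
            pthread_create(&threads[i], NULL, reactor_loop, (void *)cores[i]);
        for (int i = 0; i < 4; i++)
            pthread_join(threads[i], NULL);
        return 0;
    }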
00:38:28.065 [2024-10-09 11:18:47.795910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.065 [2024-10-09 11:18:47.795917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.065 qpair failed and we were unable to recover it. 00:38:28.065 [2024-10-09 11:18:47.796146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.065 [2024-10-09 11:18:47.796154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.065 qpair failed and we were unable to recover it. 00:38:28.065 [2024-10-09 11:18:47.796390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.065 [2024-10-09 11:18:47.796399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.065 qpair failed and we were unable to recover it. 00:38:28.065 [2024-10-09 11:18:47.796579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.065 [2024-10-09 11:18:47.796588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.065 qpair failed and we were unable to recover it. 00:38:28.065 [2024-10-09 11:18:47.796934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.065 [2024-10-09 11:18:47.796942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.065 qpair failed and we were unable to recover it. 00:38:28.065 [2024-10-09 11:18:47.797271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.065 [2024-10-09 11:18:47.797281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.065 qpair failed and we were unable to recover it. 00:38:28.065 [2024-10-09 11:18:47.797596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.065 [2024-10-09 11:18:47.797604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.065 qpair failed and we were unable to recover it. 00:38:28.065 [2024-10-09 11:18:47.797928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.065 [2024-10-09 11:18:47.797936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.065 qpair failed and we were unable to recover it. 00:38:28.065 [2024-10-09 11:18:47.798266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.065 [2024-10-09 11:18:47.798274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.065 qpair failed and we were unable to recover it. 00:38:28.065 [2024-10-09 11:18:47.798489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.065 [2024-10-09 11:18:47.798497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.065 qpair failed and we were unable to recover it. 
00:38:28.065 [2024-10-09 11:18:47.798761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.065 [2024-10-09 11:18:47.798770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.065 qpair failed and we were unable to recover it. 00:38:28.065 [2024-10-09 11:18:47.798940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.065 [2024-10-09 11:18:47.798946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.065 qpair failed and we were unable to recover it. 00:38:28.065 [2024-10-09 11:18:47.799264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.065 [2024-10-09 11:18:47.799272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.065 qpair failed and we were unable to recover it. 00:38:28.065 [2024-10-09 11:18:47.799424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.065 [2024-10-09 11:18:47.799433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.065 qpair failed and we were unable to recover it. 00:38:28.065 [2024-10-09 11:18:47.799631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.065 [2024-10-09 11:18:47.799641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.065 qpair failed and we were unable to recover it. 00:38:28.065 [2024-10-09 11:18:47.799958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.065 [2024-10-09 11:18:47.799967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.065 qpair failed and we were unable to recover it. 00:38:28.065 [2024-10-09 11:18:47.800181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.065 [2024-10-09 11:18:47.800189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.065 qpair failed and we were unable to recover it. 00:38:28.065 [2024-10-09 11:18:47.800478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.065 [2024-10-09 11:18:47.800488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.065 qpair failed and we were unable to recover it. 00:38:28.065 [2024-10-09 11:18:47.800703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.065 [2024-10-09 11:18:47.800711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.065 qpair failed and we were unable to recover it. 00:38:28.065 [2024-10-09 11:18:47.800995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.065 [2024-10-09 11:18:47.801003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.065 qpair failed and we were unable to recover it. 
00:38:28.065 [2024-10-09 11:18:47.801324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.065 [2024-10-09 11:18:47.801334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.066 qpair failed and we were unable to recover it. 00:38:28.066 [2024-10-09 11:18:47.801499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.066 [2024-10-09 11:18:47.801507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.066 qpair failed and we were unable to recover it. 00:38:28.066 [2024-10-09 11:18:47.801837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.066 [2024-10-09 11:18:47.801845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.066 qpair failed and we were unable to recover it. 00:38:28.066 [2024-10-09 11:18:47.801898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.066 [2024-10-09 11:18:47.801905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.066 qpair failed and we were unable to recover it. 00:38:28.066 [2024-10-09 11:18:47.802163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.066 [2024-10-09 11:18:47.802172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.066 qpair failed and we were unable to recover it. 00:38:28.066 [2024-10-09 11:18:47.802336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.066 [2024-10-09 11:18:47.802345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.066 qpair failed and we were unable to recover it. 00:38:28.066 [2024-10-09 11:18:47.802643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.066 [2024-10-09 11:18:47.802652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.066 qpair failed and we were unable to recover it. 00:38:28.066 [2024-10-09 11:18:47.802826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.066 [2024-10-09 11:18:47.802834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.066 qpair failed and we were unable to recover it. 00:38:28.066 [2024-10-09 11:18:47.803158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.066 [2024-10-09 11:18:47.803168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.066 qpair failed and we were unable to recover it. 00:38:28.066 [2024-10-09 11:18:47.803270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.066 [2024-10-09 11:18:47.803278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.066 qpair failed and we were unable to recover it. 
00:38:28.066 [2024-10-09 11:18:47.803421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.066 [2024-10-09 11:18:47.803430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.066 qpair failed and we were unable to recover it. 00:38:28.066 [2024-10-09 11:18:47.803758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.066 [2024-10-09 11:18:47.803767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.066 qpair failed and we were unable to recover it. 00:38:28.066 [2024-10-09 11:18:47.804074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.066 [2024-10-09 11:18:47.804083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.066 qpair failed and we were unable to recover it. 00:38:28.066 [2024-10-09 11:18:47.804269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.066 [2024-10-09 11:18:47.804279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.066 qpair failed and we were unable to recover it. 00:38:28.066 [2024-10-09 11:18:47.804605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.066 [2024-10-09 11:18:47.804614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.066 qpair failed and we were unable to recover it. 00:38:28.066 [2024-10-09 11:18:47.804805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.066 [2024-10-09 11:18:47.804812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.066 qpair failed and we were unable to recover it. 00:38:28.066 [2024-10-09 11:18:47.805095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.066 [2024-10-09 11:18:47.805103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.066 qpair failed and we were unable to recover it. 00:38:28.066 [2024-10-09 11:18:47.805307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.066 [2024-10-09 11:18:47.805315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.066 qpair failed and we were unable to recover it. 00:38:28.066 [2024-10-09 11:18:47.805683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.066 [2024-10-09 11:18:47.805691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.066 qpair failed and we were unable to recover it. 00:38:28.066 [2024-10-09 11:18:47.806002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.066 [2024-10-09 11:18:47.806011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.066 qpair failed and we were unable to recover it. 
00:38:28.066 [2024-10-09 11:18:47.806325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.066 [2024-10-09 11:18:47.806333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.066 qpair failed and we were unable to recover it. 00:38:28.066 [2024-10-09 11:18:47.806593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.066 [2024-10-09 11:18:47.806601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.066 qpair failed and we were unable to recover it. 00:38:28.066 [2024-10-09 11:18:47.806938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.066 [2024-10-09 11:18:47.806947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.066 qpair failed and we were unable to recover it. 00:38:28.066 [2024-10-09 11:18:47.807274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.066 [2024-10-09 11:18:47.807283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.066 qpair failed and we were unable to recover it. 00:38:28.066 [2024-10-09 11:18:47.807598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.066 [2024-10-09 11:18:47.807607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.066 qpair failed and we were unable to recover it. 00:38:28.066 [2024-10-09 11:18:47.807788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.066 [2024-10-09 11:18:47.807795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.066 qpair failed and we were unable to recover it. 00:38:28.066 [2024-10-09 11:18:47.808115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.066 [2024-10-09 11:18:47.808124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.066 qpair failed and we were unable to recover it. 00:38:28.066 [2024-10-09 11:18:47.808279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.066 [2024-10-09 11:18:47.808288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.066 qpair failed and we were unable to recover it. 00:38:28.066 [2024-10-09 11:18:47.808454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.066 [2024-10-09 11:18:47.808463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.066 qpair failed and we were unable to recover it. 00:38:28.066 [2024-10-09 11:18:47.808779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.066 [2024-10-09 11:18:47.808788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.066 qpair failed and we were unable to recover it. 
00:38:28.066 [2024-10-09 11:18:47.808973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.066 [2024-10-09 11:18:47.808980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.066 qpair failed and we were unable to recover it. 00:38:28.066 [2024-10-09 11:18:47.809272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.066 [2024-10-09 11:18:47.809281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.066 qpair failed and we were unable to recover it. 00:38:28.066 [2024-10-09 11:18:47.809601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.066 [2024-10-09 11:18:47.809609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.066 qpair failed and we were unable to recover it. 00:38:28.066 [2024-10-09 11:18:47.809777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.066 [2024-10-09 11:18:47.809785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.066 qpair failed and we were unable to recover it. 00:38:28.066 [2024-10-09 11:18:47.810143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.066 [2024-10-09 11:18:47.810151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.066 qpair failed and we were unable to recover it. 00:38:28.066 [2024-10-09 11:18:47.810452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.066 [2024-10-09 11:18:47.810460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.066 qpair failed and we were unable to recover it. 00:38:28.066 [2024-10-09 11:18:47.810778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.066 [2024-10-09 11:18:47.810786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.067 qpair failed and we were unable to recover it. 00:38:28.067 [2024-10-09 11:18:47.811016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.067 [2024-10-09 11:18:47.811024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.067 qpair failed and we were unable to recover it. 00:38:28.067 [2024-10-09 11:18:47.811318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.067 [2024-10-09 11:18:47.811326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.067 qpair failed and we were unable to recover it. 00:38:28.067 [2024-10-09 11:18:47.811636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.067 [2024-10-09 11:18:47.811646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.067 qpair failed and we were unable to recover it. 
00:38:28.067 [2024-10-09 11:18:47.811973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.067 [2024-10-09 11:18:47.811983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.067 qpair failed and we were unable to recover it. 00:38:28.067 [2024-10-09 11:18:47.812300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.067 [2024-10-09 11:18:47.812308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.067 qpair failed and we were unable to recover it. 00:38:28.067 [2024-10-09 11:18:47.812610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.067 [2024-10-09 11:18:47.812619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.067 qpair failed and we were unable to recover it. 00:38:28.067 [2024-10-09 11:18:47.812780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.067 [2024-10-09 11:18:47.812788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.067 qpair failed and we were unable to recover it. 00:38:28.067 [2024-10-09 11:18:47.813115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.067 [2024-10-09 11:18:47.813123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.067 qpair failed and we were unable to recover it. 00:38:28.067 [2024-10-09 11:18:47.813295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.067 [2024-10-09 11:18:47.813303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.067 qpair failed and we were unable to recover it. 00:38:28.067 [2024-10-09 11:18:47.813732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.067 [2024-10-09 11:18:47.813740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.067 qpair failed and we were unable to recover it. 00:38:28.067 [2024-10-09 11:18:47.814030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.067 [2024-10-09 11:18:47.814039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.067 qpair failed and we were unable to recover it. 00:38:28.067 [2024-10-09 11:18:47.814368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.067 [2024-10-09 11:18:47.814377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.067 qpair failed and we were unable to recover it. 00:38:28.067 [2024-10-09 11:18:47.814584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.067 [2024-10-09 11:18:47.814592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.067 qpair failed and we were unable to recover it. 
00:38:28.067 [2024-10-09 11:18:47.814917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.067 [2024-10-09 11:18:47.814926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.067 qpair failed and we were unable to recover it. 00:38:28.067 [2024-10-09 11:18:47.815236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.067 [2024-10-09 11:18:47.815246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.067 qpair failed and we were unable to recover it. 00:38:28.067 [2024-10-09 11:18:47.815420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.067 [2024-10-09 11:18:47.815428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.067 qpair failed and we were unable to recover it. 00:38:28.067 [2024-10-09 11:18:47.815876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.067 [2024-10-09 11:18:47.815885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.067 qpair failed and we were unable to recover it. 00:38:28.067 [2024-10-09 11:18:47.816234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.067 [2024-10-09 11:18:47.816243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.067 qpair failed and we were unable to recover it. 00:38:28.067 [2024-10-09 11:18:47.816493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.067 [2024-10-09 11:18:47.816502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.067 qpair failed and we were unable to recover it. 00:38:28.067 [2024-10-09 11:18:47.816736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.067 [2024-10-09 11:18:47.816744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.067 qpair failed and we were unable to recover it. 00:38:28.067 [2024-10-09 11:18:47.817050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.067 [2024-10-09 11:18:47.817059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.067 qpair failed and we were unable to recover it. 00:38:28.067 [2024-10-09 11:18:47.817371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.067 [2024-10-09 11:18:47.817379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.067 qpair failed and we were unable to recover it. 00:38:28.067 [2024-10-09 11:18:47.817720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.067 [2024-10-09 11:18:47.817729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.067 qpair failed and we were unable to recover it. 
00:38:28.067 [2024-10-09 11:18:47.817913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.067 [2024-10-09 11:18:47.817921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.067 qpair failed and we were unable to recover it. 00:38:28.067 [2024-10-09 11:18:47.817990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.067 [2024-10-09 11:18:47.817996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.067 qpair failed and we were unable to recover it. 00:38:28.067 [2024-10-09 11:18:47.818306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.067 [2024-10-09 11:18:47.818314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.067 qpair failed and we were unable to recover it. 00:38:28.067 [2024-10-09 11:18:47.818605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.067 [2024-10-09 11:18:47.818613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.067 qpair failed and we were unable to recover it. 00:38:28.067 [2024-10-09 11:18:47.818784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.067 [2024-10-09 11:18:47.818792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.067 qpair failed and we were unable to recover it. 00:38:28.067 [2024-10-09 11:18:47.819102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.067 [2024-10-09 11:18:47.819110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.067 qpair failed and we were unable to recover it. 00:38:28.067 [2024-10-09 11:18:47.819425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.067 [2024-10-09 11:18:47.819434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.067 qpair failed and we were unable to recover it. 00:38:28.067 [2024-10-09 11:18:47.819808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.067 [2024-10-09 11:18:47.819816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.067 qpair failed and we were unable to recover it. 00:38:28.067 [2024-10-09 11:18:47.820129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.067 [2024-10-09 11:18:47.820138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.067 qpair failed and we were unable to recover it. 00:38:28.067 [2024-10-09 11:18:47.820307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.067 [2024-10-09 11:18:47.820316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.067 qpair failed and we were unable to recover it. 
00:38:28.067 [2024-10-09 11:18:47.820503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.067 [2024-10-09 11:18:47.820513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.067 qpair failed and we were unable to recover it. 00:38:28.067 [2024-10-09 11:18:47.820802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.067 [2024-10-09 11:18:47.820810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.067 qpair failed and we were unable to recover it. 00:38:28.067 [2024-10-09 11:18:47.821127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.067 [2024-10-09 11:18:47.821135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.067 qpair failed and we were unable to recover it. 00:38:28.067 [2024-10-09 11:18:47.821448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.067 [2024-10-09 11:18:47.821457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.067 qpair failed and we were unable to recover it. 00:38:28.067 [2024-10-09 11:18:47.821765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.068 [2024-10-09 11:18:47.821773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.068 qpair failed and we were unable to recover it. 00:38:28.068 [2024-10-09 11:18:47.822092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.068 [2024-10-09 11:18:47.822101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.068 qpair failed and we were unable to recover it. 00:38:28.068 [2024-10-09 11:18:47.822306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.068 [2024-10-09 11:18:47.822314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.068 qpair failed and we were unable to recover it. 00:38:28.068 [2024-10-09 11:18:47.822631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.068 [2024-10-09 11:18:47.822640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.068 qpair failed and we were unable to recover it. 00:38:28.068 [2024-10-09 11:18:47.822812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.068 [2024-10-09 11:18:47.822820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.068 qpair failed and we were unable to recover it. 00:38:28.068 [2024-10-09 11:18:47.823129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.068 [2024-10-09 11:18:47.823137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.068 qpair failed and we were unable to recover it. 
00:38:28.068 [2024-10-09 11:18:47.823495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.068 [2024-10-09 11:18:47.823505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.068 qpair failed and we were unable to recover it. 00:38:28.068 [2024-10-09 11:18:47.823813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.068 [2024-10-09 11:18:47.823820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.068 qpair failed and we were unable to recover it. 00:38:28.068 [2024-10-09 11:18:47.824128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.068 [2024-10-09 11:18:47.824137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.068 qpair failed and we were unable to recover it. 00:38:28.068 [2024-10-09 11:18:47.824291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.068 [2024-10-09 11:18:47.824299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.068 qpair failed and we were unable to recover it. 00:38:28.068 [2024-10-09 11:18:47.824626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.068 [2024-10-09 11:18:47.824634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.068 qpair failed and we were unable to recover it. 00:38:28.068 [2024-10-09 11:18:47.824883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.068 [2024-10-09 11:18:47.824891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.068 qpair failed and we were unable to recover it. 00:38:28.068 [2024-10-09 11:18:47.825197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.068 [2024-10-09 11:18:47.825204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.068 qpair failed and we were unable to recover it. 00:38:28.068 [2024-10-09 11:18:47.825521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.068 [2024-10-09 11:18:47.825529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.068 qpair failed and we were unable to recover it. 00:38:28.068 [2024-10-09 11:18:47.825848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.068 [2024-10-09 11:18:47.825858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.068 qpair failed and we were unable to recover it. 00:38:28.068 [2024-10-09 11:18:47.826020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.068 [2024-10-09 11:18:47.826028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.068 qpair failed and we were unable to recover it. 
00:38:28.068 [2024-10-09 11:18:47.826300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.068 [2024-10-09 11:18:47.826308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.068 qpair failed and we were unable to recover it. 00:38:28.068 [2024-10-09 11:18:47.826648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.068 [2024-10-09 11:18:47.826656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.068 qpair failed and we were unable to recover it. 00:38:28.068 [2024-10-09 11:18:47.826699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.068 [2024-10-09 11:18:47.826706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.068 qpair failed and we were unable to recover it. 00:38:28.068 [2024-10-09 11:18:47.826876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.068 [2024-10-09 11:18:47.826884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.068 qpair failed and we were unable to recover it. 00:38:28.068 [2024-10-09 11:18:47.827046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.068 [2024-10-09 11:18:47.827054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.068 qpair failed and we were unable to recover it. 00:38:28.068 [2024-10-09 11:18:47.827228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.068 [2024-10-09 11:18:47.827237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.068 qpair failed and we were unable to recover it. 00:38:28.068 [2024-10-09 11:18:47.827430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.068 [2024-10-09 11:18:47.827438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.068 qpair failed and we were unable to recover it. 00:38:28.068 [2024-10-09 11:18:47.827598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.068 [2024-10-09 11:18:47.827606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.068 qpair failed and we were unable to recover it. 00:38:28.068 [2024-10-09 11:18:47.827918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.068 [2024-10-09 11:18:47.827926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.068 qpair failed and we were unable to recover it. 00:38:28.068 [2024-10-09 11:18:47.828244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.068 [2024-10-09 11:18:47.828252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.068 qpair failed and we were unable to recover it. 
00:38:28.068 [2024-10-09 11:18:47.828291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.068 [2024-10-09 11:18:47.828299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.068 qpair failed and we were unable to recover it. 00:38:28.068 [2024-10-09 11:18:47.828610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.068 [2024-10-09 11:18:47.828618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.068 qpair failed and we were unable to recover it. 00:38:28.068 [2024-10-09 11:18:47.828933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.068 [2024-10-09 11:18:47.828941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.068 qpair failed and we were unable to recover it. 00:38:28.068 [2024-10-09 11:18:47.829252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.068 [2024-10-09 11:18:47.829260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.068 qpair failed and we were unable to recover it. 00:38:28.068 [2024-10-09 11:18:47.829597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.068 [2024-10-09 11:18:47.829606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.068 qpair failed and we were unable to recover it. 00:38:28.068 [2024-10-09 11:18:47.829919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.068 [2024-10-09 11:18:47.829927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.068 qpair failed and we were unable to recover it. 00:38:28.068 [2024-10-09 11:18:47.830233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.068 [2024-10-09 11:18:47.830242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.068 qpair failed and we were unable to recover it. 00:38:28.068 [2024-10-09 11:18:47.830552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.068 [2024-10-09 11:18:47.830560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.068 qpair failed and we were unable to recover it. 00:38:28.068 [2024-10-09 11:18:47.830874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.068 [2024-10-09 11:18:47.830883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.068 qpair failed and we were unable to recover it. 00:38:28.069 [2024-10-09 11:18:47.831038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.069 [2024-10-09 11:18:47.831046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.069 qpair failed and we were unable to recover it. 
00:38:28.069 [2024-10-09 11:18:47.831314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.069 [2024-10-09 11:18:47.831322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.069 qpair failed and we were unable to recover it. 00:38:28.069 [2024-10-09 11:18:47.831498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.069 [2024-10-09 11:18:47.831507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.069 qpair failed and we were unable to recover it. 00:38:28.069 [2024-10-09 11:18:47.831826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.069 [2024-10-09 11:18:47.831834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.069 qpair failed and we were unable to recover it. 00:38:28.069 [2024-10-09 11:18:47.832007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.069 [2024-10-09 11:18:47.832014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.069 qpair failed and we were unable to recover it. 00:38:28.069 [2024-10-09 11:18:47.832047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.069 [2024-10-09 11:18:47.832053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.069 qpair failed and we were unable to recover it. 00:38:28.069 [2024-10-09 11:18:47.832254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.069 [2024-10-09 11:18:47.832262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.069 qpair failed and we were unable to recover it. 00:38:28.069 [2024-10-09 11:18:47.832569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.069 [2024-10-09 11:18:47.832577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.069 qpair failed and we were unable to recover it. 00:38:28.069 [2024-10-09 11:18:47.832885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.069 [2024-10-09 11:18:47.832894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.069 qpair failed and we were unable to recover it. 00:38:28.069 [2024-10-09 11:18:47.833066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.069 [2024-10-09 11:18:47.833075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.069 qpair failed and we were unable to recover it. 00:38:28.069 [2024-10-09 11:18:47.833397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.069 [2024-10-09 11:18:47.833405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.069 qpair failed and we were unable to recover it. 
00:38:28.069 [2024-10-09 11:18:47.833724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.069 [2024-10-09 11:18:47.833735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.069 qpair failed and we were unable to recover it. 00:38:28.069 [2024-10-09 11:18:47.833946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.069 [2024-10-09 11:18:47.833955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.069 qpair failed and we were unable to recover it. 00:38:28.069 [2024-10-09 11:18:47.834116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.069 [2024-10-09 11:18:47.834125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.069 qpair failed and we were unable to recover it. 00:38:28.069 [2024-10-09 11:18:47.834421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.069 [2024-10-09 11:18:47.834430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.069 qpair failed and we were unable to recover it. 00:38:28.069 [2024-10-09 11:18:47.834742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.069 [2024-10-09 11:18:47.834750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.069 qpair failed and we were unable to recover it. 00:38:28.069 [2024-10-09 11:18:47.835028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.069 [2024-10-09 11:18:47.835044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.069 qpair failed and we were unable to recover it. 00:38:28.069 [2024-10-09 11:18:47.835332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.069 [2024-10-09 11:18:47.835340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.069 qpair failed and we were unable to recover it. 00:38:28.069 [2024-10-09 11:18:47.835717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.069 [2024-10-09 11:18:47.835726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.069 qpair failed and we were unable to recover it. 00:38:28.069 [2024-10-09 11:18:47.836087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.069 [2024-10-09 11:18:47.836096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.069 qpair failed and we were unable to recover it. 00:38:28.069 [2024-10-09 11:18:47.836386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.069 [2024-10-09 11:18:47.836395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.069 qpair failed and we were unable to recover it. 
00:38:28.074 [2024-10-09 11:18:47.890521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.074 [2024-10-09 11:18:47.890529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.074 qpair failed and we were unable to recover it. 00:38:28.074 [2024-10-09 11:18:47.890857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.074 [2024-10-09 11:18:47.890866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.074 qpair failed and we were unable to recover it. 00:38:28.074 [2024-10-09 11:18:47.891188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.074 [2024-10-09 11:18:47.891196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.074 qpair failed and we were unable to recover it. 00:38:28.074 [2024-10-09 11:18:47.891347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.074 [2024-10-09 11:18:47.891355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.074 qpair failed and we were unable to recover it. 00:38:28.074 [2024-10-09 11:18:47.891553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.074 [2024-10-09 11:18:47.891561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.074 qpair failed and we were unable to recover it. 00:38:28.074 [2024-10-09 11:18:47.891759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.074 [2024-10-09 11:18:47.891767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.074 qpair failed and we were unable to recover it. 00:38:28.074 [2024-10-09 11:18:47.892054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.074 [2024-10-09 11:18:47.892062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.075 qpair failed and we were unable to recover it. 00:38:28.075 [2024-10-09 11:18:47.892274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.075 [2024-10-09 11:18:47.892282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.075 qpair failed and we were unable to recover it. 00:38:28.075 [2024-10-09 11:18:47.892614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.075 [2024-10-09 11:18:47.892623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.075 qpair failed and we were unable to recover it. 00:38:28.075 [2024-10-09 11:18:47.892929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.075 [2024-10-09 11:18:47.892938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.075 qpair failed and we were unable to recover it. 
00:38:28.075 [2024-10-09 11:18:47.893251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.075 [2024-10-09 11:18:47.893259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.075 qpair failed and we were unable to recover it. 00:38:28.075 [2024-10-09 11:18:47.893590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.075 [2024-10-09 11:18:47.893599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.075 qpair failed and we were unable to recover it. 00:38:28.075 [2024-10-09 11:18:47.893944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.075 [2024-10-09 11:18:47.893952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.075 qpair failed and we were unable to recover it. 00:38:28.075 [2024-10-09 11:18:47.894152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.075 [2024-10-09 11:18:47.894160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.075 qpair failed and we were unable to recover it. 00:38:28.075 [2024-10-09 11:18:47.894475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.075 [2024-10-09 11:18:47.894484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.075 qpair failed and we were unable to recover it. 00:38:28.075 [2024-10-09 11:18:47.894640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.075 [2024-10-09 11:18:47.894649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.075 qpair failed and we were unable to recover it. 00:38:28.075 [2024-10-09 11:18:47.894955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.075 [2024-10-09 11:18:47.894963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.075 qpair failed and we were unable to recover it. 00:38:28.075 [2024-10-09 11:18:47.895126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.075 [2024-10-09 11:18:47.895136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.075 qpair failed and we were unable to recover it. 00:38:28.075 [2024-10-09 11:18:47.895416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.075 [2024-10-09 11:18:47.895424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.075 qpair failed and we were unable to recover it. 00:38:28.075 [2024-10-09 11:18:47.895746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.075 [2024-10-09 11:18:47.895754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.075 qpair failed and we were unable to recover it. 
00:38:28.075 [2024-10-09 11:18:47.895890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.075 [2024-10-09 11:18:47.895897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.075 qpair failed and we were unable to recover it. 00:38:28.075 [2024-10-09 11:18:47.896112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.075 [2024-10-09 11:18:47.896120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.075 qpair failed and we were unable to recover it. 00:38:28.075 [2024-10-09 11:18:47.896297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.075 [2024-10-09 11:18:47.896305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.075 qpair failed and we were unable to recover it. 00:38:28.075 [2024-10-09 11:18:47.896595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.075 [2024-10-09 11:18:47.896603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.075 qpair failed and we were unable to recover it. 00:38:28.075 [2024-10-09 11:18:47.896964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.075 [2024-10-09 11:18:47.896973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.075 qpair failed and we were unable to recover it. 00:38:28.075 [2024-10-09 11:18:47.897275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.075 [2024-10-09 11:18:47.897284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.075 qpair failed and we were unable to recover it. 00:38:28.075 [2024-10-09 11:18:47.897609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.075 [2024-10-09 11:18:47.897617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.075 qpair failed and we were unable to recover it. 00:38:28.075 [2024-10-09 11:18:47.897800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.075 [2024-10-09 11:18:47.897808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.075 qpair failed and we were unable to recover it. 00:38:28.075 [2024-10-09 11:18:47.898146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.075 [2024-10-09 11:18:47.898154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.075 qpair failed and we were unable to recover it. 00:38:28.075 [2024-10-09 11:18:47.898328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.075 [2024-10-09 11:18:47.898336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.075 qpair failed and we were unable to recover it. 
00:38:28.075 [2024-10-09 11:18:47.898670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.075 [2024-10-09 11:18:47.898679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.075 qpair failed and we were unable to recover it. 00:38:28.075 [2024-10-09 11:18:47.899045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.075 [2024-10-09 11:18:47.899054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.075 qpair failed and we were unable to recover it. 00:38:28.075 [2024-10-09 11:18:47.899363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.075 [2024-10-09 11:18:47.899372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.075 qpair failed and we were unable to recover it. 00:38:28.075 [2024-10-09 11:18:47.899678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.075 [2024-10-09 11:18:47.899687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.075 qpair failed and we were unable to recover it. 00:38:28.075 [2024-10-09 11:18:47.899997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.075 [2024-10-09 11:18:47.900005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.075 qpair failed and we were unable to recover it. 00:38:28.075 [2024-10-09 11:18:47.900307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.075 [2024-10-09 11:18:47.900316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.075 qpair failed and we were unable to recover it. 00:38:28.075 [2024-10-09 11:18:47.900618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.075 [2024-10-09 11:18:47.900626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.075 qpair failed and we were unable to recover it. 00:38:28.075 [2024-10-09 11:18:47.900943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.075 [2024-10-09 11:18:47.900952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.075 qpair failed and we were unable to recover it. 00:38:28.075 [2024-10-09 11:18:47.901259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.075 [2024-10-09 11:18:47.901267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.075 qpair failed and we were unable to recover it. 00:38:28.075 [2024-10-09 11:18:47.901644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.075 [2024-10-09 11:18:47.901653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.075 qpair failed and we were unable to recover it. 
00:38:28.075 [2024-10-09 11:18:47.901968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.075 [2024-10-09 11:18:47.901975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.075 qpair failed and we were unable to recover it. 00:38:28.075 [2024-10-09 11:18:47.902127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.075 [2024-10-09 11:18:47.902136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.075 qpair failed and we were unable to recover it. 00:38:28.075 [2024-10-09 11:18:47.902452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.075 [2024-10-09 11:18:47.902460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.075 qpair failed and we were unable to recover it. 00:38:28.075 [2024-10-09 11:18:47.902789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.075 [2024-10-09 11:18:47.902797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.075 qpair failed and we were unable to recover it. 00:38:28.075 [2024-10-09 11:18:47.903108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.075 [2024-10-09 11:18:47.903118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.075 qpair failed and we were unable to recover it. 00:38:28.075 [2024-10-09 11:18:47.903426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.076 [2024-10-09 11:18:47.903435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.076 qpair failed and we were unable to recover it. 00:38:28.076 [2024-10-09 11:18:47.903740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.076 [2024-10-09 11:18:47.903748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.076 qpair failed and we were unable to recover it. 00:38:28.076 [2024-10-09 11:18:47.904077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.076 [2024-10-09 11:18:47.904086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.076 qpair failed and we were unable to recover it. 00:38:28.076 [2024-10-09 11:18:47.904264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.076 [2024-10-09 11:18:47.904273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.076 qpair failed and we were unable to recover it. 00:38:28.076 [2024-10-09 11:18:47.904581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.076 [2024-10-09 11:18:47.904589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.076 qpair failed and we were unable to recover it. 
00:38:28.076 [2024-10-09 11:18:47.904896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.076 [2024-10-09 11:18:47.904905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.076 qpair failed and we were unable to recover it. 00:38:28.076 [2024-10-09 11:18:47.905208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.076 [2024-10-09 11:18:47.905216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.076 qpair failed and we were unable to recover it. 00:38:28.076 [2024-10-09 11:18:47.905525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.076 [2024-10-09 11:18:47.905533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.076 qpair failed and we were unable to recover it. 00:38:28.076 [2024-10-09 11:18:47.905750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.076 [2024-10-09 11:18:47.905759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.076 qpair failed and we were unable to recover it. 00:38:28.076 [2024-10-09 11:18:47.906053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.076 [2024-10-09 11:18:47.906061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.076 qpair failed and we were unable to recover it. 00:38:28.076 [2024-10-09 11:18:47.906238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.076 [2024-10-09 11:18:47.906246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.076 qpair failed and we were unable to recover it. 00:38:28.076 [2024-10-09 11:18:47.906540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.076 [2024-10-09 11:18:47.906548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.076 qpair failed and we were unable to recover it. 00:38:28.076 [2024-10-09 11:18:47.906855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.076 [2024-10-09 11:18:47.906865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.076 qpair failed and we were unable to recover it. 00:38:28.076 [2024-10-09 11:18:47.907169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.076 [2024-10-09 11:18:47.907177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.076 qpair failed and we were unable to recover it. 00:38:28.076 [2024-10-09 11:18:47.907351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.076 [2024-10-09 11:18:47.907359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.076 qpair failed and we were unable to recover it. 
00:38:28.076 [2024-10-09 11:18:47.907645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.076 [2024-10-09 11:18:47.907653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.076 qpair failed and we were unable to recover it. 00:38:28.076 [2024-10-09 11:18:47.907968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.076 [2024-10-09 11:18:47.907977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.076 qpair failed and we were unable to recover it. 00:38:28.076 [2024-10-09 11:18:47.908275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.076 [2024-10-09 11:18:47.908283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.076 qpair failed and we were unable to recover it. 00:38:28.076 [2024-10-09 11:18:47.908579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.076 [2024-10-09 11:18:47.908587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.076 qpair failed and we were unable to recover it. 00:38:28.076 [2024-10-09 11:18:47.908779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.076 [2024-10-09 11:18:47.908787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.076 qpair failed and we were unable to recover it. 00:38:28.076 [2024-10-09 11:18:47.909067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.076 [2024-10-09 11:18:47.909075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.076 qpair failed and we were unable to recover it. 00:38:28.076 [2024-10-09 11:18:47.909388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.076 [2024-10-09 11:18:47.909396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.076 qpair failed and we were unable to recover it. 00:38:28.076 [2024-10-09 11:18:47.909580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.076 [2024-10-09 11:18:47.909588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.076 qpair failed and we were unable to recover it. 00:38:28.076 [2024-10-09 11:18:47.909873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.076 [2024-10-09 11:18:47.909881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.076 qpair failed and we were unable to recover it. 00:38:28.076 [2024-10-09 11:18:47.910188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.076 [2024-10-09 11:18:47.910197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.076 qpair failed and we were unable to recover it. 
00:38:28.076 [2024-10-09 11:18:47.910504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.076 [2024-10-09 11:18:47.910512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.076 qpair failed and we were unable to recover it. 00:38:28.076 [2024-10-09 11:18:47.910833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.076 [2024-10-09 11:18:47.910841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.076 qpair failed and we were unable to recover it. 00:38:28.076 [2024-10-09 11:18:47.910991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.076 [2024-10-09 11:18:47.910999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.076 qpair failed and we were unable to recover it. 00:38:28.076 [2024-10-09 11:18:47.911327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.076 [2024-10-09 11:18:47.911336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.076 qpair failed and we were unable to recover it. 00:38:28.076 [2024-10-09 11:18:47.911629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.076 [2024-10-09 11:18:47.911637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.076 qpair failed and we were unable to recover it. 00:38:28.076 [2024-10-09 11:18:47.911802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.076 [2024-10-09 11:18:47.911810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.076 qpair failed and we were unable to recover it. 00:38:28.076 [2024-10-09 11:18:47.912122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.076 [2024-10-09 11:18:47.912130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.076 qpair failed and we were unable to recover it. 00:38:28.076 [2024-10-09 11:18:47.912438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.076 [2024-10-09 11:18:47.912447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.076 qpair failed and we were unable to recover it. 00:38:28.076 [2024-10-09 11:18:47.912770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.076 [2024-10-09 11:18:47.912779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.076 qpair failed and we were unable to recover it. 00:38:28.076 [2024-10-09 11:18:47.913160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.076 [2024-10-09 11:18:47.913168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.076 qpair failed and we were unable to recover it. 
00:38:28.076 [2024-10-09 11:18:47.913475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.076 [2024-10-09 11:18:47.913485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.076 qpair failed and we were unable to recover it. 00:38:28.076 [2024-10-09 11:18:47.913794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.076 [2024-10-09 11:18:47.913802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.076 qpair failed and we were unable to recover it. 00:38:28.076 [2024-10-09 11:18:47.914156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.076 [2024-10-09 11:18:47.914164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.076 qpair failed and we were unable to recover it. 00:38:28.076 [2024-10-09 11:18:47.914316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.076 [2024-10-09 11:18:47.914324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.076 qpair failed and we were unable to recover it. 00:38:28.076 [2024-10-09 11:18:47.914541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.076 [2024-10-09 11:18:47.914549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.076 qpair failed and we were unable to recover it. 00:38:28.077 [2024-10-09 11:18:47.914838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.077 [2024-10-09 11:18:47.914846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.077 qpair failed and we were unable to recover it. 00:38:28.077 [2024-10-09 11:18:47.915029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.077 [2024-10-09 11:18:47.915037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.077 qpair failed and we were unable to recover it. 00:38:28.077 [2024-10-09 11:18:47.915196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.077 [2024-10-09 11:18:47.915204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.077 qpair failed and we were unable to recover it. 00:38:28.077 [2024-10-09 11:18:47.915374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.077 [2024-10-09 11:18:47.915384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.077 qpair failed and we were unable to recover it. 00:38:28.077 [2024-10-09 11:18:47.915667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.077 [2024-10-09 11:18:47.915676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.077 qpair failed and we were unable to recover it. 
00:38:28.077 [2024-10-09 11:18:47.915718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.077 [2024-10-09 11:18:47.915724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.077 qpair failed and we were unable to recover it. 00:38:28.077 [2024-10-09 11:18:47.915993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.077 [2024-10-09 11:18:47.916001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.077 qpair failed and we were unable to recover it. 00:38:28.077 [2024-10-09 11:18:47.916152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.077 [2024-10-09 11:18:47.916160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.077 qpair failed and we were unable to recover it. 00:38:28.077 [2024-10-09 11:18:47.916441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.077 [2024-10-09 11:18:47.916449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.077 qpair failed and we were unable to recover it. 00:38:28.077 [2024-10-09 11:18:47.916832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.077 [2024-10-09 11:18:47.916841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.077 qpair failed and we were unable to recover it. 00:38:28.077 [2024-10-09 11:18:47.917141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.077 [2024-10-09 11:18:47.917150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.077 qpair failed and we were unable to recover it. 00:38:28.077 [2024-10-09 11:18:47.917469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.077 [2024-10-09 11:18:47.917477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.077 qpair failed and we were unable to recover it. 00:38:28.077 [2024-10-09 11:18:47.917860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.077 [2024-10-09 11:18:47.917870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.077 qpair failed and we were unable to recover it. 00:38:28.077 [2024-10-09 11:18:47.918072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.077 [2024-10-09 11:18:47.918079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.077 qpair failed and we were unable to recover it. 00:38:28.077 [2024-10-09 11:18:47.918403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.077 [2024-10-09 11:18:47.918411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.077 qpair failed and we were unable to recover it. 
00:38:28.077 [2024-10-09 11:18:47.918699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.077 [2024-10-09 11:18:47.918707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.077 qpair failed and we were unable to recover it. 00:38:28.077 [2024-10-09 11:18:47.918895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.077 [2024-10-09 11:18:47.918903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.077 qpair failed and we were unable to recover it. 00:38:28.077 [2024-10-09 11:18:47.919066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.077 [2024-10-09 11:18:47.919074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.077 qpair failed and we were unable to recover it. 00:38:28.077 [2024-10-09 11:18:47.919360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.077 [2024-10-09 11:18:47.919369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.077 qpair failed and we were unable to recover it. 00:38:28.077 [2024-10-09 11:18:47.919675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.077 [2024-10-09 11:18:47.919684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.077 qpair failed and we were unable to recover it. 00:38:28.077 [2024-10-09 11:18:47.919835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.077 [2024-10-09 11:18:47.919844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.077 qpair failed and we were unable to recover it. 00:38:28.077 [2024-10-09 11:18:47.920110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.077 [2024-10-09 11:18:47.920117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.077 qpair failed and we were unable to recover it. 00:38:28.077 [2024-10-09 11:18:47.920431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.077 [2024-10-09 11:18:47.920439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.077 qpair failed and we were unable to recover it. 00:38:28.077 [2024-10-09 11:18:47.920652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.077 [2024-10-09 11:18:47.920660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.077 qpair failed and we were unable to recover it. 00:38:28.077 [2024-10-09 11:18:47.920983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.077 [2024-10-09 11:18:47.920992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.077 qpair failed and we were unable to recover it. 
00:38:28.077 [2024-10-09 11:18:47.921309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.077 [2024-10-09 11:18:47.921317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.077 qpair failed and we were unable to recover it. 00:38:28.077 [2024-10-09 11:18:47.921488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.077 [2024-10-09 11:18:47.921496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.077 qpair failed and we were unable to recover it. 00:38:28.077 [2024-10-09 11:18:47.921801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.077 [2024-10-09 11:18:47.921809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.077 qpair failed and we were unable to recover it. 00:38:28.077 [2024-10-09 11:18:47.921973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.077 [2024-10-09 11:18:47.921981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.077 qpair failed and we were unable to recover it. 00:38:28.077 [2024-10-09 11:18:47.922251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.077 [2024-10-09 11:18:47.922261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.077 qpair failed and we were unable to recover it. 00:38:28.077 [2024-10-09 11:18:47.922412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.077 [2024-10-09 11:18:47.922420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.077 qpair failed and we were unable to recover it. 00:38:28.077 [2024-10-09 11:18:47.922649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.077 [2024-10-09 11:18:47.922658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.077 qpair failed and we were unable to recover it. 00:38:28.077 [2024-10-09 11:18:47.922887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.077 [2024-10-09 11:18:47.922896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.077 qpair failed and we were unable to recover it. 00:38:28.077 [2024-10-09 11:18:47.923202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.077 [2024-10-09 11:18:47.923210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.077 qpair failed and we were unable to recover it. 00:38:28.077 [2024-10-09 11:18:47.923421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.077 [2024-10-09 11:18:47.923429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.078 qpair failed and we were unable to recover it. 
00:38:28.078 [2024-10-09 11:18:47.923698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.078 [2024-10-09 11:18:47.923708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.078 qpair failed and we were unable to recover it. 00:38:28.078 [2024-10-09 11:18:47.923866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.078 [2024-10-09 11:18:47.923874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.078 qpair failed and we were unable to recover it. 00:38:28.078 [2024-10-09 11:18:47.924131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.078 [2024-10-09 11:18:47.924139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.078 qpair failed and we were unable to recover it. 00:38:28.078 [2024-10-09 11:18:47.924445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.078 [2024-10-09 11:18:47.924453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.078 qpair failed and we were unable to recover it. 00:38:28.078 [2024-10-09 11:18:47.924834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.078 [2024-10-09 11:18:47.924843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.078 qpair failed and we were unable to recover it. 00:38:28.078 [2024-10-09 11:18:47.925159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.078 [2024-10-09 11:18:47.925167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.078 qpair failed and we were unable to recover it. 00:38:28.078 [2024-10-09 11:18:47.925332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.078 [2024-10-09 11:18:47.925341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.078 qpair failed and we were unable to recover it. 00:38:28.078 [2024-10-09 11:18:47.925832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.078 [2024-10-09 11:18:47.925929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f0c000b90 with addr=10.0.0.2, port=4420 00:38:28.078 qpair failed and we were unable to recover it. 00:38:28.078 [2024-10-09 11:18:47.926343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.078 [2024-10-09 11:18:47.926380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f0c000b90 with addr=10.0.0.2, port=4420 00:38:28.078 qpair failed and we were unable to recover it. 00:38:28.078 [2024-10-09 11:18:47.926761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.078 [2024-10-09 11:18:47.926854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f0c000b90 with addr=10.0.0.2, port=4420 00:38:28.078 qpair failed and we were unable to recover it. 
00:38:28.078 [2024-10-09 11:18:47.927140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.078 [2024-10-09 11:18:47.927179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f0c000b90 with addr=10.0.0.2, port=4420 00:38:28.078 qpair failed and we were unable to recover it. 00:38:28.078 [2024-10-09 11:18:47.927681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.078 [2024-10-09 11:18:47.927711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.078 qpair failed and we were unable to recover it. 00:38:28.078 [2024-10-09 11:18:47.927933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.078 [2024-10-09 11:18:47.927942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.078 qpair failed and we were unable to recover it. 00:38:28.078 [2024-10-09 11:18:47.928261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.078 [2024-10-09 11:18:47.928269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.078 qpair failed and we were unable to recover it. 00:38:28.078 [2024-10-09 11:18:47.928596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.078 [2024-10-09 11:18:47.928604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.078 qpair failed and we were unable to recover it. 00:38:28.078 [2024-10-09 11:18:47.928876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.078 [2024-10-09 11:18:47.928884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.078 qpair failed and we were unable to recover it. 00:38:28.078 [2024-10-09 11:18:47.929197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.078 [2024-10-09 11:18:47.929205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.078 qpair failed and we were unable to recover it. 00:38:28.078 [2024-10-09 11:18:47.929369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.078 [2024-10-09 11:18:47.929380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.078 qpair failed and we were unable to recover it. 00:38:28.078 [2024-10-09 11:18:47.929708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.078 [2024-10-09 11:18:47.929717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.078 qpair failed and we were unable to recover it. 00:38:28.078 [2024-10-09 11:18:47.929879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.078 [2024-10-09 11:18:47.929888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.078 qpair failed and we were unable to recover it. 
00:38:28.078 [2024-10-09 11:18:47.930241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.078 [2024-10-09 11:18:47.930249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.078 qpair failed and we were unable to recover it. 00:38:28.078 [2024-10-09 11:18:47.930575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.078 [2024-10-09 11:18:47.930583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.078 qpair failed and we were unable to recover it. 00:38:28.078 [2024-10-09 11:18:47.930915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.078 [2024-10-09 11:18:47.930923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.078 qpair failed and we were unable to recover it. 00:38:28.078 [2024-10-09 11:18:47.931106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.078 [2024-10-09 11:18:47.931114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.078 qpair failed and we were unable to recover it. 00:38:28.078 [2024-10-09 11:18:47.931301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.078 [2024-10-09 11:18:47.931309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.078 qpair failed and we were unable to recover it. 00:38:28.078 [2024-10-09 11:18:47.931628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.078 [2024-10-09 11:18:47.931637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.078 qpair failed and we were unable to recover it. 00:38:28.078 [2024-10-09 11:18:47.931954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.078 [2024-10-09 11:18:47.931963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.078 qpair failed and we were unable to recover it. 00:38:28.078 [2024-10-09 11:18:47.932293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.078 [2024-10-09 11:18:47.932301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.078 qpair failed and we were unable to recover it. 00:38:28.078 [2024-10-09 11:18:47.932484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.078 [2024-10-09 11:18:47.932493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.078 qpair failed and we were unable to recover it. 00:38:28.078 [2024-10-09 11:18:47.932795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.078 [2024-10-09 11:18:47.932803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.078 qpair failed and we were unable to recover it. 
00:38:28.078 [2024-10-09 11:18:47.932960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.078 [2024-10-09 11:18:47.932967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.078 qpair failed and we were unable to recover it. 00:38:28.078 [2024-10-09 11:18:47.933285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.078 [2024-10-09 11:18:47.933294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.078 qpair failed and we were unable to recover it. 00:38:28.078 [2024-10-09 11:18:47.933474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.078 [2024-10-09 11:18:47.933483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.078 qpair failed and we were unable to recover it. 00:38:28.078 [2024-10-09 11:18:47.933670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.078 [2024-10-09 11:18:47.933678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.078 qpair failed and we were unable to recover it. 00:38:28.078 [2024-10-09 11:18:47.933992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.078 [2024-10-09 11:18:47.934001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.078 qpair failed and we were unable to recover it. 00:38:28.078 [2024-10-09 11:18:47.934163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.078 [2024-10-09 11:18:47.934172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.078 qpair failed and we were unable to recover it. 00:38:28.078 [2024-10-09 11:18:47.934474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.078 [2024-10-09 11:18:47.934483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.078 qpair failed and we were unable to recover it. 00:38:28.078 [2024-10-09 11:18:47.934796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.078 [2024-10-09 11:18:47.934805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.078 qpair failed and we were unable to recover it. 00:38:28.078 [2024-10-09 11:18:47.935093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.078 [2024-10-09 11:18:47.935102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.078 qpair failed and we were unable to recover it. 00:38:28.078 [2024-10-09 11:18:47.935380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.079 [2024-10-09 11:18:47.935388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.079 qpair failed and we were unable to recover it. 
00:38:28.079 [2024-10-09 11:18:47.935717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.079 [2024-10-09 11:18:47.935725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.079 qpair failed and we were unable to recover it. 00:38:28.079 [2024-10-09 11:18:47.935889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.079 [2024-10-09 11:18:47.935897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.079 qpair failed and we were unable to recover it. 00:38:28.079 [2024-10-09 11:18:47.936174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.079 [2024-10-09 11:18:47.936183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.079 qpair failed and we were unable to recover it. 00:38:28.079 [2024-10-09 11:18:47.936492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.079 [2024-10-09 11:18:47.936500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.079 qpair failed and we were unable to recover it. 00:38:28.079 [2024-10-09 11:18:47.936866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.079 [2024-10-09 11:18:47.936874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.079 qpair failed and we were unable to recover it. 00:38:28.079 [2024-10-09 11:18:47.937151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.079 [2024-10-09 11:18:47.937158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.079 qpair failed and we were unable to recover it. 00:38:28.079 [2024-10-09 11:18:47.937488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.079 [2024-10-09 11:18:47.937497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.079 qpair failed and we were unable to recover it. 00:38:28.079 [2024-10-09 11:18:47.937808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.079 [2024-10-09 11:18:47.937817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.079 qpair failed and we were unable to recover it. 00:38:28.079 [2024-10-09 11:18:47.938129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.079 [2024-10-09 11:18:47.938138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.079 qpair failed and we were unable to recover it. 00:38:28.079 [2024-10-09 11:18:47.938446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.079 [2024-10-09 11:18:47.938454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.079 qpair failed and we were unable to recover it. 
00:38:28.079 [2024-10-09 11:18:47.938661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.079 [2024-10-09 11:18:47.938669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.079 qpair failed and we were unable to recover it. 00:38:28.079 [2024-10-09 11:18:47.938843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.079 [2024-10-09 11:18:47.938852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.079 qpair failed and we were unable to recover it. 00:38:28.079 [2024-10-09 11:18:47.939131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.079 [2024-10-09 11:18:47.939141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.079 qpair failed and we were unable to recover it. 00:38:28.079 [2024-10-09 11:18:47.939446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.079 [2024-10-09 11:18:47.939454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.079 qpair failed and we were unable to recover it. 00:38:28.079 [2024-10-09 11:18:47.939632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.079 [2024-10-09 11:18:47.939641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.079 qpair failed and we were unable to recover it. 00:38:28.079 [2024-10-09 11:18:47.939921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.079 [2024-10-09 11:18:47.939929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.079 qpair failed and we were unable to recover it. 00:38:28.079 [2024-10-09 11:18:47.940244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.079 [2024-10-09 11:18:47.940252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.079 qpair failed and we were unable to recover it. 00:38:28.079 [2024-10-09 11:18:47.940577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.079 [2024-10-09 11:18:47.940587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.079 qpair failed and we were unable to recover it. 00:38:28.079 [2024-10-09 11:18:47.940907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.079 [2024-10-09 11:18:47.940915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.079 qpair failed and we were unable to recover it. 00:38:28.079 [2024-10-09 11:18:47.941112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.079 [2024-10-09 11:18:47.941120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.079 qpair failed and we were unable to recover it. 
00:38:28.079 [2024-10-09 11:18:47.941388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.079 [2024-10-09 11:18:47.941397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.079 qpair failed and we were unable to recover it. 00:38:28.079 [2024-10-09 11:18:47.941710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.079 [2024-10-09 11:18:47.941719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.079 qpair failed and we were unable to recover it. 00:38:28.079 [2024-10-09 11:18:47.942017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.079 [2024-10-09 11:18:47.942025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.079 qpair failed and we were unable to recover it. 00:38:28.079 [2024-10-09 11:18:47.942190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.079 [2024-10-09 11:18:47.942198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.079 qpair failed and we were unable to recover it. 00:38:28.079 [2024-10-09 11:18:47.942498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.079 [2024-10-09 11:18:47.942506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.079 qpair failed and we were unable to recover it. 00:38:28.079 [2024-10-09 11:18:47.942816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.079 [2024-10-09 11:18:47.942824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.079 qpair failed and we were unable to recover it. 00:38:28.079 [2024-10-09 11:18:47.943115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.079 [2024-10-09 11:18:47.943123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.079 qpair failed and we were unable to recover it. 00:38:28.079 [2024-10-09 11:18:47.943476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.079 [2024-10-09 11:18:47.943484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.079 qpair failed and we were unable to recover it. 00:38:28.079 [2024-10-09 11:18:47.943778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.079 [2024-10-09 11:18:47.943787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.079 qpair failed and we were unable to recover it. 00:38:28.079 [2024-10-09 11:18:47.944099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.079 [2024-10-09 11:18:47.944107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.079 qpair failed and we were unable to recover it. 
00:38:28.079 [2024-10-09 11:18:47.944413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.079 [2024-10-09 11:18:47.944421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.079 qpair failed and we were unable to recover it. 00:38:28.079 [2024-10-09 11:18:47.944722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.079 [2024-10-09 11:18:47.944730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.079 qpair failed and we were unable to recover it. 00:38:28.079 [2024-10-09 11:18:47.945026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.079 [2024-10-09 11:18:47.945034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.079 qpair failed and we were unable to recover it. 00:38:28.079 [2024-10-09 11:18:47.945198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.079 [2024-10-09 11:18:47.945206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.079 qpair failed and we were unable to recover it. 00:38:28.079 [2024-10-09 11:18:47.945423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.079 [2024-10-09 11:18:47.945431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.079 qpair failed and we were unable to recover it. 00:38:28.079 [2024-10-09 11:18:47.945472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.079 [2024-10-09 11:18:47.945479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.079 qpair failed and we were unable to recover it. 00:38:28.079 [2024-10-09 11:18:47.945629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.079 [2024-10-09 11:18:47.945637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.079 qpair failed and we were unable to recover it. 00:38:28.079 [2024-10-09 11:18:47.946002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.080 [2024-10-09 11:18:47.946011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.080 qpair failed and we were unable to recover it. 00:38:28.080 [2024-10-09 11:18:47.946316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.080 [2024-10-09 11:18:47.946324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.080 qpair failed and we were unable to recover it. 00:38:28.080 [2024-10-09 11:18:47.946630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.080 [2024-10-09 11:18:47.946638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.080 qpair failed and we were unable to recover it. 
00:38:28.080 [2024-10-09 11:18:47.946906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.080 [2024-10-09 11:18:47.946914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.080 qpair failed and we were unable to recover it. 00:38:28.080 [2024-10-09 11:18:47.947186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.080 [2024-10-09 11:18:47.947194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.080 qpair failed and we were unable to recover it. 00:38:28.080 [2024-10-09 11:18:47.947509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.080 [2024-10-09 11:18:47.947517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.080 qpair failed and we were unable to recover it. 00:38:28.080 [2024-10-09 11:18:47.947820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.080 [2024-10-09 11:18:47.947828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.080 qpair failed and we were unable to recover it. 00:38:28.080 [2024-10-09 11:18:47.948034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.080 [2024-10-09 11:18:47.948042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.080 qpair failed and we were unable to recover it. 00:38:28.080 [2024-10-09 11:18:47.948299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.080 [2024-10-09 11:18:47.948308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.080 qpair failed and we were unable to recover it. 00:38:28.080 [2024-10-09 11:18:47.948484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.080 [2024-10-09 11:18:47.948493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.080 qpair failed and we were unable to recover it. 00:38:28.080 [2024-10-09 11:18:47.948668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.080 [2024-10-09 11:18:47.948677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.080 qpair failed and we were unable to recover it. 00:38:28.080 [2024-10-09 11:18:47.948984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.080 [2024-10-09 11:18:47.948992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.080 qpair failed and we were unable to recover it. 00:38:28.080 [2024-10-09 11:18:47.949159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.080 [2024-10-09 11:18:47.949166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.080 qpair failed and we were unable to recover it. 
00:38:28.080 [2024-10-09 11:18:47.949480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.080 [2024-10-09 11:18:47.949488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.080 qpair failed and we were unable to recover it. 00:38:28.080 [2024-10-09 11:18:47.949648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.080 [2024-10-09 11:18:47.949655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.080 qpair failed and we were unable to recover it. 00:38:28.080 [2024-10-09 11:18:47.949955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.080 [2024-10-09 11:18:47.949963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.080 qpair failed and we were unable to recover it. 00:38:28.080 [2024-10-09 11:18:47.950266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.080 [2024-10-09 11:18:47.950273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.080 qpair failed and we were unable to recover it. 00:38:28.080 [2024-10-09 11:18:47.950604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.080 [2024-10-09 11:18:47.950612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.080 qpair failed and we were unable to recover it. 00:38:28.080 [2024-10-09 11:18:47.950785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.080 [2024-10-09 11:18:47.950792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.080 qpair failed and we were unable to recover it. 00:38:28.080 [2024-10-09 11:18:47.950958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.080 [2024-10-09 11:18:47.950966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.080 qpair failed and we were unable to recover it. 00:38:28.080 [2024-10-09 11:18:47.951263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.080 [2024-10-09 11:18:47.951274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.080 qpair failed and we were unable to recover it. 00:38:28.080 [2024-10-09 11:18:47.951591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.080 [2024-10-09 11:18:47.951599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.080 qpair failed and we were unable to recover it. 00:38:28.080 [2024-10-09 11:18:47.951931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.080 [2024-10-09 11:18:47.951940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.080 qpair failed and we were unable to recover it. 
00:38:28.080 [2024-10-09 11:18:47.952249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.080 [2024-10-09 11:18:47.952257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.080 qpair failed and we were unable to recover it. 00:38:28.080 [2024-10-09 11:18:47.952565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.080 [2024-10-09 11:18:47.952573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.080 qpair failed and we were unable to recover it. 00:38:28.080 [2024-10-09 11:18:47.952908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.080 [2024-10-09 11:18:47.952916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.080 qpair failed and we were unable to recover it. 00:38:28.080 [2024-10-09 11:18:47.953092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.080 [2024-10-09 11:18:47.953099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.080 qpair failed and we were unable to recover it. 00:38:28.080 [2024-10-09 11:18:47.953370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.080 [2024-10-09 11:18:47.953379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.080 qpair failed and we were unable to recover it. 00:38:28.080 [2024-10-09 11:18:47.953536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.080 [2024-10-09 11:18:47.953545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.080 qpair failed and we were unable to recover it. 00:38:28.080 [2024-10-09 11:18:47.953790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.080 [2024-10-09 11:18:47.953798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.080 qpair failed and we were unable to recover it. 00:38:28.080 [2024-10-09 11:18:47.954135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.080 [2024-10-09 11:18:47.954143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.080 qpair failed and we were unable to recover it. 00:38:28.080 [2024-10-09 11:18:47.954316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.080 [2024-10-09 11:18:47.954324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.080 qpair failed and we were unable to recover it. 00:38:28.080 [2024-10-09 11:18:47.954639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.080 [2024-10-09 11:18:47.954648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.080 qpair failed and we were unable to recover it. 
00:38:28.080 [2024-10-09 11:18:47.954889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.080 [2024-10-09 11:18:47.954897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.080 qpair failed and we were unable to recover it. 00:38:28.080 [2024-10-09 11:18:47.955230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.080 [2024-10-09 11:18:47.955239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.080 qpair failed and we were unable to recover it. 00:38:28.080 [2024-10-09 11:18:47.955546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.080 [2024-10-09 11:18:47.955554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.080 qpair failed and we were unable to recover it. 00:38:28.080 [2024-10-09 11:18:47.955881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.080 [2024-10-09 11:18:47.955889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.080 qpair failed and we were unable to recover it. 00:38:28.080 [2024-10-09 11:18:47.956218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.080 [2024-10-09 11:18:47.956227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.080 qpair failed and we were unable to recover it. 00:38:28.080 [2024-10-09 11:18:47.956552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.080 [2024-10-09 11:18:47.956560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.080 qpair failed and we were unable to recover it. 00:38:28.081 [2024-10-09 11:18:47.956910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.081 [2024-10-09 11:18:47.956919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.081 qpair failed and we were unable to recover it. 00:38:28.081 [2024-10-09 11:18:47.957230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.081 [2024-10-09 11:18:47.957238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.081 qpair failed and we were unable to recover it. 00:38:28.081 [2024-10-09 11:18:47.957576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.081 [2024-10-09 11:18:47.957585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.081 qpair failed and we were unable to recover it. 00:38:28.081 [2024-10-09 11:18:47.957880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.081 [2024-10-09 11:18:47.957888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.081 qpair failed and we were unable to recover it. 
00:38:28.081 [2024-10-09 11:18:47.958235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.081 [2024-10-09 11:18:47.958243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.081 qpair failed and we were unable to recover it. 00:38:28.081 [2024-10-09 11:18:47.958559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.081 [2024-10-09 11:18:47.958566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.081 qpair failed and we were unable to recover it. 00:38:28.081 [2024-10-09 11:18:47.958890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.081 [2024-10-09 11:18:47.958899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.081 qpair failed and we were unable to recover it. 00:38:28.081 [2024-10-09 11:18:47.959246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.081 [2024-10-09 11:18:47.959255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.081 qpair failed and we were unable to recover it. 00:38:28.081 [2024-10-09 11:18:47.959568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.081 [2024-10-09 11:18:47.959576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.081 qpair failed and we were unable to recover it. 00:38:28.081 [2024-10-09 11:18:47.959748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.081 [2024-10-09 11:18:47.959756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.081 qpair failed and we were unable to recover it. 00:38:28.081 [2024-10-09 11:18:47.959945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.081 [2024-10-09 11:18:47.959952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.081 qpair failed and we were unable to recover it. 00:38:28.081 [2024-10-09 11:18:47.960261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.081 [2024-10-09 11:18:47.960270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.081 qpair failed and we were unable to recover it. 00:38:28.081 [2024-10-09 11:18:47.960615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.081 [2024-10-09 11:18:47.960624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.081 qpair failed and we were unable to recover it. 00:38:28.081 [2024-10-09 11:18:47.960922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.081 [2024-10-09 11:18:47.960931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.081 qpair failed and we were unable to recover it. 
00:38:28.081 [2024-10-09 11:18:47.961237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.081 [2024-10-09 11:18:47.961245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.081 qpair failed and we were unable to recover it. 00:38:28.081 [2024-10-09 11:18:47.961446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.081 [2024-10-09 11:18:47.961454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.081 qpair failed and we were unable to recover it. 00:38:28.081 [2024-10-09 11:18:47.961731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.081 [2024-10-09 11:18:47.961740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.081 qpair failed and we were unable to recover it. 00:38:28.081 [2024-10-09 11:18:47.961895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.081 [2024-10-09 11:18:47.961903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.081 qpair failed and we were unable to recover it. 00:38:28.081 [2024-10-09 11:18:47.962149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.081 [2024-10-09 11:18:47.962158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.081 qpair failed and we were unable to recover it. 00:38:28.081 [2024-10-09 11:18:47.962326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.081 [2024-10-09 11:18:47.962335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.081 qpair failed and we were unable to recover it. 00:38:28.081 [2024-10-09 11:18:47.962501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.081 [2024-10-09 11:18:47.962510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.081 qpair failed and we were unable to recover it. 00:38:28.081 [2024-10-09 11:18:47.962801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.081 [2024-10-09 11:18:47.962811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.081 qpair failed and we were unable to recover it. 00:38:28.081 [2024-10-09 11:18:47.963132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.081 [2024-10-09 11:18:47.963140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.081 qpair failed and we were unable to recover it. 00:38:28.081 [2024-10-09 11:18:47.963436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.081 [2024-10-09 11:18:47.963444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.081 qpair failed and we were unable to recover it. 
00:38:28.081 [2024-10-09 11:18:47.963756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.081 [2024-10-09 11:18:47.963764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.081 qpair failed and we were unable to recover it. 00:38:28.081 [2024-10-09 11:18:47.964146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.081 [2024-10-09 11:18:47.964154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.081 qpair failed and we were unable to recover it. 00:38:28.081 [2024-10-09 11:18:47.964314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.081 [2024-10-09 11:18:47.964322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.081 qpair failed and we were unable to recover it. 00:38:28.081 [2024-10-09 11:18:47.964615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.081 [2024-10-09 11:18:47.964624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.081 qpair failed and we were unable to recover it. 00:38:28.081 [2024-10-09 11:18:47.964939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.081 [2024-10-09 11:18:47.964947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.081 qpair failed and we were unable to recover it. 00:38:28.081 [2024-10-09 11:18:47.965256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.081 [2024-10-09 11:18:47.965264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.081 qpair failed and we were unable to recover it. 00:38:28.081 [2024-10-09 11:18:47.965575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.081 [2024-10-09 11:18:47.965584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.081 qpair failed and we were unable to recover it. 00:38:28.081 [2024-10-09 11:18:47.965898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.081 [2024-10-09 11:18:47.965907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.081 qpair failed and we were unable to recover it. 00:38:28.081 [2024-10-09 11:18:47.966118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.081 [2024-10-09 11:18:47.966126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.081 qpair failed and we were unable to recover it. 00:38:28.081 [2024-10-09 11:18:47.966311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.081 [2024-10-09 11:18:47.966319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.081 qpair failed and we were unable to recover it. 
00:38:28.081 [2024-10-09 11:18:47.966482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.081 [2024-10-09 11:18:47.966490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.081 qpair failed and we were unable to recover it. 00:38:28.081 [2024-10-09 11:18:47.966819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.082 [2024-10-09 11:18:47.966827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.082 qpair failed and we were unable to recover it. 00:38:28.082 [2024-10-09 11:18:47.967131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.082 [2024-10-09 11:18:47.967139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.082 qpair failed and we were unable to recover it. 00:38:28.082 [2024-10-09 11:18:47.967459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.082 [2024-10-09 11:18:47.967470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.082 qpair failed and we were unable to recover it. 00:38:28.082 [2024-10-09 11:18:47.967635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.082 [2024-10-09 11:18:47.967643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.082 qpair failed and we were unable to recover it. 00:38:28.082 [2024-10-09 11:18:47.967811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.082 [2024-10-09 11:18:47.967819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.082 qpair failed and we were unable to recover it. 00:38:28.082 [2024-10-09 11:18:47.968119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.082 [2024-10-09 11:18:47.968126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.082 qpair failed and we were unable to recover it. 00:38:28.082 [2024-10-09 11:18:47.968441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.082 [2024-10-09 11:18:47.968458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.082 qpair failed and we were unable to recover it. 00:38:28.082 [2024-10-09 11:18:47.968787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.082 [2024-10-09 11:18:47.968795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.082 qpair failed and we were unable to recover it. 00:38:28.082 [2024-10-09 11:18:47.969090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.082 [2024-10-09 11:18:47.969098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.082 qpair failed and we were unable to recover it. 
00:38:28.082 [2024-10-09 11:18:47.969411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.082 [2024-10-09 11:18:47.969419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.082 qpair failed and we were unable to recover it. 00:38:28.082 [2024-10-09 11:18:47.969717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.082 [2024-10-09 11:18:47.969725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.082 qpair failed and we were unable to recover it. 00:38:28.082 [2024-10-09 11:18:47.969879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.082 [2024-10-09 11:18:47.969888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.082 qpair failed and we were unable to recover it. 00:38:28.082 [2024-10-09 11:18:47.970254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.082 [2024-10-09 11:18:47.970263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.082 qpair failed and we were unable to recover it. 00:38:28.082 [2024-10-09 11:18:47.970416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.082 [2024-10-09 11:18:47.970425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.082 qpair failed and we were unable to recover it. 00:38:28.082 [2024-10-09 11:18:47.970621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.082 [2024-10-09 11:18:47.970629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.082 qpair failed and we were unable to recover it. 00:38:28.082 [2024-10-09 11:18:47.970983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.082 [2024-10-09 11:18:47.970992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.082 qpair failed and we were unable to recover it. 00:38:28.082 [2024-10-09 11:18:47.971160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.082 [2024-10-09 11:18:47.971168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.082 qpair failed and we were unable to recover it. 00:38:28.082 [2024-10-09 11:18:47.971460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.082 [2024-10-09 11:18:47.971472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.082 qpair failed and we were unable to recover it. 00:38:28.082 [2024-10-09 11:18:47.971685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.082 [2024-10-09 11:18:47.971693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.082 qpair failed and we were unable to recover it. 
00:38:28.082 [2024-10-09 11:18:47.972003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.082 [2024-10-09 11:18:47.972011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.082 qpair failed and we were unable to recover it. 00:38:28.082 [2024-10-09 11:18:47.972192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.082 [2024-10-09 11:18:47.972201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.082 qpair failed and we were unable to recover it. 00:38:28.082 [2024-10-09 11:18:47.972367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.082 [2024-10-09 11:18:47.972375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.082 qpair failed and we were unable to recover it. 00:38:28.082 [2024-10-09 11:18:47.972701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.082 [2024-10-09 11:18:47.972710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.082 qpair failed and we were unable to recover it. 00:38:28.082 [2024-10-09 11:18:47.972896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.082 [2024-10-09 11:18:47.972905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.082 qpair failed and we were unable to recover it. 00:38:28.082 [2024-10-09 11:18:47.973217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.082 [2024-10-09 11:18:47.973225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.082 qpair failed and we were unable to recover it. 00:38:28.082 [2024-10-09 11:18:47.973562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.082 [2024-10-09 11:18:47.973570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.082 qpair failed and we were unable to recover it. 00:38:28.082 [2024-10-09 11:18:47.973879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.082 [2024-10-09 11:18:47.973888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.082 qpair failed and we were unable to recover it. 00:38:28.082 [2024-10-09 11:18:47.974057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.082 [2024-10-09 11:18:47.974066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.082 qpair failed and we were unable to recover it. 00:38:28.082 [2024-10-09 11:18:47.974365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.082 [2024-10-09 11:18:47.974373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.082 qpair failed and we were unable to recover it. 
00:38:28.082 [2024-10-09 11:18:47.974548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.082 [2024-10-09 11:18:47.974556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.082 qpair failed and we were unable to recover it. 00:38:28.082 [2024-10-09 11:18:47.974766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.082 [2024-10-09 11:18:47.974774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.082 qpair failed and we were unable to recover it. 00:38:28.082 [2024-10-09 11:18:47.974812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.082 [2024-10-09 11:18:47.974819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.082 qpair failed and we were unable to recover it. 00:38:28.082 [2024-10-09 11:18:47.975020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.082 [2024-10-09 11:18:47.975029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.082 qpair failed and we were unable to recover it. 00:38:28.082 [2024-10-09 11:18:47.975181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.082 [2024-10-09 11:18:47.975190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.082 qpair failed and we were unable to recover it. 00:38:28.082 [2024-10-09 11:18:47.975478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.082 [2024-10-09 11:18:47.975486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.082 qpair failed and we were unable to recover it. 00:38:28.082 [2024-10-09 11:18:47.975839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.082 [2024-10-09 11:18:47.975847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.082 qpair failed and we were unable to recover it. 00:38:28.082 [2024-10-09 11:18:47.976163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.082 [2024-10-09 11:18:47.976171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.082 qpair failed and we were unable to recover it. 00:38:28.082 [2024-10-09 11:18:47.976542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.082 [2024-10-09 11:18:47.976550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.082 qpair failed and we were unable to recover it. 00:38:28.082 [2024-10-09 11:18:47.976820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.082 [2024-10-09 11:18:47.976828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.082 qpair failed and we were unable to recover it. 
00:38:28.086 [2024-10-09 11:18:48.013833] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152e260 is same with the state(6) to be set
00:38:28.086 [2024-10-09 11:18:48.014294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:28.086 [2024-10-09 11:18:48.014377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f0c000b90 with addr=10.0.0.2, port=4420
00:38:28.086 qpair failed and we were unable to recover it.
00:38:28.086 [2024-10-09 11:18:48.014796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:28.086 [2024-10-09 11:18:48.014887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f0c000b90 with addr=10.0.0.2, port=4420
00:38:28.086 qpair failed and we were unable to recover it.
00:38:28.086 [2024-10-09 11:18:48.015344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:28.086 [2024-10-09 11:18:48.015381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f0c000b90 with addr=10.0.0.2, port=4420
00:38:28.086 qpair failed and we were unable to recover it.
00:38:28.086 [2024-10-09 11:18:48.015558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:28.086 [2024-10-09 11:18:48.015566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420
00:38:28.086 qpair failed and we were unable to recover it.
00:38:28.086 [2024-10-09 11:18:48.015883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:28.086 [2024-10-09 11:18:48.015890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420
00:38:28.086 qpair failed and we were unable to recover it.
00:38:28.086 [2024-10-09 11:18:48.016075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:28.086 [2024-10-09 11:18:48.016082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420
00:38:28.086 qpair failed and we were unable to recover it.
00:38:28.086 [2024-10-09 11:18:48.016248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:28.086 [2024-10-09 11:18:48.016255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420
00:38:28.086 qpair failed and we were unable to recover it.
00:38:28.086 [2024-10-09 11:18:48.016587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:28.087 [2024-10-09 11:18:48.016596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420
00:38:28.087 qpair failed and we were unable to recover it.
00:38:28.087 [2024-10-09 11:18:48.016782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:28.087 [2024-10-09 11:18:48.016790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420
00:38:28.087 qpair failed and we were unable to recover it.
00:38:28.088 [2024-10-09 11:18:48.030342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.088 [2024-10-09 11:18:48.030349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.088 qpair failed and we were unable to recover it. 00:38:28.088 [2024-10-09 11:18:48.030691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.088 [2024-10-09 11:18:48.030702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.088 qpair failed and we were unable to recover it. 00:38:28.088 [2024-10-09 11:18:48.031018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.088 [2024-10-09 11:18:48.031027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.088 qpair failed and we were unable to recover it. 00:38:28.088 [2024-10-09 11:18:48.031349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.088 [2024-10-09 11:18:48.031357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.088 qpair failed and we were unable to recover it. 00:38:28.088 [2024-10-09 11:18:48.031688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.088 [2024-10-09 11:18:48.031697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.088 qpair failed and we were unable to recover it. 00:38:28.088 [2024-10-09 11:18:48.031879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.088 [2024-10-09 11:18:48.031887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.088 qpair failed and we were unable to recover it. 00:38:28.088 [2024-10-09 11:18:48.032179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.088 [2024-10-09 11:18:48.032187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.088 qpair failed and we were unable to recover it. 00:38:28.088 [2024-10-09 11:18:48.032414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.088 [2024-10-09 11:18:48.032423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.088 qpair failed and we were unable to recover it. 00:38:28.088 [2024-10-09 11:18:48.032556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.088 [2024-10-09 11:18:48.032563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.088 qpair failed and we were unable to recover it. 00:38:28.088 [2024-10-09 11:18:48.032908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.088 [2024-10-09 11:18:48.032915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.088 qpair failed and we were unable to recover it. 
00:38:28.088 [2024-10-09 11:18:48.033287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.088 [2024-10-09 11:18:48.033295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.088 qpair failed and we were unable to recover it. 00:38:28.088 [2024-10-09 11:18:48.033611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.088 [2024-10-09 11:18:48.033619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.088 qpair failed and we were unable to recover it. 00:38:28.088 [2024-10-09 11:18:48.033859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.088 [2024-10-09 11:18:48.033867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.088 qpair failed and we were unable to recover it. 00:38:28.088 [2024-10-09 11:18:48.034053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.088 [2024-10-09 11:18:48.034061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.088 qpair failed and we were unable to recover it. 00:38:28.088 [2024-10-09 11:18:48.034369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.088 [2024-10-09 11:18:48.034376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.088 qpair failed and we were unable to recover it. 00:38:28.088 [2024-10-09 11:18:48.034616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.088 [2024-10-09 11:18:48.034625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.088 qpair failed and we were unable to recover it. 00:38:28.088 [2024-10-09 11:18:48.034940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.088 [2024-10-09 11:18:48.034950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.088 qpair failed and we were unable to recover it. 00:38:28.088 [2024-10-09 11:18:48.035259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.088 [2024-10-09 11:18:48.035267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.088 qpair failed and we were unable to recover it. 00:38:28.088 [2024-10-09 11:18:48.035456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.088 [2024-10-09 11:18:48.035468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.088 qpair failed and we were unable to recover it. 00:38:28.088 [2024-10-09 11:18:48.035801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.088 [2024-10-09 11:18:48.035809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.088 qpair failed and we were unable to recover it. 
00:38:28.088 [2024-10-09 11:18:48.036126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.088 [2024-10-09 11:18:48.036134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.088 qpair failed and we were unable to recover it. 00:38:28.088 [2024-10-09 11:18:48.036314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.088 [2024-10-09 11:18:48.036322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.088 qpair failed and we were unable to recover it. 00:38:28.088 [2024-10-09 11:18:48.036491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.088 [2024-10-09 11:18:48.036500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.088 qpair failed and we were unable to recover it. 00:38:28.088 [2024-10-09 11:18:48.036678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.088 [2024-10-09 11:18:48.036687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.088 qpair failed and we were unable to recover it. 00:38:28.088 [2024-10-09 11:18:48.036725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.088 [2024-10-09 11:18:48.036733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.088 qpair failed and we were unable to recover it. 00:38:28.088 [2024-10-09 11:18:48.036929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.088 [2024-10-09 11:18:48.036938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.088 qpair failed and we were unable to recover it. 00:38:28.088 [2024-10-09 11:18:48.037097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.088 [2024-10-09 11:18:48.037105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.088 qpair failed and we were unable to recover it. 00:38:28.088 [2024-10-09 11:18:48.037515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.088 [2024-10-09 11:18:48.037523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.088 qpair failed and we were unable to recover it. 00:38:28.088 [2024-10-09 11:18:48.037678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.089 [2024-10-09 11:18:48.037686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.089 qpair failed and we were unable to recover it. 00:38:28.089 [2024-10-09 11:18:48.037996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.089 [2024-10-09 11:18:48.038004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.089 qpair failed and we were unable to recover it. 
00:38:28.089 [2024-10-09 11:18:48.038200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.089 [2024-10-09 11:18:48.038207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.089 qpair failed and we were unable to recover it. 00:38:28.089 [2024-10-09 11:18:48.038360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.089 [2024-10-09 11:18:48.038367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.089 qpair failed and we were unable to recover it. 00:38:28.089 [2024-10-09 11:18:48.038753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.089 [2024-10-09 11:18:48.038761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.089 qpair failed and we were unable to recover it. 00:38:28.368 [2024-10-09 11:18:48.039076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.368 [2024-10-09 11:18:48.039086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.368 qpair failed and we were unable to recover it. 00:38:28.368 [2024-10-09 11:18:48.039405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.368 [2024-10-09 11:18:48.039414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.368 qpair failed and we were unable to recover it. 00:38:28.368 [2024-10-09 11:18:48.039572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.368 [2024-10-09 11:18:48.039580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.368 qpair failed and we were unable to recover it. 00:38:28.368 [2024-10-09 11:18:48.039870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.368 [2024-10-09 11:18:48.039879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.368 qpair failed and we were unable to recover it. 00:38:28.368 [2024-10-09 11:18:48.040285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.368 [2024-10-09 11:18:48.040293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.368 qpair failed and we were unable to recover it. 00:38:28.368 [2024-10-09 11:18:48.040452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.368 [2024-10-09 11:18:48.040460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.368 qpair failed and we were unable to recover it. 00:38:28.368 [2024-10-09 11:18:48.040634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.368 [2024-10-09 11:18:48.040642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.368 qpair failed and we were unable to recover it. 
00:38:28.368 [2024-10-09 11:18:48.040856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.368 [2024-10-09 11:18:48.040864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.368 qpair failed and we were unable to recover it. 00:38:28.368 [2024-10-09 11:18:48.041141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.368 [2024-10-09 11:18:48.041152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.368 qpair failed and we were unable to recover it. 00:38:28.368 [2024-10-09 11:18:48.041210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.368 [2024-10-09 11:18:48.041217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.368 qpair failed and we were unable to recover it. 00:38:28.368 [2024-10-09 11:18:48.041504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.368 [2024-10-09 11:18:48.041513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.368 qpair failed and we were unable to recover it. 00:38:28.368 [2024-10-09 11:18:48.041725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.368 [2024-10-09 11:18:48.041733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.368 qpair failed and we were unable to recover it. 00:38:28.368 [2024-10-09 11:18:48.041956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.368 [2024-10-09 11:18:48.041963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.368 qpair failed and we were unable to recover it. 00:38:28.368 [2024-10-09 11:18:48.042273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.368 [2024-10-09 11:18:48.042280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.368 qpair failed and we were unable to recover it. 00:38:28.368 [2024-10-09 11:18:48.042588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.368 [2024-10-09 11:18:48.042596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.368 qpair failed and we were unable to recover it. 00:38:28.368 [2024-10-09 11:18:48.042899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.368 [2024-10-09 11:18:48.042909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.368 qpair failed and we were unable to recover it. 00:38:28.368 [2024-10-09 11:18:48.043217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.368 [2024-10-09 11:18:48.043226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.368 qpair failed and we were unable to recover it. 
00:38:28.368 [2024-10-09 11:18:48.043412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.368 [2024-10-09 11:18:48.043420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.368 qpair failed and we were unable to recover it. 00:38:28.368 [2024-10-09 11:18:48.043723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.368 [2024-10-09 11:18:48.043732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.368 qpair failed and we were unable to recover it. 00:38:28.368 [2024-10-09 11:18:48.044001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.368 [2024-10-09 11:18:48.044010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.368 qpair failed and we were unable to recover it. 00:38:28.368 [2024-10-09 11:18:48.044198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.368 [2024-10-09 11:18:48.044207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.368 qpair failed and we were unable to recover it. 00:38:28.368 [2024-10-09 11:18:48.044514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.368 [2024-10-09 11:18:48.044522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.368 qpair failed and we were unable to recover it. 00:38:28.368 [2024-10-09 11:18:48.044866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.368 [2024-10-09 11:18:48.044874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.368 qpair failed and we were unable to recover it. 00:38:28.368 [2024-10-09 11:18:48.045204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.368 [2024-10-09 11:18:48.045213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.368 qpair failed and we were unable to recover it. 00:38:28.368 [2024-10-09 11:18:48.045553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.368 [2024-10-09 11:18:48.045561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.368 qpair failed and we were unable to recover it. 00:38:28.368 [2024-10-09 11:18:48.045738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.368 [2024-10-09 11:18:48.045745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.368 qpair failed and we were unable to recover it. 00:38:28.368 [2024-10-09 11:18:48.045924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.368 [2024-10-09 11:18:48.045931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.368 qpair failed and we were unable to recover it. 
00:38:28.368 [2024-10-09 11:18:48.046246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.368 [2024-10-09 11:18:48.046255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.368 qpair failed and we were unable to recover it. 00:38:28.368 [2024-10-09 11:18:48.046408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.368 [2024-10-09 11:18:48.046417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.368 qpair failed and we were unable to recover it. 00:38:28.368 [2024-10-09 11:18:48.046736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.368 [2024-10-09 11:18:48.046745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.368 qpair failed and we were unable to recover it. 00:38:28.368 [2024-10-09 11:18:48.046907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.368 [2024-10-09 11:18:48.046916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.368 qpair failed and we were unable to recover it. 00:38:28.368 [2024-10-09 11:18:48.047138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.368 [2024-10-09 11:18:48.047146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.368 qpair failed and we were unable to recover it. 00:38:28.368 [2024-10-09 11:18:48.047454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.368 [2024-10-09 11:18:48.047463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.368 qpair failed and we were unable to recover it. 00:38:28.368 [2024-10-09 11:18:48.047637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.368 [2024-10-09 11:18:48.047646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.368 qpair failed and we were unable to recover it. 00:38:28.368 [2024-10-09 11:18:48.047933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.368 [2024-10-09 11:18:48.047942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.368 qpair failed and we were unable to recover it. 00:38:28.368 [2024-10-09 11:18:48.048123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.368 [2024-10-09 11:18:48.048132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.368 qpair failed and we were unable to recover it. 00:38:28.368 [2024-10-09 11:18:48.048303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.368 [2024-10-09 11:18:48.048312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.368 qpair failed and we were unable to recover it. 
00:38:28.368 [2024-10-09 11:18:48.048643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.369 [2024-10-09 11:18:48.048651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.369 qpair failed and we were unable to recover it. 00:38:28.369 [2024-10-09 11:18:48.048976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.369 [2024-10-09 11:18:48.048985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.369 qpair failed and we were unable to recover it. 00:38:28.369 [2024-10-09 11:18:48.049162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.369 [2024-10-09 11:18:48.049171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.369 qpair failed and we were unable to recover it. 00:38:28.369 [2024-10-09 11:18:48.049562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.369 [2024-10-09 11:18:48.049569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.369 qpair failed and we were unable to recover it. 00:38:28.369 [2024-10-09 11:18:48.049739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.369 [2024-10-09 11:18:48.049746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.369 qpair failed and we were unable to recover it. 00:38:28.369 [2024-10-09 11:18:48.050027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.369 [2024-10-09 11:18:48.050035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.369 qpair failed and we were unable to recover it. 00:38:28.369 [2024-10-09 11:18:48.050219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.369 [2024-10-09 11:18:48.050228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.369 qpair failed and we were unable to recover it. 00:38:28.369 [2024-10-09 11:18:48.050523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.369 [2024-10-09 11:18:48.050530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.369 qpair failed and we were unable to recover it. 00:38:28.369 [2024-10-09 11:18:48.050842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.369 [2024-10-09 11:18:48.050850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.369 qpair failed and we were unable to recover it. 00:38:28.369 [2024-10-09 11:18:48.051016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.369 [2024-10-09 11:18:48.051025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.369 qpair failed and we were unable to recover it. 
00:38:28.369 [2024-10-09 11:18:48.051186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.369 [2024-10-09 11:18:48.051195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.369 qpair failed and we were unable to recover it. 00:38:28.369 [2024-10-09 11:18:48.051536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.369 [2024-10-09 11:18:48.051546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.369 qpair failed and we were unable to recover it. 00:38:28.369 [2024-10-09 11:18:48.051727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.369 [2024-10-09 11:18:48.051736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.369 qpair failed and we were unable to recover it. 00:38:28.369 [2024-10-09 11:18:48.051915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.369 [2024-10-09 11:18:48.051923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.369 qpair failed and we were unable to recover it. 00:38:28.369 [2024-10-09 11:18:48.052221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.369 [2024-10-09 11:18:48.052229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.369 qpair failed and we were unable to recover it. 00:38:28.369 [2024-10-09 11:18:48.052558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.369 [2024-10-09 11:18:48.052567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.369 qpair failed and we were unable to recover it. 00:38:28.369 [2024-10-09 11:18:48.052740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.369 [2024-10-09 11:18:48.052749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.369 qpair failed and we were unable to recover it. 00:38:28.369 [2024-10-09 11:18:48.053013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.369 [2024-10-09 11:18:48.053021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.369 qpair failed and we were unable to recover it. 00:38:28.369 [2024-10-09 11:18:48.053275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.369 [2024-10-09 11:18:48.053283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.369 qpair failed and we were unable to recover it. 00:38:28.369 [2024-10-09 11:18:48.053467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.369 [2024-10-09 11:18:48.053476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.369 qpair failed and we were unable to recover it. 
00:38:28.369 [2024-10-09 11:18:48.053774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.369 [2024-10-09 11:18:48.053783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.369 qpair failed and we were unable to recover it. 00:38:28.369 [2024-10-09 11:18:48.054088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.369 [2024-10-09 11:18:48.054096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.369 qpair failed and we were unable to recover it. 00:38:28.369 [2024-10-09 11:18:48.054254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.369 [2024-10-09 11:18:48.054263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.369 qpair failed and we were unable to recover it. 00:38:28.369 [2024-10-09 11:18:48.054345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.369 [2024-10-09 11:18:48.054353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.369 qpair failed and we were unable to recover it. 00:38:28.369 [2024-10-09 11:18:48.054688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.369 [2024-10-09 11:18:48.054696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.369 qpair failed and we were unable to recover it. 00:38:28.369 [2024-10-09 11:18:48.054965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.369 [2024-10-09 11:18:48.054973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.369 qpair failed and we were unable to recover it. 00:38:28.369 [2024-10-09 11:18:48.055148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.369 [2024-10-09 11:18:48.055156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.369 qpair failed and we were unable to recover it. 00:38:28.369 [2024-10-09 11:18:48.055538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.369 [2024-10-09 11:18:48.055547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.369 qpair failed and we were unable to recover it. 00:38:28.369 [2024-10-09 11:18:48.055864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.369 [2024-10-09 11:18:48.055872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.369 qpair failed and we were unable to recover it. 00:38:28.369 [2024-10-09 11:18:48.056174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.369 [2024-10-09 11:18:48.056182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.369 qpair failed and we were unable to recover it. 
00:38:28.369 [2024-10-09 11:18:48.056365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.369 [2024-10-09 11:18:48.056372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.369 qpair failed and we were unable to recover it. 00:38:28.369 [2024-10-09 11:18:48.056663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.369 [2024-10-09 11:18:48.056671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.369 qpair failed and we were unable to recover it. 00:38:28.369 [2024-10-09 11:18:48.056988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.369 [2024-10-09 11:18:48.056996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.369 qpair failed and we were unable to recover it. 00:38:28.369 [2024-10-09 11:18:48.057164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.369 [2024-10-09 11:18:48.057172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.369 qpair failed and we were unable to recover it. 00:38:28.369 [2024-10-09 11:18:48.057383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.369 [2024-10-09 11:18:48.057391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.369 qpair failed and we were unable to recover it. 00:38:28.369 [2024-10-09 11:18:48.057677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.369 [2024-10-09 11:18:48.057685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.369 qpair failed and we were unable to recover it. 00:38:28.369 [2024-10-09 11:18:48.057876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.369 [2024-10-09 11:18:48.057884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.369 qpair failed and we were unable to recover it. 00:38:28.369 [2024-10-09 11:18:48.058158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.369 [2024-10-09 11:18:48.058166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.369 qpair failed and we were unable to recover it. 00:38:28.369 [2024-10-09 11:18:48.058480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.369 [2024-10-09 11:18:48.058489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.369 qpair failed and we were unable to recover it. 00:38:28.369 [2024-10-09 11:18:48.058809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.369 [2024-10-09 11:18:48.058818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.369 qpair failed and we were unable to recover it. 
00:38:28.370 [2024-10-09 11:18:48.059132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.370 [2024-10-09 11:18:48.059139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.370 qpair failed and we were unable to recover it. 00:38:28.370 [2024-10-09 11:18:48.059458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.370 [2024-10-09 11:18:48.059474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.370 qpair failed and we were unable to recover it. 00:38:28.370 [2024-10-09 11:18:48.059796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.370 [2024-10-09 11:18:48.059805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.370 qpair failed and we were unable to recover it. 00:38:28.370 [2024-10-09 11:18:48.060141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.370 [2024-10-09 11:18:48.060149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.370 qpair failed and we were unable to recover it. 00:38:28.370 [2024-10-09 11:18:48.060315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.370 [2024-10-09 11:18:48.060325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.370 qpair failed and we were unable to recover it. 00:38:28.370 [2024-10-09 11:18:48.060631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.370 [2024-10-09 11:18:48.060639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.370 qpair failed and we were unable to recover it. 00:38:28.370 [2024-10-09 11:18:48.060944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.370 [2024-10-09 11:18:48.060952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.370 qpair failed and we were unable to recover it. 00:38:28.370 [2024-10-09 11:18:48.061120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.370 [2024-10-09 11:18:48.061128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.370 qpair failed and we were unable to recover it. 00:38:28.370 [2024-10-09 11:18:48.061254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.370 [2024-10-09 11:18:48.061261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.370 qpair failed and we were unable to recover it. 00:38:28.370 [2024-10-09 11:18:48.061572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.370 [2024-10-09 11:18:48.061580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.370 qpair failed and we were unable to recover it. 
00:38:28.370 [2024-10-09 11:18:48.061778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.370 [2024-10-09 11:18:48.061786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.370 qpair failed and we were unable to recover it. 00:38:28.370 [2024-10-09 11:18:48.062000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.370 [2024-10-09 11:18:48.062008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.370 qpair failed and we were unable to recover it. 00:38:28.370 [2024-10-09 11:18:48.062294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.370 [2024-10-09 11:18:48.062302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.370 qpair failed and we were unable to recover it. 00:38:28.370 [2024-10-09 11:18:48.062622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.370 [2024-10-09 11:18:48.062630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.370 qpair failed and we were unable to recover it. 00:38:28.370 [2024-10-09 11:18:48.062956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.370 [2024-10-09 11:18:48.062965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.370 qpair failed and we were unable to recover it. 00:38:28.370 [2024-10-09 11:18:48.063277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.370 [2024-10-09 11:18:48.063285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.370 qpair failed and we were unable to recover it. 00:38:28.370 [2024-10-09 11:18:48.063600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.370 [2024-10-09 11:18:48.063609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.370 qpair failed and we were unable to recover it. 00:38:28.370 [2024-10-09 11:18:48.063944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.370 [2024-10-09 11:18:48.063953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.370 qpair failed and we were unable to recover it. 00:38:28.370 [2024-10-09 11:18:48.064271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.370 [2024-10-09 11:18:48.064279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.370 qpair failed and we were unable to recover it. 00:38:28.370 [2024-10-09 11:18:48.064599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.370 [2024-10-09 11:18:48.064608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.370 qpair failed and we were unable to recover it. 
00:38:28.370 [2024-10-09 11:18:48.064903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:28.370 [2024-10-09 11:18:48.064911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420
00:38:28.370 qpair failed and we were unable to recover it.
[... 208 further identical connect attempts (2024-10-09 11:18:48.065064 through 11:18:48.120181) elided: each logs the same posix_sock_create "connect() failed, errno = 111" record, the same nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420" record, and the same "qpair failed and we were unable to recover it." trailer ...]
00:38:28.375 [2024-10-09 11:18:48.120409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:28.375 [2024-10-09 11:18:48.120417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420
00:38:28.375 qpair failed and we were unable to recover it.
00:38:28.375 [2024-10-09 11:18:48.120729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.375 [2024-10-09 11:18:48.120738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.375 qpair failed and we were unable to recover it. 00:38:28.375 [2024-10-09 11:18:48.121028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.375 [2024-10-09 11:18:48.121037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.375 qpair failed and we were unable to recover it. 00:38:28.375 [2024-10-09 11:18:48.121391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.375 [2024-10-09 11:18:48.121400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.375 qpair failed and we were unable to recover it. 00:38:28.375 [2024-10-09 11:18:48.121561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.376 [2024-10-09 11:18:48.121570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.376 qpair failed and we were unable to recover it. 00:38:28.376 [2024-10-09 11:18:48.121752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.376 [2024-10-09 11:18:48.121761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.376 qpair failed and we were unable to recover it. 00:38:28.376 [2024-10-09 11:18:48.122025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.376 [2024-10-09 11:18:48.122034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.376 qpair failed and we were unable to recover it. 00:38:28.376 [2024-10-09 11:18:48.122340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.376 [2024-10-09 11:18:48.122349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.376 qpair failed and we were unable to recover it. 00:38:28.376 [2024-10-09 11:18:48.122533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.376 [2024-10-09 11:18:48.122541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.376 qpair failed and we were unable to recover it. 00:38:28.376 [2024-10-09 11:18:48.122687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.376 [2024-10-09 11:18:48.122697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.376 qpair failed and we were unable to recover it. 00:38:28.376 [2024-10-09 11:18:48.123001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.376 [2024-10-09 11:18:48.123009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.376 qpair failed and we were unable to recover it. 
00:38:28.376 [2024-10-09 11:18:48.123206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.376 [2024-10-09 11:18:48.123213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.376 qpair failed and we were unable to recover it. 00:38:28.376 [2024-10-09 11:18:48.123562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.376 [2024-10-09 11:18:48.123570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.376 qpair failed and we were unable to recover it. 00:38:28.376 [2024-10-09 11:18:48.123892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.376 [2024-10-09 11:18:48.123901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.376 qpair failed and we were unable to recover it. 00:38:28.376 [2024-10-09 11:18:48.123935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.376 [2024-10-09 11:18:48.123942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.376 qpair failed and we were unable to recover it. 00:38:28.376 [2024-10-09 11:18:48.124242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.376 [2024-10-09 11:18:48.124250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.376 qpair failed and we were unable to recover it. 00:38:28.376 [2024-10-09 11:18:48.124565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.376 [2024-10-09 11:18:48.124573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.376 qpair failed and we were unable to recover it. 00:38:28.376 [2024-10-09 11:18:48.124644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.376 [2024-10-09 11:18:48.124651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.376 qpair failed and we were unable to recover it. 00:38:28.376 [2024-10-09 11:18:48.124918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.376 [2024-10-09 11:18:48.124926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.376 qpair failed and we were unable to recover it. 00:38:28.376 [2024-10-09 11:18:48.125267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.376 [2024-10-09 11:18:48.125274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.376 qpair failed and we were unable to recover it. 00:38:28.376 [2024-10-09 11:18:48.125459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.376 [2024-10-09 11:18:48.125477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.376 qpair failed and we were unable to recover it. 
00:38:28.376 [2024-10-09 11:18:48.125826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.376 [2024-10-09 11:18:48.125835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.376 qpair failed and we were unable to recover it. 00:38:28.376 [2024-10-09 11:18:48.126133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.376 [2024-10-09 11:18:48.126142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.376 qpair failed and we were unable to recover it. 00:38:28.376 [2024-10-09 11:18:48.126324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.376 [2024-10-09 11:18:48.126332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.376 qpair failed and we were unable to recover it. 00:38:28.376 [2024-10-09 11:18:48.126513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.376 [2024-10-09 11:18:48.126522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.376 qpair failed and we were unable to recover it. 00:38:28.376 [2024-10-09 11:18:48.126810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.376 [2024-10-09 11:18:48.126818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.376 qpair failed and we were unable to recover it. 00:38:28.376 [2024-10-09 11:18:48.127122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.376 [2024-10-09 11:18:48.127130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.376 qpair failed and we were unable to recover it. 00:38:28.376 [2024-10-09 11:18:48.127302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.376 [2024-10-09 11:18:48.127310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.376 qpair failed and we were unable to recover it. 00:38:28.376 [2024-10-09 11:18:48.127471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.376 [2024-10-09 11:18:48.127478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.376 qpair failed and we were unable to recover it. 00:38:28.376 [2024-10-09 11:18:48.127761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.376 [2024-10-09 11:18:48.127769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.376 qpair failed and we were unable to recover it. 00:38:28.376 [2024-10-09 11:18:48.128064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.376 [2024-10-09 11:18:48.128072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.376 qpair failed and we were unable to recover it. 
00:38:28.376 [2024-10-09 11:18:48.128348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.376 [2024-10-09 11:18:48.128356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.376 qpair failed and we were unable to recover it. 00:38:28.376 [2024-10-09 11:18:48.128546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.376 [2024-10-09 11:18:48.128555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.376 qpair failed and we were unable to recover it. 00:38:28.376 [2024-10-09 11:18:48.128729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.376 [2024-10-09 11:18:48.128737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.376 qpair failed and we were unable to recover it. 00:38:28.376 [2024-10-09 11:18:48.129054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.376 [2024-10-09 11:18:48.129063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.376 qpair failed and we were unable to recover it. 00:38:28.376 [2024-10-09 11:18:48.129367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.376 [2024-10-09 11:18:48.129375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.376 qpair failed and we were unable to recover it. 00:38:28.376 [2024-10-09 11:18:48.129539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.376 [2024-10-09 11:18:48.129546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.376 qpair failed and we were unable to recover it. 00:38:28.376 [2024-10-09 11:18:48.129700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.376 [2024-10-09 11:18:48.129707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.376 qpair failed and we were unable to recover it. 00:38:28.376 [2024-10-09 11:18:48.129978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.376 [2024-10-09 11:18:48.129986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.376 qpair failed and we were unable to recover it. 00:38:28.376 [2024-10-09 11:18:48.130281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.377 [2024-10-09 11:18:48.130289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.377 qpair failed and we were unable to recover it. 00:38:28.377 [2024-10-09 11:18:48.130596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.377 [2024-10-09 11:18:48.130604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.377 qpair failed and we were unable to recover it. 
00:38:28.377 [2024-10-09 11:18:48.130918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.377 [2024-10-09 11:18:48.130926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.377 qpair failed and we were unable to recover it. 00:38:28.377 [2024-10-09 11:18:48.131221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.377 [2024-10-09 11:18:48.131228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.377 qpair failed and we were unable to recover it. 00:38:28.377 [2024-10-09 11:18:48.131542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.377 [2024-10-09 11:18:48.131550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.377 qpair failed and we were unable to recover it. 00:38:28.377 [2024-10-09 11:18:48.131751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.377 [2024-10-09 11:18:48.131759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.377 qpair failed and we were unable to recover it. 00:38:28.377 [2024-10-09 11:18:48.132075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.377 [2024-10-09 11:18:48.132083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.377 qpair failed and we were unable to recover it. 00:38:28.377 [2024-10-09 11:18:48.132391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.377 [2024-10-09 11:18:48.132398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.377 qpair failed and we were unable to recover it. 00:38:28.377 [2024-10-09 11:18:48.132436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.377 [2024-10-09 11:18:48.132443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.377 qpair failed and we were unable to recover it. 00:38:28.377 [2024-10-09 11:18:48.132557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.377 [2024-10-09 11:18:48.132565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.377 qpair failed and we were unable to recover it. 00:38:28.377 [2024-10-09 11:18:48.132902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.377 [2024-10-09 11:18:48.132911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.377 qpair failed and we were unable to recover it. 00:38:28.377 [2024-10-09 11:18:48.133216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.377 [2024-10-09 11:18:48.133223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.377 qpair failed and we were unable to recover it. 
00:38:28.377 [2024-10-09 11:18:48.133376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.377 [2024-10-09 11:18:48.133385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.377 qpair failed and we were unable to recover it. 00:38:28.377 [2024-10-09 11:18:48.133720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.377 [2024-10-09 11:18:48.133728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.377 qpair failed and we were unable to recover it. 00:38:28.377 [2024-10-09 11:18:48.134002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.377 [2024-10-09 11:18:48.134010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.377 qpair failed and we were unable to recover it. 00:38:28.377 [2024-10-09 11:18:48.134169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.377 [2024-10-09 11:18:48.134178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.377 qpair failed and we were unable to recover it. 00:38:28.377 [2024-10-09 11:18:48.134487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.377 [2024-10-09 11:18:48.134496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.377 qpair failed and we were unable to recover it. 00:38:28.377 [2024-10-09 11:18:48.134869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.377 [2024-10-09 11:18:48.134877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.377 qpair failed and we were unable to recover it. 00:38:28.377 [2024-10-09 11:18:48.135192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.377 [2024-10-09 11:18:48.135199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.377 qpair failed and we were unable to recover it. 00:38:28.377 [2024-10-09 11:18:48.135378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.377 [2024-10-09 11:18:48.135386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.377 qpair failed and we were unable to recover it. 00:38:28.377 [2024-10-09 11:18:48.135567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.377 [2024-10-09 11:18:48.135575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.377 qpair failed and we were unable to recover it. 00:38:28.377 [2024-10-09 11:18:48.135790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.377 [2024-10-09 11:18:48.135799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.377 qpair failed and we were unable to recover it. 
00:38:28.377 [2024-10-09 11:18:48.135954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.377 [2024-10-09 11:18:48.135962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.377 qpair failed and we were unable to recover it. 00:38:28.377 [2024-10-09 11:18:48.136278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.377 [2024-10-09 11:18:48.136286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.377 qpair failed and we were unable to recover it. 00:38:28.377 [2024-10-09 11:18:48.136673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.377 [2024-10-09 11:18:48.136681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.377 qpair failed and we were unable to recover it. 00:38:28.377 [2024-10-09 11:18:48.137012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.377 [2024-10-09 11:18:48.137021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.377 qpair failed and we were unable to recover it. 00:38:28.377 [2024-10-09 11:18:48.137185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.377 [2024-10-09 11:18:48.137192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.377 qpair failed and we were unable to recover it. 00:38:28.377 [2024-10-09 11:18:48.137504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.377 [2024-10-09 11:18:48.137512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.377 qpair failed and we were unable to recover it. 00:38:28.377 [2024-10-09 11:18:48.137808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.377 [2024-10-09 11:18:48.137815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.377 qpair failed and we were unable to recover it. 00:38:28.377 [2024-10-09 11:18:48.138127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.377 [2024-10-09 11:18:48.138134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.377 qpair failed and we were unable to recover it. 00:38:28.377 [2024-10-09 11:18:48.138172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.377 [2024-10-09 11:18:48.138180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.377 qpair failed and we were unable to recover it. 00:38:28.377 [2024-10-09 11:18:48.138474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.377 [2024-10-09 11:18:48.138483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.377 qpair failed and we were unable to recover it. 
00:38:28.377 [2024-10-09 11:18:48.138771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.377 [2024-10-09 11:18:48.138779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.377 qpair failed and we were unable to recover it. 00:38:28.377 [2024-10-09 11:18:48.139096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.377 [2024-10-09 11:18:48.139104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.377 qpair failed and we were unable to recover it. 00:38:28.377 [2024-10-09 11:18:48.139411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.377 [2024-10-09 11:18:48.139418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.377 qpair failed and we were unable to recover it. 00:38:28.377 [2024-10-09 11:18:48.139729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.377 [2024-10-09 11:18:48.139736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.377 qpair failed and we were unable to recover it. 00:38:28.377 [2024-10-09 11:18:48.139925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.377 [2024-10-09 11:18:48.139932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.377 qpair failed and we were unable to recover it. 00:38:28.377 [2024-10-09 11:18:48.140241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.377 [2024-10-09 11:18:48.140249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.377 qpair failed and we were unable to recover it. 00:38:28.377 [2024-10-09 11:18:48.140456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.377 [2024-10-09 11:18:48.140464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.377 qpair failed and we were unable to recover it. 00:38:28.377 [2024-10-09 11:18:48.140759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.377 [2024-10-09 11:18:48.140767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.377 qpair failed and we were unable to recover it. 00:38:28.377 [2024-10-09 11:18:48.140929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.377 [2024-10-09 11:18:48.140937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.377 qpair failed and we were unable to recover it. 00:38:28.378 [2024-10-09 11:18:48.141234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.378 [2024-10-09 11:18:48.141243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.378 qpair failed and we were unable to recover it. 
00:38:28.378 [2024-10-09 11:18:48.141414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.378 [2024-10-09 11:18:48.141421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.378 qpair failed and we were unable to recover it. 00:38:28.378 [2024-10-09 11:18:48.141705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.378 [2024-10-09 11:18:48.141714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.378 qpair failed and we were unable to recover it. 00:38:28.378 [2024-10-09 11:18:48.142021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.378 [2024-10-09 11:18:48.142029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.378 qpair failed and we were unable to recover it. 00:38:28.378 [2024-10-09 11:18:48.142201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.378 [2024-10-09 11:18:48.142208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.378 qpair failed and we were unable to recover it. 00:38:28.378 [2024-10-09 11:18:48.142476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.378 [2024-10-09 11:18:48.142486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.378 qpair failed and we were unable to recover it. 00:38:28.378 [2024-10-09 11:18:48.142640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.378 [2024-10-09 11:18:48.142648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.378 qpair failed and we were unable to recover it. 00:38:28.378 [2024-10-09 11:18:48.142955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.378 [2024-10-09 11:18:48.142962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.378 qpair failed and we were unable to recover it. 00:38:28.378 [2024-10-09 11:18:48.143268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.378 [2024-10-09 11:18:48.143277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.378 qpair failed and we were unable to recover it. 00:38:28.378 [2024-10-09 11:18:48.143483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.378 [2024-10-09 11:18:48.143493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.378 qpair failed and we were unable to recover it. 00:38:28.378 [2024-10-09 11:18:48.143814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.378 [2024-10-09 11:18:48.143822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.378 qpair failed and we were unable to recover it. 
00:38:28.378 [2024-10-09 11:18:48.144000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.378 [2024-10-09 11:18:48.144008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.378 qpair failed and we were unable to recover it. 00:38:28.378 [2024-10-09 11:18:48.144076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.378 [2024-10-09 11:18:48.144083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.378 qpair failed and we were unable to recover it. 00:38:28.378 [2024-10-09 11:18:48.144378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.378 [2024-10-09 11:18:48.144386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.378 qpair failed and we were unable to recover it. 00:38:28.378 [2024-10-09 11:18:48.144704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.378 [2024-10-09 11:18:48.144713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.378 qpair failed and we were unable to recover it. 00:38:28.378 [2024-10-09 11:18:48.145019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.378 [2024-10-09 11:18:48.145026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.378 qpair failed and we were unable to recover it. 00:38:28.378 [2024-10-09 11:18:48.145354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.378 [2024-10-09 11:18:48.145362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.378 qpair failed and we were unable to recover it. 00:38:28.378 [2024-10-09 11:18:48.145690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.378 [2024-10-09 11:18:48.145698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.378 qpair failed and we were unable to recover it. 00:38:28.378 [2024-10-09 11:18:48.146017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.378 [2024-10-09 11:18:48.146025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.378 qpair failed and we were unable to recover it. 00:38:28.378 [2024-10-09 11:18:48.146355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.378 [2024-10-09 11:18:48.146364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.378 qpair failed and we were unable to recover it. 00:38:28.378 [2024-10-09 11:18:48.146656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.378 [2024-10-09 11:18:48.146664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.378 qpair failed and we were unable to recover it. 
00:38:28.378 [2024-10-09 11:18:48.146833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.378 [2024-10-09 11:18:48.146841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.378 qpair failed and we were unable to recover it. 00:38:28.378 [2024-10-09 11:18:48.147000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.378 [2024-10-09 11:18:48.147008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.378 qpair failed and we were unable to recover it. 00:38:28.378 [2024-10-09 11:18:48.147358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.378 [2024-10-09 11:18:48.147366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.378 qpair failed and we were unable to recover it. 00:38:28.378 [2024-10-09 11:18:48.147688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.378 [2024-10-09 11:18:48.147696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.378 qpair failed and we were unable to recover it. 00:38:28.378 [2024-10-09 11:18:48.147889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.378 [2024-10-09 11:18:48.147897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.378 qpair failed and we were unable to recover it. 00:38:28.378 [2024-10-09 11:18:48.148163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.378 [2024-10-09 11:18:48.148171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.378 qpair failed and we were unable to recover it. 00:38:28.378 [2024-10-09 11:18:48.148478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.378 [2024-10-09 11:18:48.148486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.378 qpair failed and we were unable to recover it. 00:38:28.378 [2024-10-09 11:18:48.148773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.378 [2024-10-09 11:18:48.148780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.378 qpair failed and we were unable to recover it. 00:38:28.378 [2024-10-09 11:18:48.149081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.378 [2024-10-09 11:18:48.149088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.378 qpair failed and we were unable to recover it. 00:38:28.378 [2024-10-09 11:18:48.149399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.378 [2024-10-09 11:18:48.149406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.378 qpair failed and we were unable to recover it. 
00:38:28.378 [2024-10-09 11:18:48.149712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.378 [2024-10-09 11:18:48.149720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.378 qpair failed and we were unable to recover it. 00:38:28.378 [2024-10-09 11:18:48.150034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.378 [2024-10-09 11:18:48.150042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.378 qpair failed and we were unable to recover it. 00:38:28.378 [2024-10-09 11:18:48.150343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.378 [2024-10-09 11:18:48.150351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.378 qpair failed and we were unable to recover it. 00:38:28.378 [2024-10-09 11:18:48.150632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.378 [2024-10-09 11:18:48.150641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.378 qpair failed and we were unable to recover it. 00:38:28.378 [2024-10-09 11:18:48.150950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.378 [2024-10-09 11:18:48.150958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.378 qpair failed and we were unable to recover it. 00:38:28.378 [2024-10-09 11:18:48.151156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.378 [2024-10-09 11:18:48.151165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.378 qpair failed and we were unable to recover it. 00:38:28.378 [2024-10-09 11:18:48.151473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.378 [2024-10-09 11:18:48.151481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.378 qpair failed and we were unable to recover it. 00:38:28.378 [2024-10-09 11:18:48.151791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.378 [2024-10-09 11:18:48.151799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.378 qpair failed and we were unable to recover it. 00:38:28.378 [2024-10-09 11:18:48.152111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.378 [2024-10-09 11:18:48.152120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.378 qpair failed and we were unable to recover it. 00:38:28.379 [2024-10-09 11:18:48.152291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.379 [2024-10-09 11:18:48.152299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.379 qpair failed and we were unable to recover it. 
00:38:28.379 [2024-10-09 11:18:48.152613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.379 [2024-10-09 11:18:48.152621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.379 qpair failed and we were unable to recover it. 00:38:28.379 [2024-10-09 11:18:48.152922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.379 [2024-10-09 11:18:48.152930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.379 qpair failed and we were unable to recover it. 00:38:28.379 [2024-10-09 11:18:48.153259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.379 [2024-10-09 11:18:48.153266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.379 qpair failed and we were unable to recover it. 00:38:28.379 [2024-10-09 11:18:48.153582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.379 [2024-10-09 11:18:48.153590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.379 qpair failed and we were unable to recover it. 00:38:28.379 [2024-10-09 11:18:48.153780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.379 [2024-10-09 11:18:48.153788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.379 qpair failed and we were unable to recover it. 00:38:28.379 [2024-10-09 11:18:48.154092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.379 [2024-10-09 11:18:48.154101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.379 qpair failed and we were unable to recover it. 00:38:28.379 [2024-10-09 11:18:48.154431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.379 [2024-10-09 11:18:48.154439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.379 qpair failed and we were unable to recover it. 00:38:28.379 [2024-10-09 11:18:48.154752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.379 [2024-10-09 11:18:48.154760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.379 qpair failed and we were unable to recover it. 00:38:28.379 [2024-10-09 11:18:48.155102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.379 [2024-10-09 11:18:48.155112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.379 qpair failed and we were unable to recover it. 00:38:28.379 [2024-10-09 11:18:48.155410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.379 [2024-10-09 11:18:48.155418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.379 qpair failed and we were unable to recover it. 
00:38:28.379 [2024-10-09 11:18:48.155787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:28.379 [2024-10-09 11:18:48.155796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420
00:38:28.379 qpair failed and we were unable to recover it.
[the identical three-line error above repeats for every connect retry from 11:18:48.155787 through 11:18:48.211917 (wall clock 00:38:28.379 to 00:38:28.394); only the timestamps vary, while errno = 111, tqpair=0x7f9f10000b90, addr=10.0.0.2, and port=4420 stay constant throughout]
00:38:28.394 [2024-10-09 11:18:48.212070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.394 [2024-10-09 11:18:48.212076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.394 qpair failed and we were unable to recover it. 00:38:28.394 [2024-10-09 11:18:48.212239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.394 [2024-10-09 11:18:48.212245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.394 qpair failed and we were unable to recover it. 00:38:28.394 [2024-10-09 11:18:48.212542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.394 [2024-10-09 11:18:48.212549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.394 qpair failed and we were unable to recover it. 00:38:28.394 [2024-10-09 11:18:48.212863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.394 [2024-10-09 11:18:48.212870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.394 qpair failed and we were unable to recover it. 00:38:28.394 [2024-10-09 11:18:48.213186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.394 [2024-10-09 11:18:48.213192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.394 qpair failed and we were unable to recover it. 00:38:28.394 [2024-10-09 11:18:48.213523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.394 [2024-10-09 11:18:48.213530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.394 qpair failed and we were unable to recover it. 00:38:28.394 [2024-10-09 11:18:48.213564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.394 [2024-10-09 11:18:48.213570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.394 qpair failed and we were unable to recover it. 00:38:28.394 [2024-10-09 11:18:48.213873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.394 [2024-10-09 11:18:48.213880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.394 qpair failed and we were unable to recover it. 00:38:28.394 [2024-10-09 11:18:48.214218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.394 [2024-10-09 11:18:48.214225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.394 qpair failed and we were unable to recover it. 00:38:28.394 [2024-10-09 11:18:48.214561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.394 [2024-10-09 11:18:48.214568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.394 qpair failed and we were unable to recover it. 
00:38:28.394 [2024-10-09 11:18:48.214886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.394 [2024-10-09 11:18:48.214893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.394 qpair failed and we were unable to recover it. 00:38:28.394 [2024-10-09 11:18:48.215211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.395 [2024-10-09 11:18:48.215217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.395 qpair failed and we were unable to recover it. 00:38:28.395 [2024-10-09 11:18:48.215512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.395 [2024-10-09 11:18:48.215519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.395 qpair failed and we were unable to recover it. 00:38:28.395 [2024-10-09 11:18:48.215847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.395 [2024-10-09 11:18:48.215854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.395 qpair failed and we were unable to recover it. 00:38:28.395 [2024-10-09 11:18:48.216168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.395 [2024-10-09 11:18:48.216175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.395 qpair failed and we were unable to recover it. 00:38:28.395 [2024-10-09 11:18:48.216470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.395 [2024-10-09 11:18:48.216480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.395 qpair failed and we were unable to recover it. 00:38:28.395 [2024-10-09 11:18:48.216517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.395 [2024-10-09 11:18:48.216524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.395 qpair failed and we were unable to recover it. 00:38:28.395 [2024-10-09 11:18:48.216678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.395 [2024-10-09 11:18:48.216685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.395 qpair failed and we were unable to recover it. 00:38:28.395 [2024-10-09 11:18:48.217072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.395 [2024-10-09 11:18:48.217079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.395 qpair failed and we were unable to recover it. 00:38:28.395 [2024-10-09 11:18:48.217399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.395 [2024-10-09 11:18:48.217405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.395 qpair failed and we were unable to recover it. 
00:38:28.395 [2024-10-09 11:18:48.217533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.395 [2024-10-09 11:18:48.217547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.395 qpair failed and we were unable to recover it. 00:38:28.395 [2024-10-09 11:18:48.217837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.395 [2024-10-09 11:18:48.217844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.395 qpair failed and we were unable to recover it. 00:38:28.395 [2024-10-09 11:18:48.218253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.395 [2024-10-09 11:18:48.218259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.395 qpair failed and we were unable to recover it. 00:38:28.395 [2024-10-09 11:18:48.218566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.395 [2024-10-09 11:18:48.218573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.395 qpair failed and we were unable to recover it. 00:38:28.395 [2024-10-09 11:18:48.218750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.396 [2024-10-09 11:18:48.218757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.396 qpair failed and we were unable to recover it. 00:38:28.396 [2024-10-09 11:18:48.218915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.396 [2024-10-09 11:18:48.218923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.396 qpair failed and we were unable to recover it. 00:38:28.396 [2024-10-09 11:18:48.219134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.396 [2024-10-09 11:18:48.219141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.396 qpair failed and we were unable to recover it. 00:38:28.396 [2024-10-09 11:18:48.219469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.396 [2024-10-09 11:18:48.219476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.396 qpair failed and we were unable to recover it. 00:38:28.397 [2024-10-09 11:18:48.219637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.397 [2024-10-09 11:18:48.219645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.397 qpair failed and we were unable to recover it. 00:38:28.397 [2024-10-09 11:18:48.219993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.397 [2024-10-09 11:18:48.220000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.397 qpair failed and we were unable to recover it. 
00:38:28.397 [2024-10-09 11:18:48.220287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.397 [2024-10-09 11:18:48.220294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.397 qpair failed and we were unable to recover it. 00:38:28.397 [2024-10-09 11:18:48.220624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.397 [2024-10-09 11:18:48.220631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.397 qpair failed and we were unable to recover it. 00:38:28.397 [2024-10-09 11:18:48.220945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.397 [2024-10-09 11:18:48.220952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.397 qpair failed and we were unable to recover it. 00:38:28.397 [2024-10-09 11:18:48.221126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.397 [2024-10-09 11:18:48.221132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.397 qpair failed and we were unable to recover it. 00:38:28.397 [2024-10-09 11:18:48.221472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.397 [2024-10-09 11:18:48.221479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.397 qpair failed and we were unable to recover it. 00:38:28.397 [2024-10-09 11:18:48.221790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.397 [2024-10-09 11:18:48.221797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.397 qpair failed and we were unable to recover it. 00:38:28.397 [2024-10-09 11:18:48.222110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.397 [2024-10-09 11:18:48.222117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.397 qpair failed and we were unable to recover it. 00:38:28.397 [2024-10-09 11:18:48.222422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.397 [2024-10-09 11:18:48.222429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.397 qpair failed and we were unable to recover it. 00:38:28.398 [2024-10-09 11:18:48.222657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.398 [2024-10-09 11:18:48.222664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.398 qpair failed and we were unable to recover it. 00:38:28.398 [2024-10-09 11:18:48.222856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.398 [2024-10-09 11:18:48.222863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.398 qpair failed and we were unable to recover it. 
00:38:28.398 [2024-10-09 11:18:48.223141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.398 [2024-10-09 11:18:48.223148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.398 qpair failed and we were unable to recover it. 00:38:28.398 [2024-10-09 11:18:48.223456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.398 [2024-10-09 11:18:48.223462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.398 qpair failed and we were unable to recover it. 00:38:28.398 [2024-10-09 11:18:48.223660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.398 [2024-10-09 11:18:48.223667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.398 qpair failed and we were unable to recover it. 00:38:28.398 [2024-10-09 11:18:48.223877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.398 [2024-10-09 11:18:48.223884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.398 qpair failed and we were unable to recover it. 00:38:28.398 [2024-10-09 11:18:48.224240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.398 [2024-10-09 11:18:48.224246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.398 qpair failed and we were unable to recover it. 00:38:28.398 [2024-10-09 11:18:48.224583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.398 [2024-10-09 11:18:48.224591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.398 qpair failed and we were unable to recover it. 00:38:28.398 [2024-10-09 11:18:48.224919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.398 [2024-10-09 11:18:48.224926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.398 qpair failed and we were unable to recover it. 00:38:28.398 [2024-10-09 11:18:48.225073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.398 [2024-10-09 11:18:48.225080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.398 qpair failed and we were unable to recover it. 00:38:28.398 [2024-10-09 11:18:48.225288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.398 [2024-10-09 11:18:48.225294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.398 qpair failed and we were unable to recover it. 00:38:28.398 [2024-10-09 11:18:48.225612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.398 [2024-10-09 11:18:48.225619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.398 qpair failed and we were unable to recover it. 
00:38:28.398 [2024-10-09 11:18:48.225943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.398 [2024-10-09 11:18:48.225950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.398 qpair failed and we were unable to recover it. 00:38:28.398 [2024-10-09 11:18:48.226127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.398 [2024-10-09 11:18:48.226134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.398 qpair failed and we were unable to recover it. 00:38:28.398 [2024-10-09 11:18:48.226289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.398 [2024-10-09 11:18:48.226297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.398 qpair failed and we were unable to recover it. 00:38:28.398 [2024-10-09 11:18:48.226592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.398 [2024-10-09 11:18:48.226599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.398 qpair failed and we were unable to recover it. 00:38:28.398 [2024-10-09 11:18:48.226903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.398 [2024-10-09 11:18:48.226910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.398 qpair failed and we were unable to recover it. 00:38:28.398 [2024-10-09 11:18:48.227177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.398 [2024-10-09 11:18:48.227186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.398 qpair failed and we were unable to recover it. 00:38:28.398 [2024-10-09 11:18:48.227499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.398 [2024-10-09 11:18:48.227506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.398 qpair failed and we were unable to recover it. 00:38:28.398 [2024-10-09 11:18:48.227706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.398 [2024-10-09 11:18:48.227713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.398 qpair failed and we were unable to recover it. 00:38:28.398 [2024-10-09 11:18:48.227898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.398 [2024-10-09 11:18:48.227905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.398 qpair failed and we were unable to recover it. 00:38:28.398 [2024-10-09 11:18:48.228184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.398 [2024-10-09 11:18:48.228191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.398 qpair failed and we were unable to recover it. 
00:38:28.398 [2024-10-09 11:18:48.228367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.398 [2024-10-09 11:18:48.228374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.398 qpair failed and we were unable to recover it. 00:38:28.398 [2024-10-09 11:18:48.228562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.398 [2024-10-09 11:18:48.228569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.398 qpair failed and we were unable to recover it. 00:38:28.398 [2024-10-09 11:18:48.228810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.398 [2024-10-09 11:18:48.228816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.398 qpair failed and we were unable to recover it. 00:38:28.398 [2024-10-09 11:18:48.229153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.398 [2024-10-09 11:18:48.229159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.398 qpair failed and we were unable to recover it. 00:38:28.398 [2024-10-09 11:18:48.229449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.398 [2024-10-09 11:18:48.229456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.399 qpair failed and we were unable to recover it. 00:38:28.399 [2024-10-09 11:18:48.229642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.399 [2024-10-09 11:18:48.229650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.399 qpair failed and we were unable to recover it. 00:38:28.399 [2024-10-09 11:18:48.229958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.399 [2024-10-09 11:18:48.229966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.399 qpair failed and we were unable to recover it. 00:38:28.399 [2024-10-09 11:18:48.230275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.399 [2024-10-09 11:18:48.230282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.399 qpair failed and we were unable to recover it. 00:38:28.399 [2024-10-09 11:18:48.230604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.399 [2024-10-09 11:18:48.230611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.399 qpair failed and we were unable to recover it. 00:38:28.399 [2024-10-09 11:18:48.230943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.399 [2024-10-09 11:18:48.230950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.399 qpair failed and we were unable to recover it. 
00:38:28.399 [2024-10-09 11:18:48.231115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.399 [2024-10-09 11:18:48.231122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.399 qpair failed and we were unable to recover it. 00:38:28.399 [2024-10-09 11:18:48.231451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.399 [2024-10-09 11:18:48.231458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.399 qpair failed and we were unable to recover it. 00:38:28.399 [2024-10-09 11:18:48.231650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.399 [2024-10-09 11:18:48.231657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.399 qpair failed and we were unable to recover it. 00:38:28.399 [2024-10-09 11:18:48.231825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.399 [2024-10-09 11:18:48.231832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.399 qpair failed and we were unable to recover it. 00:38:28.399 [2024-10-09 11:18:48.231987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.399 [2024-10-09 11:18:48.231995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.399 qpair failed and we were unable to recover it. 00:38:28.399 [2024-10-09 11:18:48.232300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.399 [2024-10-09 11:18:48.232307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.399 qpair failed and we were unable to recover it. 00:38:28.399 [2024-10-09 11:18:48.232545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.399 [2024-10-09 11:18:48.232552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.399 qpair failed and we were unable to recover it. 00:38:28.399 [2024-10-09 11:18:48.232921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.399 [2024-10-09 11:18:48.232929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.399 qpair failed and we were unable to recover it. 00:38:28.399 [2024-10-09 11:18:48.233103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.399 [2024-10-09 11:18:48.233111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.399 qpair failed and we were unable to recover it. 00:38:28.399 [2024-10-09 11:18:48.233302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.399 [2024-10-09 11:18:48.233309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.399 qpair failed and we were unable to recover it. 
00:38:28.399 [2024-10-09 11:18:48.233559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.399 [2024-10-09 11:18:48.233566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.399 qpair failed and we were unable to recover it. 00:38:28.399 [2024-10-09 11:18:48.233890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.399 [2024-10-09 11:18:48.233898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.399 qpair failed and we were unable to recover it. 00:38:28.399 [2024-10-09 11:18:48.234076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.399 [2024-10-09 11:18:48.234083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.399 qpair failed and we were unable to recover it. 00:38:28.399 [2024-10-09 11:18:48.234360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.399 [2024-10-09 11:18:48.234368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.399 qpair failed and we were unable to recover it. 00:38:28.399 [2024-10-09 11:18:48.234679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.399 [2024-10-09 11:18:48.234686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.399 qpair failed and we were unable to recover it. 00:38:28.399 [2024-10-09 11:18:48.234835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.399 [2024-10-09 11:18:48.234841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.399 qpair failed and we were unable to recover it. 00:38:28.399 [2024-10-09 11:18:48.235149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.399 [2024-10-09 11:18:48.235157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.399 qpair failed and we were unable to recover it. 00:38:28.399 [2024-10-09 11:18:48.235490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.399 [2024-10-09 11:18:48.235498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.399 qpair failed and we were unable to recover it. 00:38:28.399 [2024-10-09 11:18:48.235661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.399 [2024-10-09 11:18:48.235667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.399 qpair failed and we were unable to recover it. 00:38:28.399 [2024-10-09 11:18:48.235945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.399 [2024-10-09 11:18:48.235952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.399 qpair failed and we were unable to recover it. 
00:38:28.399 [2024-10-09 11:18:48.236135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.399 [2024-10-09 11:18:48.236142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.399 qpair failed and we were unable to recover it. 00:38:28.399 [2024-10-09 11:18:48.236322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.399 [2024-10-09 11:18:48.236328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.399 qpair failed and we were unable to recover it. 00:38:28.399 [2024-10-09 11:18:48.236538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.399 [2024-10-09 11:18:48.236545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.399 qpair failed and we were unable to recover it. 00:38:28.399 [2024-10-09 11:18:48.236926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.399 [2024-10-09 11:18:48.236933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.399 qpair failed and we were unable to recover it. 00:38:28.399 [2024-10-09 11:18:48.237227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.399 [2024-10-09 11:18:48.237234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.400 qpair failed and we were unable to recover it. 00:38:28.400 [2024-10-09 11:18:48.237404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.400 [2024-10-09 11:18:48.237413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.400 qpair failed and we were unable to recover it. 00:38:28.400 [2024-10-09 11:18:48.237445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.400 [2024-10-09 11:18:48.237451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.400 qpair failed and we were unable to recover it. 00:38:28.400 [2024-10-09 11:18:48.237738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.400 [2024-10-09 11:18:48.237744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.400 qpair failed and we were unable to recover it. 00:38:28.400 [2024-10-09 11:18:48.238084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.400 [2024-10-09 11:18:48.238091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.400 qpair failed and we were unable to recover it. 00:38:28.400 [2024-10-09 11:18:48.238271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.400 [2024-10-09 11:18:48.238278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.400 qpair failed and we were unable to recover it. 
00:38:28.400 [2024-10-09 11:18:48.238579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.400 [2024-10-09 11:18:48.238586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.400 qpair failed and we were unable to recover it. 00:38:28.400 [2024-10-09 11:18:48.238769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.400 [2024-10-09 11:18:48.238776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.400 qpair failed and we were unable to recover it. 00:38:28.400 [2024-10-09 11:18:48.239016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.400 [2024-10-09 11:18:48.239023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.400 qpair failed and we were unable to recover it. 00:38:28.400 [2024-10-09 11:18:48.239166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.400 [2024-10-09 11:18:48.239173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.400 qpair failed and we were unable to recover it. 00:38:28.400 [2024-10-09 11:18:48.239244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.400 [2024-10-09 11:18:48.239251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.400 qpair failed and we were unable to recover it. 00:38:28.400 [2024-10-09 11:18:48.239310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.400 [2024-10-09 11:18:48.239317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.400 qpair failed and we were unable to recover it. 00:38:28.400 [2024-10-09 11:18:48.239630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.400 [2024-10-09 11:18:48.239637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.400 qpair failed and we were unable to recover it. 00:38:28.400 [2024-10-09 11:18:48.239966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.400 [2024-10-09 11:18:48.239973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.400 qpair failed and we were unable to recover it. 00:38:28.400 [2024-10-09 11:18:48.240138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.400 [2024-10-09 11:18:48.240145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.400 qpair failed and we were unable to recover it. 00:38:28.400 [2024-10-09 11:18:48.240533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.400 [2024-10-09 11:18:48.240540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.400 qpair failed and we were unable to recover it. 
00:38:28.400 [2024-10-09 11:18:48.240847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.400 [2024-10-09 11:18:48.240854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.400 qpair failed and we were unable to recover it. 00:38:28.400 [2024-10-09 11:18:48.241178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.400 [2024-10-09 11:18:48.241185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.400 qpair failed and we were unable to recover it. 00:38:28.400 [2024-10-09 11:18:48.241483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.400 [2024-10-09 11:18:48.241491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.400 qpair failed and we were unable to recover it. 00:38:28.400 [2024-10-09 11:18:48.241793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.400 [2024-10-09 11:18:48.241800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.400 qpair failed and we were unable to recover it. 00:38:28.400 [2024-10-09 11:18:48.242114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.400 [2024-10-09 11:18:48.242121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.400 qpair failed and we were unable to recover it. 00:38:28.400 [2024-10-09 11:18:48.242425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.400 [2024-10-09 11:18:48.242432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.400 qpair failed and we were unable to recover it. 00:38:28.400 [2024-10-09 11:18:48.242808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.400 [2024-10-09 11:18:48.242816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.400 qpair failed and we were unable to recover it. 00:38:28.400 [2024-10-09 11:18:48.243144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.400 [2024-10-09 11:18:48.243151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.400 qpair failed and we were unable to recover it. 00:38:28.400 [2024-10-09 11:18:48.243470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.400 [2024-10-09 11:18:48.243477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.400 qpair failed and we were unable to recover it. 00:38:28.400 [2024-10-09 11:18:48.243681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.400 [2024-10-09 11:18:48.243688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.400 qpair failed and we were unable to recover it. 
00:38:28.400 [2024-10-09 11:18:48.243973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.400 [2024-10-09 11:18:48.243980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.400 qpair failed and we were unable to recover it. 00:38:28.400 [2024-10-09 11:18:48.244151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.400 [2024-10-09 11:18:48.244158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.400 qpair failed and we were unable to recover it. 00:38:28.400 [2024-10-09 11:18:48.244471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.400 [2024-10-09 11:18:48.244479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.400 qpair failed and we were unable to recover it. 00:38:28.400 [2024-10-09 11:18:48.244757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.401 [2024-10-09 11:18:48.244764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.401 qpair failed and we were unable to recover it. 00:38:28.401 [2024-10-09 11:18:48.244937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.401 [2024-10-09 11:18:48.244945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.401 qpair failed and we were unable to recover it. 00:38:28.401 [2024-10-09 11:18:48.245302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.401 [2024-10-09 11:18:48.245309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.401 qpair failed and we were unable to recover it. 00:38:28.401 [2024-10-09 11:18:48.245614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.401 [2024-10-09 11:18:48.245622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.401 qpair failed and we were unable to recover it. 00:38:28.401 [2024-10-09 11:18:48.245942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.401 [2024-10-09 11:18:48.245949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.401 qpair failed and we were unable to recover it. 00:38:28.401 [2024-10-09 11:18:48.246262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.401 [2024-10-09 11:18:48.246269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.401 qpair failed and we were unable to recover it. 00:38:28.401 [2024-10-09 11:18:48.246548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.401 [2024-10-09 11:18:48.246555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.401 qpair failed and we were unable to recover it. 
00:38:28.401 [2024-10-09 11:18:48.246867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:28.401 [2024-10-09 11:18:48.246875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420
00:38:28.401 qpair failed and we were unable to recover it.
[... the same three-line error repeats for every reconnect attempt between 11:18:48.246 and 11:18:48.301 (on the order of two hundred occurrences in this excerpt): posix.c:1055:posix_sock_create reports connect() failed with errno = 111, nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420, and each attempt ends with "qpair failed and we were unable to recover it." Only the microsecond timestamps differ between occurrences. ...]
00:38:28.418 [2024-10-09 11:18:48.301505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:28.418 [2024-10-09 11:18:48.301512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420
00:38:28.418 qpair failed and we were unable to recover it.
00:38:28.418 [2024-10-09 11:18:48.301888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.418 [2024-10-09 11:18:48.301896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.418 qpair failed and we were unable to recover it. 00:38:28.418 [2024-10-09 11:18:48.302220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.418 [2024-10-09 11:18:48.302226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.418 qpair failed and we were unable to recover it. 00:38:28.418 [2024-10-09 11:18:48.302427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.418 [2024-10-09 11:18:48.302434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.418 qpair failed and we were unable to recover it. 00:38:28.418 [2024-10-09 11:18:48.302745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.418 [2024-10-09 11:18:48.302752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.418 qpair failed and we were unable to recover it. 00:38:28.418 [2024-10-09 11:18:48.303064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.418 [2024-10-09 11:18:48.303071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.418 qpair failed and we were unable to recover it. 00:38:28.418 [2024-10-09 11:18:48.303396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.418 [2024-10-09 11:18:48.303403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.418 qpair failed and we were unable to recover it. 00:38:28.418 [2024-10-09 11:18:48.303737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.418 [2024-10-09 11:18:48.303745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.418 qpair failed and we were unable to recover it. 00:38:28.418 [2024-10-09 11:18:48.304062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.418 [2024-10-09 11:18:48.304070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.418 qpair failed and we were unable to recover it. 00:38:28.418 [2024-10-09 11:18:48.304253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.418 [2024-10-09 11:18:48.304261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.418 qpair failed and we were unable to recover it. 00:38:28.418 [2024-10-09 11:18:48.304428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.418 [2024-10-09 11:18:48.304435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.418 qpair failed and we were unable to recover it. 
00:38:28.418 [2024-10-09 11:18:48.304752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.419 [2024-10-09 11:18:48.304759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.419 qpair failed and we were unable to recover it. 00:38:28.419 [2024-10-09 11:18:48.304940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.419 [2024-10-09 11:18:48.304948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.419 qpair failed and we were unable to recover it. 00:38:28.419 [2024-10-09 11:18:48.305258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.419 [2024-10-09 11:18:48.305266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.419 qpair failed and we were unable to recover it. 00:38:28.419 [2024-10-09 11:18:48.305577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.419 [2024-10-09 11:18:48.305585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.419 qpair failed and we were unable to recover it. 00:38:28.419 [2024-10-09 11:18:48.305767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.419 [2024-10-09 11:18:48.305774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.419 qpair failed and we were unable to recover it. 00:38:28.419 [2024-10-09 11:18:48.305938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.419 [2024-10-09 11:18:48.305946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.419 qpair failed and we were unable to recover it. 00:38:28.419 [2024-10-09 11:18:48.306125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.419 [2024-10-09 11:18:48.306132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.419 qpair failed and we were unable to recover it. 00:38:28.419 [2024-10-09 11:18:48.306496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.419 [2024-10-09 11:18:48.306503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.419 qpair failed and we were unable to recover it. 00:38:28.419 [2024-10-09 11:18:48.306789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.419 [2024-10-09 11:18:48.306804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.419 qpair failed and we were unable to recover it. 00:38:28.419 [2024-10-09 11:18:48.307137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.419 [2024-10-09 11:18:48.307145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.419 qpair failed and we were unable to recover it. 
00:38:28.419 [2024-10-09 11:18:48.307318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.419 [2024-10-09 11:18:48.307325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.419 qpair failed and we were unable to recover it. 00:38:28.420 [2024-10-09 11:18:48.307616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.420 [2024-10-09 11:18:48.307624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.420 qpair failed and we were unable to recover it. 00:38:28.420 [2024-10-09 11:18:48.307858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.420 [2024-10-09 11:18:48.307865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.420 qpair failed and we were unable to recover it. 00:38:28.420 [2024-10-09 11:18:48.308239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.420 [2024-10-09 11:18:48.308247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.420 qpair failed and we were unable to recover it. 00:38:28.420 [2024-10-09 11:18:48.308412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.420 [2024-10-09 11:18:48.308419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.420 qpair failed and we were unable to recover it. 00:38:28.420 [2024-10-09 11:18:48.308651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.421 [2024-10-09 11:18:48.308659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.421 qpair failed and we were unable to recover it. 00:38:28.421 [2024-10-09 11:18:48.308851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.421 [2024-10-09 11:18:48.308858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.421 qpair failed and we were unable to recover it. 00:38:28.421 [2024-10-09 11:18:48.309012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.421 [2024-10-09 11:18:48.309020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.421 qpair failed and we were unable to recover it. 00:38:28.421 [2024-10-09 11:18:48.309355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.421 [2024-10-09 11:18:48.309362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.421 qpair failed and we were unable to recover it. 00:38:28.421 [2024-10-09 11:18:48.309680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.421 [2024-10-09 11:18:48.309687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.421 qpair failed and we were unable to recover it. 
00:38:28.421 [2024-10-09 11:18:48.309731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.421 [2024-10-09 11:18:48.309737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.421 qpair failed and we were unable to recover it. 00:38:28.421 [2024-10-09 11:18:48.310032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.421 [2024-10-09 11:18:48.310039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.421 qpair failed and we were unable to recover it. 00:38:28.421 [2024-10-09 11:18:48.310219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.422 [2024-10-09 11:18:48.310227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.422 qpair failed and we were unable to recover it. 00:38:28.422 [2024-10-09 11:18:48.310533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.422 [2024-10-09 11:18:48.310542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.422 qpair failed and we were unable to recover it. 00:38:28.422 [2024-10-09 11:18:48.310721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.422 [2024-10-09 11:18:48.310729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.422 qpair failed and we were unable to recover it. 00:38:28.422 [2024-10-09 11:18:48.311026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.422 [2024-10-09 11:18:48.311035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.422 qpair failed and we were unable to recover it. 00:38:28.422 [2024-10-09 11:18:48.311342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.422 [2024-10-09 11:18:48.311349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.422 qpair failed and we were unable to recover it. 00:38:28.422 [2024-10-09 11:18:48.311533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.422 [2024-10-09 11:18:48.311541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.422 qpair failed and we were unable to recover it. 00:38:28.422 [2024-10-09 11:18:48.311810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.422 [2024-10-09 11:18:48.311824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.422 qpair failed and we were unable to recover it. 00:38:28.422 [2024-10-09 11:18:48.311979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.422 [2024-10-09 11:18:48.311986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.422 qpair failed and we were unable to recover it. 
00:38:28.422 [2024-10-09 11:18:48.312272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.422 [2024-10-09 11:18:48.312279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.422 qpair failed and we were unable to recover it. 00:38:28.422 [2024-10-09 11:18:48.312415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.422 [2024-10-09 11:18:48.312422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.422 qpair failed and we were unable to recover it. 00:38:28.422 [2024-10-09 11:18:48.312696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.422 [2024-10-09 11:18:48.312704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.422 qpair failed and we were unable to recover it. 00:38:28.422 [2024-10-09 11:18:48.313030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.422 [2024-10-09 11:18:48.313037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.422 qpair failed and we were unable to recover it. 00:38:28.423 [2024-10-09 11:18:48.313258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.423 [2024-10-09 11:18:48.313265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.423 qpair failed and we were unable to recover it. 00:38:28.423 [2024-10-09 11:18:48.313591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.423 [2024-10-09 11:18:48.313598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.423 qpair failed and we were unable to recover it. 00:38:28.423 [2024-10-09 11:18:48.313775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.423 [2024-10-09 11:18:48.313783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.423 qpair failed and we were unable to recover it. 00:38:28.423 [2024-10-09 11:18:48.313995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.423 [2024-10-09 11:18:48.314002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.423 qpair failed and we were unable to recover it. 00:38:28.423 [2024-10-09 11:18:48.314151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.423 [2024-10-09 11:18:48.314159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.423 qpair failed and we were unable to recover it. 00:38:28.423 [2024-10-09 11:18:48.314471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.423 [2024-10-09 11:18:48.314479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.423 qpair failed and we were unable to recover it. 
00:38:28.423 [2024-10-09 11:18:48.314824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.423 [2024-10-09 11:18:48.314831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.423 qpair failed and we were unable to recover it. 00:38:28.423 [2024-10-09 11:18:48.315006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.423 [2024-10-09 11:18:48.315013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.423 qpair failed and we were unable to recover it. 00:38:28.423 [2024-10-09 11:18:48.315174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.423 [2024-10-09 11:18:48.315181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.423 qpair failed and we were unable to recover it. 00:38:28.423 [2024-10-09 11:18:48.315250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.423 [2024-10-09 11:18:48.315258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.423 qpair failed and we were unable to recover it. 00:38:28.423 [2024-10-09 11:18:48.315399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.423 [2024-10-09 11:18:48.315406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.424 qpair failed and we were unable to recover it. 00:38:28.424 [2024-10-09 11:18:48.315577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.424 [2024-10-09 11:18:48.315585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.424 qpair failed and we were unable to recover it. 00:38:28.424 [2024-10-09 11:18:48.315794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.424 [2024-10-09 11:18:48.315801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.424 qpair failed and we were unable to recover it. 00:38:28.424 [2024-10-09 11:18:48.316180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.424 [2024-10-09 11:18:48.316188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.424 qpair failed and we were unable to recover it. 00:38:28.424 [2024-10-09 11:18:48.316390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.424 [2024-10-09 11:18:48.316397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.424 qpair failed and we were unable to recover it. 00:38:28.424 [2024-10-09 11:18:48.316705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.424 [2024-10-09 11:18:48.316713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.424 qpair failed and we were unable to recover it. 
00:38:28.424 [2024-10-09 11:18:48.316748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.424 [2024-10-09 11:18:48.316754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.424 qpair failed and we were unable to recover it. 00:38:28.424 [2024-10-09 11:18:48.317116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.424 [2024-10-09 11:18:48.317124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.424 qpair failed and we were unable to recover it. 00:38:28.424 [2024-10-09 11:18:48.317280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.424 [2024-10-09 11:18:48.317288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.425 qpair failed and we were unable to recover it. 00:38:28.425 [2024-10-09 11:18:48.317488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.425 [2024-10-09 11:18:48.317495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.425 qpair failed and we were unable to recover it. 00:38:28.425 [2024-10-09 11:18:48.317800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.425 [2024-10-09 11:18:48.317808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.425 qpair failed and we were unable to recover it. 00:38:28.425 [2024-10-09 11:18:48.318126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.425 [2024-10-09 11:18:48.318133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.425 qpair failed and we were unable to recover it. 00:38:28.425 [2024-10-09 11:18:48.318448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.425 [2024-10-09 11:18:48.318455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.425 qpair failed and we were unable to recover it. 00:38:28.425 [2024-10-09 11:18:48.318679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.425 [2024-10-09 11:18:48.318686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.425 qpair failed and we were unable to recover it. 00:38:28.425 [2024-10-09 11:18:48.318858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.425 [2024-10-09 11:18:48.318865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.425 qpair failed and we were unable to recover it. 00:38:28.425 [2024-10-09 11:18:48.319276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.425 [2024-10-09 11:18:48.319285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.425 qpair failed and we were unable to recover it. 
00:38:28.425 [2024-10-09 11:18:48.319471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.425 [2024-10-09 11:18:48.319479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.425 qpair failed and we were unable to recover it. 00:38:28.425 [2024-10-09 11:18:48.319652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.425 [2024-10-09 11:18:48.319659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.425 qpair failed and we were unable to recover it. 00:38:28.425 [2024-10-09 11:18:48.319979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.425 [2024-10-09 11:18:48.319987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.425 qpair failed and we were unable to recover it. 00:38:28.425 [2024-10-09 11:18:48.320188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.425 [2024-10-09 11:18:48.320195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.425 qpair failed and we were unable to recover it. 00:38:28.425 [2024-10-09 11:18:48.320347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.425 [2024-10-09 11:18:48.320355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.425 qpair failed and we were unable to recover it. 00:38:28.425 [2024-10-09 11:18:48.320696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.425 [2024-10-09 11:18:48.320706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.426 qpair failed and we were unable to recover it. 00:38:28.426 [2024-10-09 11:18:48.320960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.426 [2024-10-09 11:18:48.320967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.426 qpair failed and we were unable to recover it. 00:38:28.426 [2024-10-09 11:18:48.321287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.426 [2024-10-09 11:18:48.321293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.426 qpair failed and we were unable to recover it. 00:38:28.426 [2024-10-09 11:18:48.321480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.426 [2024-10-09 11:18:48.321487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.426 qpair failed and we were unable to recover it. 00:38:28.426 [2024-10-09 11:18:48.321783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.426 [2024-10-09 11:18:48.321797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.426 qpair failed and we were unable to recover it. 
00:38:28.426 [2024-10-09 11:18:48.322105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.426 [2024-10-09 11:18:48.322112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.426 qpair failed and we were unable to recover it. 00:38:28.426 [2024-10-09 11:18:48.322306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.426 [2024-10-09 11:18:48.322313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.426 qpair failed and we were unable to recover it. 00:38:28.426 [2024-10-09 11:18:48.322774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.426 [2024-10-09 11:18:48.322780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.426 qpair failed and we were unable to recover it. 00:38:28.426 [2024-10-09 11:18:48.322977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.426 [2024-10-09 11:18:48.322984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.426 qpair failed and we were unable to recover it. 00:38:28.426 [2024-10-09 11:18:48.323302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.426 [2024-10-09 11:18:48.323308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.426 qpair failed and we were unable to recover it. 00:38:28.426 [2024-10-09 11:18:48.323346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.426 [2024-10-09 11:18:48.323352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.426 qpair failed and we were unable to recover it. 00:38:28.426 [2024-10-09 11:18:48.323618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.426 [2024-10-09 11:18:48.323625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.426 qpair failed and we were unable to recover it. 00:38:28.426 [2024-10-09 11:18:48.323936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.426 [2024-10-09 11:18:48.323942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.426 qpair failed and we were unable to recover it. 00:38:28.426 [2024-10-09 11:18:48.324258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.426 [2024-10-09 11:18:48.324266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.426 qpair failed and we were unable to recover it. 00:38:28.426 [2024-10-09 11:18:48.324578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.426 [2024-10-09 11:18:48.324586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.426 qpair failed and we were unable to recover it. 
00:38:28.426 [2024-10-09 11:18:48.324751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.426 [2024-10-09 11:18:48.324758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.426 qpair failed and we were unable to recover it. 00:38:28.426 [2024-10-09 11:18:48.325058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.426 [2024-10-09 11:18:48.325065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.426 qpair failed and we were unable to recover it. 00:38:28.426 [2024-10-09 11:18:48.325223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.426 [2024-10-09 11:18:48.325230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.426 qpair failed and we were unable to recover it. 00:38:28.426 [2024-10-09 11:18:48.325614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.426 [2024-10-09 11:18:48.325621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.426 qpair failed and we were unable to recover it. 00:38:28.426 [2024-10-09 11:18:48.325935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.426 [2024-10-09 11:18:48.325942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.426 qpair failed and we were unable to recover it. 00:38:28.426 [2024-10-09 11:18:48.326227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.426 [2024-10-09 11:18:48.326234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.426 qpair failed and we were unable to recover it. 00:38:28.426 [2024-10-09 11:18:48.326410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.426 [2024-10-09 11:18:48.326416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.426 qpair failed and we were unable to recover it. 00:38:28.426 [2024-10-09 11:18:48.326622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.426 [2024-10-09 11:18:48.326629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.426 qpair failed and we were unable to recover it. 00:38:28.426 [2024-10-09 11:18:48.326928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.426 [2024-10-09 11:18:48.326934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.426 qpair failed and we were unable to recover it. 00:38:28.426 [2024-10-09 11:18:48.327138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.426 [2024-10-09 11:18:48.327145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.426 qpair failed and we were unable to recover it. 
00:38:28.426 [2024-10-09 11:18:48.327539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.426 [2024-10-09 11:18:48.327545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.426 qpair failed and we were unable to recover it. 00:38:28.426 [2024-10-09 11:18:48.327625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.426 [2024-10-09 11:18:48.327631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.426 qpair failed and we were unable to recover it. 00:38:28.426 [2024-10-09 11:18:48.327667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.426 [2024-10-09 11:18:48.327674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.426 qpair failed and we were unable to recover it. 00:38:28.426 [2024-10-09 11:18:48.327980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.426 [2024-10-09 11:18:48.327987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.426 qpair failed and we were unable to recover it. 00:38:28.426 [2024-10-09 11:18:48.328154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.426 [2024-10-09 11:18:48.328161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.426 qpair failed and we were unable to recover it. 00:38:28.426 [2024-10-09 11:18:48.328330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.426 [2024-10-09 11:18:48.328337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.426 qpair failed and we were unable to recover it. 00:38:28.426 [2024-10-09 11:18:48.328738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.426 [2024-10-09 11:18:48.328745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.426 qpair failed and we were unable to recover it. 00:38:28.426 [2024-10-09 11:18:48.328939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.426 [2024-10-09 11:18:48.328945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.426 qpair failed and we were unable to recover it. 00:38:28.426 [2024-10-09 11:18:48.329005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.427 [2024-10-09 11:18:48.329011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.427 qpair failed and we were unable to recover it. 00:38:28.427 [2024-10-09 11:18:48.329301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.427 [2024-10-09 11:18:48.329308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.427 qpair failed and we were unable to recover it. 
00:38:28.427 [2024-10-09 11:18:48.329503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.427 [2024-10-09 11:18:48.329510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.427 qpair failed and we were unable to recover it. 00:38:28.427 [2024-10-09 11:18:48.329907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.427 [2024-10-09 11:18:48.329914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.427 qpair failed and we were unable to recover it. 00:38:28.427 [2024-10-09 11:18:48.330233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.427 [2024-10-09 11:18:48.330240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.427 qpair failed and we were unable to recover it. 00:38:28.427 [2024-10-09 11:18:48.330560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.427 [2024-10-09 11:18:48.330567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.427 qpair failed and we were unable to recover it. 00:38:28.427 [2024-10-09 11:18:48.330742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.427 [2024-10-09 11:18:48.330748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.427 qpair failed and we were unable to recover it. 00:38:28.427 [2024-10-09 11:18:48.330920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.427 [2024-10-09 11:18:48.330929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.427 qpair failed and we were unable to recover it. 00:38:28.427 [2024-10-09 11:18:48.330972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.427 [2024-10-09 11:18:48.330979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.427 qpair failed and we were unable to recover it. 00:38:28.427 [2024-10-09 11:18:48.331286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.427 [2024-10-09 11:18:48.331293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.427 qpair failed and we were unable to recover it. 00:38:28.427 [2024-10-09 11:18:48.331602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.427 [2024-10-09 11:18:48.331609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.427 qpair failed and we were unable to recover it. 00:38:28.427 [2024-10-09 11:18:48.331968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.427 [2024-10-09 11:18:48.331974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.427 qpair failed and we were unable to recover it. 
00:38:28.427 [2024-10-09 11:18:48.332301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.427 [2024-10-09 11:18:48.332309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.427 qpair failed and we were unable to recover it. 00:38:28.427 [2024-10-09 11:18:48.332629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.427 [2024-10-09 11:18:48.332635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.427 qpair failed and we were unable to recover it. 00:38:28.427 [2024-10-09 11:18:48.332800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.427 [2024-10-09 11:18:48.332808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.427 qpair failed and we were unable to recover it. 00:38:28.427 [2024-10-09 11:18:48.333022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.427 [2024-10-09 11:18:48.333029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.427 qpair failed and we were unable to recover it. 00:38:28.427 [2024-10-09 11:18:48.333183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.427 [2024-10-09 11:18:48.333191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.427 qpair failed and we were unable to recover it. 00:38:28.427 [2024-10-09 11:18:48.333378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.427 [2024-10-09 11:18:48.333384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.427 qpair failed and we were unable to recover it. 00:38:28.427 [2024-10-09 11:18:48.333673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.427 [2024-10-09 11:18:48.333680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.427 qpair failed and we were unable to recover it. 00:38:28.427 [2024-10-09 11:18:48.334003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.427 [2024-10-09 11:18:48.334010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.427 qpair failed and we were unable to recover it. 00:38:28.427 [2024-10-09 11:18:48.334169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.427 [2024-10-09 11:18:48.334176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.427 qpair failed and we were unable to recover it. 00:38:28.427 [2024-10-09 11:18:48.334550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.427 [2024-10-09 11:18:48.334558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.427 qpair failed and we were unable to recover it. 
00:38:28.427 [2024-10-09 11:18:48.334895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.427 [2024-10-09 11:18:48.334902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.427 qpair failed and we were unable to recover it. 00:38:28.427 [2024-10-09 11:18:48.335198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.427 [2024-10-09 11:18:48.335205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.427 qpair failed and we were unable to recover it. 00:38:28.427 [2024-10-09 11:18:48.335500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.427 [2024-10-09 11:18:48.335507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.427 qpair failed and we were unable to recover it. 00:38:28.427 [2024-10-09 11:18:48.335832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.427 [2024-10-09 11:18:48.335838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.427 qpair failed and we were unable to recover it. 00:38:28.427 [2024-10-09 11:18:48.336169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.427 [2024-10-09 11:18:48.336176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.427 qpair failed and we were unable to recover it. 00:38:28.427 [2024-10-09 11:18:48.336531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.427 [2024-10-09 11:18:48.336538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.427 qpair failed and we were unable to recover it. 00:38:28.427 [2024-10-09 11:18:48.336816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.427 [2024-10-09 11:18:48.336822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.427 qpair failed and we were unable to recover it. 00:38:28.427 [2024-10-09 11:18:48.336892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.427 [2024-10-09 11:18:48.336898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.427 qpair failed and we were unable to recover it. 00:38:28.427 [2024-10-09 11:18:48.337230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.427 [2024-10-09 11:18:48.337237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.427 qpair failed and we were unable to recover it. 00:38:28.427 [2024-10-09 11:18:48.337536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.427 [2024-10-09 11:18:48.337543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.427 qpair failed and we were unable to recover it. 
00:38:28.427 [2024-10-09 11:18:48.337714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.427 [2024-10-09 11:18:48.337721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.427 qpair failed and we were unable to recover it. 00:38:28.427 [2024-10-09 11:18:48.338051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.427 [2024-10-09 11:18:48.338058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.427 qpair failed and we were unable to recover it. 00:38:28.427 [2024-10-09 11:18:48.338222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.427 [2024-10-09 11:18:48.338231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.427 qpair failed and we were unable to recover it. 00:38:28.427 [2024-10-09 11:18:48.338686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.427 [2024-10-09 11:18:48.338693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.427 qpair failed and we were unable to recover it. 00:38:28.427 [2024-10-09 11:18:48.338982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.427 [2024-10-09 11:18:48.338989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.427 qpair failed and we were unable to recover it. 00:38:28.427 [2024-10-09 11:18:48.339307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.427 [2024-10-09 11:18:48.339313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.427 qpair failed and we were unable to recover it. 00:38:28.427 [2024-10-09 11:18:48.339607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.427 [2024-10-09 11:18:48.339614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.427 qpair failed and we were unable to recover it. 00:38:28.427 [2024-10-09 11:18:48.339983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.427 [2024-10-09 11:18:48.339990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.427 qpair failed and we were unable to recover it. 00:38:28.427 [2024-10-09 11:18:48.340169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.427 [2024-10-09 11:18:48.340176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.427 qpair failed and we were unable to recover it. 00:38:28.427 [2024-10-09 11:18:48.340366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.427 [2024-10-09 11:18:48.340373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.427 qpair failed and we were unable to recover it. 
00:38:28.427 [2024-10-09 11:18:48.340680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.427 [2024-10-09 11:18:48.340687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.427 qpair failed and we were unable to recover it. 00:38:28.428 [2024-10-09 11:18:48.341033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.428 [2024-10-09 11:18:48.341040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.428 qpair failed and we were unable to recover it. 00:38:28.428 [2024-10-09 11:18:48.341352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.428 [2024-10-09 11:18:48.341359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.428 qpair failed and we were unable to recover it. 00:38:28.428 [2024-10-09 11:18:48.341682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.428 [2024-10-09 11:18:48.341689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.428 qpair failed and we were unable to recover it. 00:38:28.428 [2024-10-09 11:18:48.341873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.428 [2024-10-09 11:18:48.341879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.428 qpair failed and we were unable to recover it. 00:38:28.428 [2024-10-09 11:18:48.342034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.428 [2024-10-09 11:18:48.342041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.428 qpair failed and we were unable to recover it. 00:38:28.428 [2024-10-09 11:18:48.342217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.428 [2024-10-09 11:18:48.342224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.428 qpair failed and we were unable to recover it. 00:38:28.428 [2024-10-09 11:18:48.342506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.428 [2024-10-09 11:18:48.342514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.428 qpair failed and we were unable to recover it. 00:38:28.428 [2024-10-09 11:18:48.342691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.428 [2024-10-09 11:18:48.342698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.428 qpair failed and we were unable to recover it. 00:38:28.428 [2024-10-09 11:18:48.342865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.428 [2024-10-09 11:18:48.342872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.428 qpair failed and we were unable to recover it. 
00:38:28.428 [2024-10-09 11:18:48.343121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.428 [2024-10-09 11:18:48.343128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.428 qpair failed and we were unable to recover it. 00:38:28.428 [2024-10-09 11:18:48.343442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.428 [2024-10-09 11:18:48.343449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.428 qpair failed and we were unable to recover it. 00:38:28.428 [2024-10-09 11:18:48.343836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.428 [2024-10-09 11:18:48.343843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.428 qpair failed and we were unable to recover it. 00:38:28.428 [2024-10-09 11:18:48.343901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.428 [2024-10-09 11:18:48.343907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.428 qpair failed and we were unable to recover it. 00:38:28.428 [2024-10-09 11:18:48.344263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.428 [2024-10-09 11:18:48.344270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.428 qpair failed and we were unable to recover it. 00:38:28.428 [2024-10-09 11:18:48.344439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.428 [2024-10-09 11:18:48.344446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.428 qpair failed and we were unable to recover it. 00:38:28.428 [2024-10-09 11:18:48.344733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.428 [2024-10-09 11:18:48.344740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.428 qpair failed and we were unable to recover it. 00:38:28.428 [2024-10-09 11:18:48.345034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.428 [2024-10-09 11:18:48.345041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.428 qpair failed and we were unable to recover it. 00:38:28.428 [2024-10-09 11:18:48.345346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.428 [2024-10-09 11:18:48.345353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.428 qpair failed and we were unable to recover it. 00:38:28.428 [2024-10-09 11:18:48.345518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.428 [2024-10-09 11:18:48.345525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.428 qpair failed and we were unable to recover it. 
00:38:28.428 [2024-10-09 11:18:48.345765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.428 [2024-10-09 11:18:48.345772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.428 qpair failed and we were unable to recover it. 00:38:28.428 [2024-10-09 11:18:48.346131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.428 [2024-10-09 11:18:48.346139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.428 qpair failed and we were unable to recover it. 00:38:28.428 [2024-10-09 11:18:48.346448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.428 [2024-10-09 11:18:48.346455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.428 qpair failed and we were unable to recover it. 00:38:28.428 [2024-10-09 11:18:48.346636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.428 [2024-10-09 11:18:48.346644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.428 qpair failed and we were unable to recover it. 00:38:28.428 [2024-10-09 11:18:48.346959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.428 [2024-10-09 11:18:48.346966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.428 qpair failed and we were unable to recover it. 00:38:28.428 [2024-10-09 11:18:48.347123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.428 [2024-10-09 11:18:48.347130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.428 qpair failed and we were unable to recover it. 00:38:28.428 [2024-10-09 11:18:48.347404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.428 [2024-10-09 11:18:48.347412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.428 qpair failed and we were unable to recover it. 00:38:28.428 [2024-10-09 11:18:48.347622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.428 [2024-10-09 11:18:48.347630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.428 qpair failed and we were unable to recover it. 00:38:28.428 [2024-10-09 11:18:48.347935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.428 [2024-10-09 11:18:48.347942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.428 qpair failed and we were unable to recover it. 00:38:28.428 [2024-10-09 11:18:48.348253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.428 [2024-10-09 11:18:48.348260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.428 qpair failed and we were unable to recover it. 
00:38:28.428 [2024-10-09 11:18:48.348440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.428 [2024-10-09 11:18:48.348447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.428 qpair failed and we were unable to recover it. 00:38:28.428 [2024-10-09 11:18:48.348634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.428 [2024-10-09 11:18:48.348640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.428 qpair failed and we were unable to recover it. 00:38:28.428 [2024-10-09 11:18:48.348913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.428 [2024-10-09 11:18:48.348922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.428 qpair failed and we were unable to recover it. 00:38:28.428 [2024-10-09 11:18:48.349089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.428 [2024-10-09 11:18:48.349096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.428 qpair failed and we were unable to recover it. 00:38:28.428 [2024-10-09 11:18:48.349316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.428 [2024-10-09 11:18:48.349322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.428 qpair failed and we were unable to recover it. 00:38:28.428 [2024-10-09 11:18:48.349508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.428 [2024-10-09 11:18:48.349515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.428 qpair failed and we were unable to recover it. 00:38:28.428 [2024-10-09 11:18:48.349883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.428 [2024-10-09 11:18:48.349889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.428 qpair failed and we were unable to recover it. 00:38:28.428 [2024-10-09 11:18:48.350070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.428 [2024-10-09 11:18:48.350077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.428 qpair failed and we were unable to recover it. 00:38:28.428 [2024-10-09 11:18:48.350139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.428 [2024-10-09 11:18:48.350145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.428 qpair failed and we were unable to recover it. 00:38:28.696 [2024-10-09 11:18:48.350423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.696 [2024-10-09 11:18:48.350431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.696 qpair failed and we were unable to recover it. 
00:38:28.696 [2024-10-09 11:18:48.350745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.696 [2024-10-09 11:18:48.350754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.696 qpair failed and we were unable to recover it. 00:38:28.696 [2024-10-09 11:18:48.350950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.696 [2024-10-09 11:18:48.350957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.696 qpair failed and we were unable to recover it. 00:38:28.696 [2024-10-09 11:18:48.351287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.696 [2024-10-09 11:18:48.351295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.696 qpair failed and we were unable to recover it. 00:38:28.696 [2024-10-09 11:18:48.351456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.696 [2024-10-09 11:18:48.351468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.696 qpair failed and we were unable to recover it. 00:38:28.696 [2024-10-09 11:18:48.351747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.696 [2024-10-09 11:18:48.351755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.696 qpair failed and we were unable to recover it. 00:38:28.696 [2024-10-09 11:18:48.352069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.696 [2024-10-09 11:18:48.352077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.696 qpair failed and we were unable to recover it. 00:38:28.696 [2024-10-09 11:18:48.352273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.696 [2024-10-09 11:18:48.352280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.696 qpair failed and we were unable to recover it. 00:38:28.696 [2024-10-09 11:18:48.352475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.696 [2024-10-09 11:18:48.352483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.696 qpair failed and we were unable to recover it. 00:38:28.696 [2024-10-09 11:18:48.352670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.696 [2024-10-09 11:18:48.352677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.696 qpair failed and we were unable to recover it. 00:38:28.696 [2024-10-09 11:18:48.352882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.696 [2024-10-09 11:18:48.352888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.696 qpair failed and we were unable to recover it. 
00:38:28.696 [2024-10-09 11:18:48.353072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.696 [2024-10-09 11:18:48.353079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.697 qpair failed and we were unable to recover it. 00:38:28.697 [2024-10-09 11:18:48.353436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.697 [2024-10-09 11:18:48.353442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.697 qpair failed and we were unable to recover it. 00:38:28.697 [2024-10-09 11:18:48.353754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.697 [2024-10-09 11:18:48.353761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.697 qpair failed and we were unable to recover it. 00:38:28.697 [2024-10-09 11:18:48.353960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.697 [2024-10-09 11:18:48.353967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.697 qpair failed and we were unable to recover it. 00:38:28.697 [2024-10-09 11:18:48.354275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.697 [2024-10-09 11:18:48.354282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.697 qpair failed and we were unable to recover it. 00:38:28.697 [2024-10-09 11:18:48.354594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.697 [2024-10-09 11:18:48.354602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.697 qpair failed and we were unable to recover it. 00:38:28.697 [2024-10-09 11:18:48.354892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.697 [2024-10-09 11:18:48.354899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.697 qpair failed and we were unable to recover it. 00:38:28.697 [2024-10-09 11:18:48.355016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.697 [2024-10-09 11:18:48.355023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.697 qpair failed and we were unable to recover it. 00:38:28.697 [2024-10-09 11:18:48.355113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.697 [2024-10-09 11:18:48.355119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.697 qpair failed and we were unable to recover it. 00:38:28.697 [2024-10-09 11:18:48.355410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.697 [2024-10-09 11:18:48.355417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.697 qpair failed and we were unable to recover it. 
00:38:28.697 [2024-10-09 11:18:48.355718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.697 [2024-10-09 11:18:48.355725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.697 qpair failed and we were unable to recover it. 00:38:28.697 [2024-10-09 11:18:48.355889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.697 [2024-10-09 11:18:48.355895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.697 qpair failed and we were unable to recover it. 00:38:28.697 [2024-10-09 11:18:48.356170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.697 [2024-10-09 11:18:48.356178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.697 qpair failed and we were unable to recover it. 00:38:28.697 [2024-10-09 11:18:48.356372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.697 [2024-10-09 11:18:48.356379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.697 qpair failed and we were unable to recover it. 00:38:28.697 [2024-10-09 11:18:48.356528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.697 [2024-10-09 11:18:48.356535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.697 qpair failed and we were unable to recover it. 00:38:28.697 [2024-10-09 11:18:48.356802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.697 [2024-10-09 11:18:48.356809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.697 qpair failed and we were unable to recover it. 00:38:28.697 [2024-10-09 11:18:48.356977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.697 [2024-10-09 11:18:48.356983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.697 qpair failed and we were unable to recover it. 00:38:28.697 [2024-10-09 11:18:48.357025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.697 [2024-10-09 11:18:48.357032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.697 qpair failed and we were unable to recover it. 00:38:28.697 [2024-10-09 11:18:48.357315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.697 [2024-10-09 11:18:48.357322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.697 qpair failed and we were unable to recover it. 00:38:28.697 [2024-10-09 11:18:48.357635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.697 [2024-10-09 11:18:48.357641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.697 qpair failed and we were unable to recover it. 
00:38:28.697 11:18:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:28.697 [2024-10-09 11:18:48.357965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.697 [2024-10-09 11:18:48.357972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.697 qpair failed and we were unable to recover it. 00:38:28.697 11:18:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:38:28.697 [2024-10-09 11:18:48.358287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.697 [2024-10-09 11:18:48.358296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.697 qpair failed and we were unable to recover it. 00:38:28.697 11:18:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:38:28.697 [2024-10-09 11:18:48.358606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.697 [2024-10-09 11:18:48.358614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.697 qpair failed and we were unable to recover it. 00:38:28.697 11:18:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:28.697 [2024-10-09 11:18:48.358921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.697 [2024-10-09 11:18:48.358928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.697 qpair failed and we were unable to recover it. 00:38:28.697 [2024-10-09 11:18:48.358969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.697 [2024-10-09 11:18:48.358976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.697 qpair failed and we were unable to recover it. 00:38:28.697 11:18:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:28.697 [2024-10-09 11:18:48.359250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.697 [2024-10-09 11:18:48.359257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.697 qpair failed and we were unable to recover it. 00:38:28.697 [2024-10-09 11:18:48.359570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.697 [2024-10-09 11:18:48.359578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.697 qpair failed and we were unable to recover it. 00:38:28.697 [2024-10-09 11:18:48.359778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.697 [2024-10-09 11:18:48.359785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.697 qpair failed and we were unable to recover it. 
00:38:28.697 [2024-10-09 11:18:48.359973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.697 [2024-10-09 11:18:48.359982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.697 qpair failed and we were unable to recover it. 00:38:28.697 [2024-10-09 11:18:48.360280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.697 [2024-10-09 11:18:48.360288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.697 qpair failed and we were unable to recover it. 00:38:28.697 [2024-10-09 11:18:48.360599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.697 [2024-10-09 11:18:48.360607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.697 qpair failed and we were unable to recover it. 00:38:28.697 [2024-10-09 11:18:48.360908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.697 [2024-10-09 11:18:48.360916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.697 qpair failed and we were unable to recover it. 00:38:28.697 [2024-10-09 11:18:48.361214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.697 [2024-10-09 11:18:48.361221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.697 qpair failed and we were unable to recover it. 00:38:28.697 [2024-10-09 11:18:48.361510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.697 [2024-10-09 11:18:48.361520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.697 qpair failed and we were unable to recover it. 00:38:28.697 [2024-10-09 11:18:48.361702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.697 [2024-10-09 11:18:48.361709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.697 qpair failed and we were unable to recover it. 00:38:28.697 [2024-10-09 11:18:48.362041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.697 [2024-10-09 11:18:48.362050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.697 qpair failed and we were unable to recover it. 00:38:28.697 [2024-10-09 11:18:48.362219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.697 [2024-10-09 11:18:48.362226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.697 qpair failed and we were unable to recover it. 00:38:28.697 [2024-10-09 11:18:48.362524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.697 [2024-10-09 11:18:48.362531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.697 qpair failed and we were unable to recover it. 
00:38:28.697 [2024-10-09 11:18:48.362858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.697 [2024-10-09 11:18:48.362865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.697 qpair failed and we were unable to recover it. 00:38:28.698 [2024-10-09 11:18:48.363162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.698 [2024-10-09 11:18:48.363169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.698 qpair failed and we were unable to recover it. 00:38:28.698 [2024-10-09 11:18:48.363354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.698 [2024-10-09 11:18:48.363362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.698 qpair failed and we were unable to recover it. 00:38:28.698 [2024-10-09 11:18:48.363530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.698 [2024-10-09 11:18:48.363538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.698 qpair failed and we were unable to recover it. 00:38:28.698 [2024-10-09 11:18:48.363818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.698 [2024-10-09 11:18:48.363824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.698 qpair failed and we were unable to recover it. 00:38:28.698 [2024-10-09 11:18:48.364014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.698 [2024-10-09 11:18:48.364020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.698 qpair failed and we were unable to recover it. 00:38:28.698 [2024-10-09 11:18:48.364342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.698 [2024-10-09 11:18:48.364350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.698 qpair failed and we were unable to recover it. 00:38:28.698 [2024-10-09 11:18:48.364522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.698 [2024-10-09 11:18:48.364530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.698 qpair failed and we were unable to recover it. 00:38:28.698 [2024-10-09 11:18:48.364804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.698 [2024-10-09 11:18:48.364811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.698 qpair failed and we were unable to recover it. 00:38:28.698 [2024-10-09 11:18:48.364986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.698 [2024-10-09 11:18:48.364993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.698 qpair failed and we were unable to recover it. 
00:38:28.698 [2024-10-09 11:18:48.365339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.698 [2024-10-09 11:18:48.365347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.698 qpair failed and we were unable to recover it. 00:38:28.698 [2024-10-09 11:18:48.365639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.698 [2024-10-09 11:18:48.365646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.698 qpair failed and we were unable to recover it. 00:38:28.698 [2024-10-09 11:18:48.365968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.698 [2024-10-09 11:18:48.365975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.698 qpair failed and we were unable to recover it. 00:38:28.698 [2024-10-09 11:18:48.366278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.698 [2024-10-09 11:18:48.366285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.698 qpair failed and we were unable to recover it. 00:38:28.698 [2024-10-09 11:18:48.366609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.698 [2024-10-09 11:18:48.366616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.698 qpair failed and we were unable to recover it. 00:38:28.698 [2024-10-09 11:18:48.366785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.698 [2024-10-09 11:18:48.366792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.698 qpair failed and we were unable to recover it. 00:38:28.698 [2024-10-09 11:18:48.367139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.698 [2024-10-09 11:18:48.367147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.698 qpair failed and we were unable to recover it. 00:38:28.698 [2024-10-09 11:18:48.367462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.698 [2024-10-09 11:18:48.367473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.698 qpair failed and we were unable to recover it. 00:38:28.698 [2024-10-09 11:18:48.367767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.698 [2024-10-09 11:18:48.367774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.698 qpair failed and we were unable to recover it. 00:38:28.698 [2024-10-09 11:18:48.368063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.698 [2024-10-09 11:18:48.368070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.698 qpair failed and we were unable to recover it. 
00:38:28.698 [2024-10-09 11:18:48.368389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.698 [2024-10-09 11:18:48.368397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.698 qpair failed and we were unable to recover it. 00:38:28.698 [2024-10-09 11:18:48.368579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.698 [2024-10-09 11:18:48.368586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.698 qpair failed and we were unable to recover it. 00:38:28.698 [2024-10-09 11:18:48.368818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.698 [2024-10-09 11:18:48.368826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.698 qpair failed and we were unable to recover it. 00:38:28.698 [2024-10-09 11:18:48.369095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.698 [2024-10-09 11:18:48.369103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.698 qpair failed and we were unable to recover it. 00:38:28.698 [2024-10-09 11:18:48.369417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.698 [2024-10-09 11:18:48.369425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.698 qpair failed and we were unable to recover it. 00:38:28.698 [2024-10-09 11:18:48.369711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.698 [2024-10-09 11:18:48.369720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.698 qpair failed and we were unable to recover it. 00:38:28.698 [2024-10-09 11:18:48.370033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.698 [2024-10-09 11:18:48.370040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.698 qpair failed and we were unable to recover it. 00:38:28.698 [2024-10-09 11:18:48.370353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.698 [2024-10-09 11:18:48.370361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.698 qpair failed and we were unable to recover it. 00:38:28.698 [2024-10-09 11:18:48.370654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.698 [2024-10-09 11:18:48.370663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.698 qpair failed and we were unable to recover it. 00:38:28.698 [2024-10-09 11:18:48.370957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.698 [2024-10-09 11:18:48.370966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.698 qpair failed and we were unable to recover it. 
00:38:28.698 [2024-10-09 11:18:48.371151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.698 [2024-10-09 11:18:48.371159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.698 qpair failed and we were unable to recover it. 00:38:28.698 [2024-10-09 11:18:48.371469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.698 [2024-10-09 11:18:48.371478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.698 qpair failed and we were unable to recover it. 00:38:28.698 [2024-10-09 11:18:48.371767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.698 [2024-10-09 11:18:48.371775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.698 qpair failed and we were unable to recover it. 00:38:28.698 [2024-10-09 11:18:48.372127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.698 [2024-10-09 11:18:48.372135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.698 qpair failed and we were unable to recover it. 00:38:28.698 [2024-10-09 11:18:48.372312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.698 [2024-10-09 11:18:48.372319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.698 qpair failed and we were unable to recover it. 00:38:28.698 [2024-10-09 11:18:48.372603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.698 [2024-10-09 11:18:48.372610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.698 qpair failed and we were unable to recover it. 00:38:28.698 [2024-10-09 11:18:48.372914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.698 [2024-10-09 11:18:48.372923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.698 qpair failed and we were unable to recover it. 00:38:28.698 [2024-10-09 11:18:48.373226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.698 [2024-10-09 11:18:48.373233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.698 qpair failed and we were unable to recover it. 00:38:28.698 [2024-10-09 11:18:48.373532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.698 [2024-10-09 11:18:48.373540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.698 qpair failed and we were unable to recover it. 00:38:28.698 [2024-10-09 11:18:48.373718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.698 [2024-10-09 11:18:48.373725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.698 qpair failed and we were unable to recover it. 
00:38:28.698 [2024-10-09 11:18:48.374027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.698 [2024-10-09 11:18:48.374034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.698 qpair failed and we were unable to recover it. 00:38:28.699 [2024-10-09 11:18:48.374309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.699 [2024-10-09 11:18:48.374317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.699 qpair failed and we were unable to recover it. 00:38:28.699 [2024-10-09 11:18:48.374486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.699 [2024-10-09 11:18:48.374493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.699 qpair failed and we were unable to recover it. 00:38:28.699 [2024-10-09 11:18:48.374838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.699 [2024-10-09 11:18:48.374845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.699 qpair failed and we were unable to recover it. 00:38:28.699 [2024-10-09 11:18:48.375026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.699 [2024-10-09 11:18:48.375033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.699 qpair failed and we were unable to recover it. 00:38:28.699 [2024-10-09 11:18:48.375322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.699 [2024-10-09 11:18:48.375329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.699 qpair failed and we were unable to recover it. 00:38:28.699 [2024-10-09 11:18:48.375708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.699 [2024-10-09 11:18:48.375715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.699 qpair failed and we were unable to recover it. 00:38:28.699 [2024-10-09 11:18:48.376090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.699 [2024-10-09 11:18:48.376097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.699 qpair failed and we were unable to recover it. 00:38:28.699 [2024-10-09 11:18:48.376413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.699 [2024-10-09 11:18:48.376420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.699 qpair failed and we were unable to recover it. 00:38:28.699 [2024-10-09 11:18:48.376601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.699 [2024-10-09 11:18:48.376609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.699 qpair failed and we were unable to recover it. 
00:38:28.699 [2024-10-09 11:18:48.376915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.699 [2024-10-09 11:18:48.376924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.699 qpair failed and we were unable to recover it. 00:38:28.699 [2024-10-09 11:18:48.377286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.699 [2024-10-09 11:18:48.377293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.699 qpair failed and we were unable to recover it. 00:38:28.699 [2024-10-09 11:18:48.377617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.699 [2024-10-09 11:18:48.377625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.699 qpair failed and we were unable to recover it. 00:38:28.699 [2024-10-09 11:18:48.377831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.699 [2024-10-09 11:18:48.377837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.699 qpair failed and we were unable to recover it. 00:38:28.699 [2024-10-09 11:18:48.378020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.699 [2024-10-09 11:18:48.378027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.699 qpair failed and we were unable to recover it. 00:38:28.699 [2024-10-09 11:18:48.378336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.699 [2024-10-09 11:18:48.378346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.699 qpair failed and we were unable to recover it. 00:38:28.699 [2024-10-09 11:18:48.378518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.699 [2024-10-09 11:18:48.378525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.699 qpair failed and we were unable to recover it. 00:38:28.699 [2024-10-09 11:18:48.378559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.699 [2024-10-09 11:18:48.378566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.699 qpair failed and we were unable to recover it. 00:38:28.699 [2024-10-09 11:18:48.378854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.699 [2024-10-09 11:18:48.378861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.699 qpair failed and we were unable to recover it. 00:38:28.699 [2024-10-09 11:18:48.379176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.699 [2024-10-09 11:18:48.379183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 00:38:28.699 qpair failed and we were unable to recover it. 
00:38:28.699 [2024-10-09 11:18:48.379250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:28.699 [2024-10-09 11:18:48.379256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420
00:38:28.699 qpair failed and we were unable to recover it.
[... the same three-record failure (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it.") repeats back-to-back from 11:18:48.379593 through 11:18:48.397302; identical repetitions elided ...]
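The errno = 111 in the records above is Linux ECONNREFUSED: the host-side NVMe/TCP initiator keeps retrying connect() against 10.0.0.2:4420 while nothing is accepting connections there yet, which is consistent with what this target-disconnect test case is meant to provoke. A quick way to decode the value on any Linux box with python3 on PATH (a side note, not part of the test run):

    $ python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
    ECONNREFUSED - Connection refused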
00:38:28.701 [2024-10-09 11:18:48.397590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:28.701 [2024-10-09 11:18:48.397599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420
00:38:28.701 qpair failed and we were unable to recover it.
00:38:28.701 11:18:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:38:28.701 11:18:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:38:28.701 11:18:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:38:28.701 11:18:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... connect-failure records for tqpair=0x7f9f10000b90 continue interleaved with the shell trace above, 11:18:48.397801 through 11:18:48.399311; identical repetitions elided ...]
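The shell-trace lines above are the substance of this stretch: the test installs its cleanup trap, then creates the backing bdev over JSON-RPC (the bare "Malloc0" a little further down is the RPC's reply echoing the new bdev's name). A rough sketch of the same two steps, assuming SPDK's stock scripts/rpc.py and an already-running target app; process_shm and nvmftestfini are helpers from the suite's common scripts, not standalone commands:

    # On exit, dump/clean the app's shared memory and tear the test down,
    # as the trap in the trace above does.
    trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT

    # Create a 64 MB RAM-backed bdev with a 512-byte block size, named Malloc0.
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0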
[... the connect() failed, errno = 111 / tqpair=0x7f9f10000b90 failure record repeats continuously from 11:18:48.399632 through 11:18:48.418606; identical repetitions elided ...]
00:38:28.703 [2024-10-09 11:18:48.418790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:28.703 [2024-10-09 11:18:48.418797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f10000b90 with addr=10.0.0.2, port=4420
00:38:28.703 qpair failed and we were unable to recover it.
00:38:28.703 [2024-10-09 11:18:48.419149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:28.703 [2024-10-09 11:18:48.419188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420
00:38:28.703 qpair failed and we were unable to recover it.
[... the failing qpair switches here from tqpair=0x7f9f10000b90 to tqpair=0x1520360; the same failure record then repeats from 11:18:48.419530 through 11:18:48.424533; identical repetitions elided ...]
[... connect-failure records for tqpair=0x1520360 continue, 11:18:48.424947 through 11:18:48.425188 ...]
00:38:28.703 Malloc0
00:38:28.704 11:18:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:38:28.704 11:18:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:38:28.704 11:18:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:38:28.704 11:18:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... connect-failure records interleaved with the trace above, 11:18:48.425272 through 11:18:48.428870; identical repetitions elided ...]
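With Malloc0 in place, the test turns the app into an NVMe-oF target over TCP: rpc_cmd nvmf_create_transport -t tcp -o creates the TCP transport. The -o comes from the suite's transport options; I have not verified which rpc.py knob it maps to in this SPDK revision, so the sketch below omits it. The minimal equivalent with stock rpc.py:

    # Create the NVMe-oF TCP transport on the running target. Subsystems and
    # listeners still have to be added before the host's connect() can succeed.
    scripts/rpc.py nvmf_create_transport -t tcp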
00:38:28.704 [2024-10-09 11:18:48.429177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.704 [2024-10-09 11:18:48.429187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:28.704 qpair failed and we were unable to recover it. 00:38:28.704 [2024-10-09 11:18:48.429297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.704 [2024-10-09 11:18:48.429308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:28.704 qpair failed and we were unable to recover it. 00:38:28.704 [2024-10-09 11:18:48.429456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.704 [2024-10-09 11:18:48.429471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:28.704 qpair failed and we were unable to recover it. 00:38:28.704 [2024-10-09 11:18:48.429639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.704 [2024-10-09 11:18:48.429649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:28.704 qpair failed and we were unable to recover it. 00:38:28.704 [2024-10-09 11:18:48.429729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.704 [2024-10-09 11:18:48.429738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:28.704 qpair failed and we were unable to recover it. 00:38:28.704 [2024-10-09 11:18:48.429939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.704 [2024-10-09 11:18:48.429949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:28.704 qpair failed and we were unable to recover it. 00:38:28.704 [2024-10-09 11:18:48.430264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.704 [2024-10-09 11:18:48.430274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:28.704 qpair failed and we were unable to recover it. 00:38:28.704 [2024-10-09 11:18:48.430570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.704 [2024-10-09 11:18:48.430580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:28.704 qpair failed and we were unable to recover it. 00:38:28.704 [2024-10-09 11:18:48.430995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.704 [2024-10-09 11:18:48.431004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:28.704 qpair failed and we were unable to recover it. 00:38:28.704 [2024-10-09 11:18:48.431302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.704 [2024-10-09 11:18:48.431313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:28.704 qpair failed and we were unable to recover it. 
00:38:28.704 [2024-10-09 11:18:48.431627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.704 [2024-10-09 11:18:48.431637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:28.704 qpair failed and we were unable to recover it. 00:38:28.704 [2024-10-09 11:18:48.431948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.704 [2024-10-09 11:18:48.431957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:28.704 qpair failed and we were unable to recover it. 00:38:28.704 [2024-10-09 11:18:48.432270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.704 [2024-10-09 11:18:48.432281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:28.704 qpair failed and we were unable to recover it. 00:38:28.704 [2024-10-09 11:18:48.432462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.704 [2024-10-09 11:18:48.432478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:28.704 qpair failed and we were unable to recover it. 00:38:28.704 [2024-10-09 11:18:48.432616] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:28.704 [2024-10-09 11:18:48.432767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.704 [2024-10-09 11:18:48.432776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:28.704 qpair failed and we were unable to recover it. 00:38:28.704 [2024-10-09 11:18:48.433105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.704 [2024-10-09 11:18:48.433115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:28.704 qpair failed and we were unable to recover it. 00:38:28.704 [2024-10-09 11:18:48.433405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.704 [2024-10-09 11:18:48.433415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:28.704 qpair failed and we were unable to recover it. 00:38:28.704 [2024-10-09 11:18:48.433720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.704 [2024-10-09 11:18:48.433731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:28.704 qpair failed and we were unable to recover it. 00:38:28.704 [2024-10-09 11:18:48.433913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.704 [2024-10-09 11:18:48.433923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:28.704 qpair failed and we were unable to recover it. 00:38:28.704 [2024-10-09 11:18:48.434246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:28.704 [2024-10-09 11:18:48.434256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1520360 with addr=10.0.0.2, port=4420 00:38:28.704 qpair failed and we were unable to recover it. 
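
The *** TCP Transport Init *** notice is the target acting on the nvmf_create_transport RPC traced just above. Replayed by hand against a running spdk_tgt, the step would look roughly like this (a sketch; assumes an SPDK checkout with scripts/rpc.py and the default /var/tmp/spdk.sock RPC socket; -o is reproduced verbatim from the trace and, if memory of rpc.py's flags serves, disables the TCP C2H success optimization):

    # Create the NVMe-oF TCP transport, mirroring the flags in the trace above
    ./scripts/rpc.py nvmf_create_transport -t tcp -o
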
00:38:28.704 [... connect() failed, errno = 111 retries continue while the target is configured; duplicates omitted ...]
00:38:28.705 11:18:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:38:28.705 11:18:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:38:28.705 11:18:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:38:28.705 11:18:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:38:28.705 [... connect() failed, errno = 111 retries continue; duplicates omitted ...]
00:38:28.706 11:18:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:38:28.706 11:18:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:38:28.706 11:18:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:38:28.706 11:18:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
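
The two RPCs traced above give the target something to serve: a subsystem, plus a namespace backed by the Malloc0 bdev (created earlier in the run; the bare "Malloc0" line near the top of this section is rpc_cmd echoing the new bdev's name). A hand-run equivalent, as a sketch under the assumption that Malloc0 already exists (e.g. from a prior bdev_malloc_create):

    # -a: allow any host NQN to connect; -s: set the serial number
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    # Attach the Malloc0 bdev to the subsystem as a namespace
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
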
00:38:28.706 [... connect() failed, errno = 111 retries continue; duplicates omitted ...]
00:38:28.707 11:18:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:38:28.707 11:18:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:38:28.707 11:18:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:38:28.707 11:18:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
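
Adding the listener is the step that finally opens 10.0.0.2:4420; every refused connect() above was racing it. The hand-run equivalent (a sketch, same rpc.py assumptions as above):

    # Start accepting NVMe/TCP connections for the subsystem on 10.0.0.2:4420
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
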
00:38:28.708 [... connect() failed, errno = 111 retries continue right up to the listener coming online; duplicates omitted ...]
00:38:28.708 [2024-10-09 11:18:48.472814] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:38:28.708 11:18:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:38:28.708 11:18:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:38:28.708 11:18:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:38:28.708 11:18:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:38:28.708 [2024-10-09 11:18:48.483477] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:28.708 [2024-10-09 11:18:48.483548] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:28.708 [2024-10-09 11:18:48.483566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:28.708 [2024-10-09 11:18:48.483574] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:28.708 [2024-10-09 11:18:48.483580] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:28.708 [2024-10-09 11:18:48.483599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:28.708 qpair failed and we were unable to recover it.
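
Note the failure mode changes here: with the port open, the TCP connect succeeds, but the NVMe-oF Fabrics CONNECT for the I/O qpair is rejected. The target-side "Unknown controller ID 0x1" explains why: the host is trying to attach an I/O qpair to a controller association the target no longer knows, which is exactly the disconnect this test provokes. Decoding the completion status (sct 1 is the command-specific status type; 130 decimal is 0x82, which in the Fabrics CONNECT status table, if I read the spec right, is "Connect Invalid Parameters"):

    # sc is logged in decimal; convert to the hex value the NVMe-oF spec uses
    printf 'sc %d = 0x%02x\n' 130 130    # -> sc 130 = 0x82
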
00:38:28.708 11:18:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:38:28.708 11:18:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2132766
00:38:28.708 [2024-10-09 11:18:48.493257] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:28.708 [2024-10-09 11:18:48.493320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:28.708 [2024-10-09 11:18:48.493334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:28.708 [2024-10-09 11:18:48.493342] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:28.708 [2024-10-09 11:18:48.493348] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:28.708 [2024-10-09 11:18:48.493362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:28.708 qpair failed and we were unable to recover it.
00:38:28.708 [2024-10-09 11:18:48.503344] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:28.708 [2024-10-09 11:18:48.503407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:28.708 [2024-10-09 11:18:48.503422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:28.708 [2024-10-09 11:18:48.503429] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:28.708 [2024-10-09 11:18:48.503436] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:28.708 [2024-10-09 11:18:48.503449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:28.708 qpair failed and we were unable to recover it.
00:38:28.708 [2024-10-09 11:18:48.513362] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:28.708 [2024-10-09 11:18:48.513424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:28.708 [2024-10-09 11:18:48.513439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:28.708 [2024-10-09 11:18:48.513446] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:28.708 [2024-10-09 11:18:48.513452] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:28.708 [2024-10-09 11:18:48.513470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:28.708 qpair failed and we were unable to recover it.
00:38:28.708 [2024-10-09 11:18:48.523298] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:28.708 [2024-10-09 11:18:48.523360] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:28.708 [2024-10-09 11:18:48.523373] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:28.708 [2024-10-09 11:18:48.523380] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:28.708 [2024-10-09 11:18:48.523386] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:28.708 [2024-10-09 11:18:48.523400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:28.708 qpair failed and we were unable to recover it.
00:38:28.708 [2024-10-09 11:18:48.533271] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:28.708 [2024-10-09 11:18:48.533325] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:28.708 [2024-10-09 11:18:48.533340] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:28.709 [2024-10-09 11:18:48.533347] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:28.709 [2024-10-09 11:18:48.533353] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:28.709 [2024-10-09 11:18:48.533367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:28.709 qpair failed and we were unable to recover it.
00:38:28.709 [2024-10-09 11:18:48.543271] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:28.709 [2024-10-09 11:18:48.543323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:28.709 [2024-10-09 11:18:48.543337] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:28.709 [2024-10-09 11:18:48.543345] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:28.709 [2024-10-09 11:18:48.543351] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:28.709 [2024-10-09 11:18:48.543365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:28.709 qpair failed and we were unable to recover it.
00:38:28.709 [2024-10-09 11:18:48.553292] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:28.709 [2024-10-09 11:18:48.553351] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:28.709 [2024-10-09 11:18:48.553365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:28.709 [2024-10-09 11:18:48.553377] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:28.709 [2024-10-09 11:18:48.553385] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:28.709 [2024-10-09 11:18:48.553401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:28.709 qpair failed and we were unable to recover it.
00:38:28.709 [2024-10-09 11:18:48.563323] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:28.709 [2024-10-09 11:18:48.563381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:28.709 [2024-10-09 11:18:48.563396] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:28.709 [2024-10-09 11:18:48.563405] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:28.709 [2024-10-09 11:18:48.563413] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:28.709 [2024-10-09 11:18:48.563427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:28.709 qpair failed and we were unable to recover it.
00:38:28.709 [2024-10-09 11:18:48.573292] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:28.709 [2024-10-09 11:18:48.573342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:28.709 [2024-10-09 11:18:48.573355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:28.709 [2024-10-09 11:18:48.573363] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:28.709 [2024-10-09 11:18:48.573369] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:28.709 [2024-10-09 11:18:48.573382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:28.709 qpair failed and we were unable to recover it.
00:38:28.709 [2024-10-09 11:18:48.583299] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:28.709 [2024-10-09 11:18:48.583351] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:28.709 [2024-10-09 11:18:48.583365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:28.709 [2024-10-09 11:18:48.583372] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:28.709 [2024-10-09 11:18:48.583378] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:28.709 [2024-10-09 11:18:48.583392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:28.709 qpair failed and we were unable to recover it.
00:38:28.709 [2024-10-09 11:18:48.593214] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:28.709 [2024-10-09 11:18:48.593269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:28.709 [2024-10-09 11:18:48.593282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:28.709 [2024-10-09 11:18:48.593289] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:28.709 [2024-10-09 11:18:48.593295] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:28.709 [2024-10-09 11:18:48.593309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:28.709 qpair failed and we were unable to recover it.
00:38:28.709 [2024-10-09 11:18:48.603305] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:28.709 [2024-10-09 11:18:48.603357] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:28.709 [2024-10-09 11:18:48.603371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:28.709 [2024-10-09 11:18:48.603377] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:28.709 [2024-10-09 11:18:48.603384] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:28.709 [2024-10-09 11:18:48.603396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:28.709 qpair failed and we were unable to recover it.
00:38:28.709 [2024-10-09 11:18:48.613270] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:28.709 [2024-10-09 11:18:48.613326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:28.709 [2024-10-09 11:18:48.613340] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:28.709 [2024-10-09 11:18:48.613347] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:28.709 [2024-10-09 11:18:48.613353] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:28.709 [2024-10-09 11:18:48.613367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:28.709 qpair failed and we were unable to recover it.
00:38:28.709 [2024-10-09 11:18:48.623333] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:28.709 [2024-10-09 11:18:48.623407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:28.709 [2024-10-09 11:18:48.623420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:28.709 [2024-10-09 11:18:48.623427] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:28.709 [2024-10-09 11:18:48.623433] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:28.709 [2024-10-09 11:18:48.623446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:28.709 qpair failed and we were unable to recover it.
00:38:28.709 [2024-10-09 11:18:48.633304] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:28.709 [2024-10-09 11:18:48.633363] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:28.709 [2024-10-09 11:18:48.633376] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:28.709 [2024-10-09 11:18:48.633382] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:28.709 [2024-10-09 11:18:48.633389] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:28.709 [2024-10-09 11:18:48.633402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:28.709 qpair failed and we were unable to recover it.
00:38:28.709 [2024-10-09 11:18:48.643254] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:28.709 [2024-10-09 11:18:48.643350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:28.709 [2024-10-09 11:18:48.643367] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:28.709 [2024-10-09 11:18:48.643374] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:28.709 [2024-10-09 11:18:48.643381] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:28.709 [2024-10-09 11:18:48.643394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:28.709 qpair failed and we were unable to recover it.
00:38:28.709 [2024-10-09 11:18:48.653312] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:28.709 [2024-10-09 11:18:48.653365] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:28.709 [2024-10-09 11:18:48.653378] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:28.709 [2024-10-09 11:18:48.653385] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:28.709 [2024-10-09 11:18:48.653392] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:28.709 [2024-10-09 11:18:48.653405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:28.709 qpair failed and we were unable to recover it.
00:38:28.709 [2024-10-09 11:18:48.663208] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:28.709 [2024-10-09 11:18:48.663263] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:28.710 [2024-10-09 11:18:48.663277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:28.710 [2024-10-09 11:18:48.663284] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:28.710 [2024-10-09 11:18:48.663291] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:28.710 [2024-10-09 11:18:48.663304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:28.710 qpair failed and we were unable to recover it.
00:38:28.710 [2024-10-09 11:18:48.673333] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:28.710 [2024-10-09 11:18:48.673390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:28.710 [2024-10-09 11:18:48.673403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:28.710 [2024-10-09 11:18:48.673410] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:28.710 [2024-10-09 11:18:48.673416] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:28.710 [2024-10-09 11:18:48.673429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:28.710 qpair failed and we were unable to recover it.
00:38:28.710 [2024-10-09 11:18:48.683345] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:28.710 [2024-10-09 11:18:48.683400] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:28.710 [2024-10-09 11:18:48.683414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:28.710 [2024-10-09 11:18:48.683421] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:28.710 [2024-10-09 11:18:48.683427] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:28.710 [2024-10-09 11:18:48.683440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:28.710 qpair failed and we were unable to recover it.
00:38:28.971 [2024-10-09 11:18:48.693202] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:28.971 [2024-10-09 11:18:48.693263] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:28.971 [2024-10-09 11:18:48.693276] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:28.971 [2024-10-09 11:18:48.693283] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:28.971 [2024-10-09 11:18:48.693290] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:28.971 [2024-10-09 11:18:48.693303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:28.971 qpair failed and we were unable to recover it.
00:38:28.971 [2024-10-09 11:18:48.703337] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:28.971 [2024-10-09 11:18:48.703388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:28.971 [2024-10-09 11:18:48.703402] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:28.971 [2024-10-09 11:18:48.703409] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:28.971 [2024-10-09 11:18:48.703415] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:28.971 [2024-10-09 11:18:48.703429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:28.971 qpair failed and we were unable to recover it.
00:38:28.971 [2024-10-09 11:18:48.713348] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:28.971 [2024-10-09 11:18:48.713407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:28.971 [2024-10-09 11:18:48.713421] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:28.971 [2024-10-09 11:18:48.713428] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:28.971 [2024-10-09 11:18:48.713434] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:28.971 [2024-10-09 11:18:48.713448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:28.971 qpair failed and we were unable to recover it.
00:38:28.971 [2024-10-09 11:18:48.723345] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:28.971 [2024-10-09 11:18:48.723404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:28.971 [2024-10-09 11:18:48.723417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:28.971 [2024-10-09 11:18:48.723424] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:28.971 [2024-10-09 11:18:48.723431] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:28.971 [2024-10-09 11:18:48.723444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:28.971 qpair failed and we were unable to recover it.
00:38:28.971 [2024-10-09 11:18:48.733328] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:28.971 [2024-10-09 11:18:48.733380] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:28.971 [2024-10-09 11:18:48.733396] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:28.971 [2024-10-09 11:18:48.733403] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:28.971 [2024-10-09 11:18:48.733409] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:28.971 [2024-10-09 11:18:48.733422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:28.971 qpair failed and we were unable to recover it.
00:38:28.971 [2024-10-09 11:18:48.743335] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:28.971 [2024-10-09 11:18:48.743397] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:28.971 [2024-10-09 11:18:48.743412] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:28.971 [2024-10-09 11:18:48.743419] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:28.971 [2024-10-09 11:18:48.743425] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:28.971 [2024-10-09 11:18:48.743439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:28.971 qpair failed and we were unable to recover it.
00:38:28.971 [2024-10-09 11:18:48.753350] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:28.971 [2024-10-09 11:18:48.753403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:28.971 [2024-10-09 11:18:48.753417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:28.971 [2024-10-09 11:18:48.753424] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:28.971 [2024-10-09 11:18:48.753431] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:28.971 [2024-10-09 11:18:48.753444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:28.971 qpair failed and we were unable to recover it.
00:38:28.971 [2024-10-09 11:18:48.763354] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:28.971 [2024-10-09 11:18:48.763431] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:28.971 [2024-10-09 11:18:48.763445] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:28.971 [2024-10-09 11:18:48.763452] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:28.971 [2024-10-09 11:18:48.763459] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:28.972 [2024-10-09 11:18:48.763477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:28.972 qpair failed and we were unable to recover it.
00:38:28.972 [2024-10-09 11:18:48.773355] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:28.972 [2024-10-09 11:18:48.773414] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:28.972 [2024-10-09 11:18:48.773428] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:28.972 [2024-10-09 11:18:48.773435] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:28.972 [2024-10-09 11:18:48.773441] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:28.972 [2024-10-09 11:18:48.773458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:28.972 qpair failed and we were unable to recover it.
00:38:28.972 [2024-10-09 11:18:48.783364] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:28.972 [2024-10-09 11:18:48.783414] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:28.972 [2024-10-09 11:18:48.783427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:28.972 [2024-10-09 11:18:48.783434] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:28.972 [2024-10-09 11:18:48.783441] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:28.972 [2024-10-09 11:18:48.783453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:28.972 qpair failed and we were unable to recover it.
00:38:28.972 [2024-10-09 11:18:48.793426] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:28.972 [2024-10-09 11:18:48.793485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:28.972 [2024-10-09 11:18:48.793499] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:28.972 [2024-10-09 11:18:48.793505] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:28.972 [2024-10-09 11:18:48.793512] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:28.972 [2024-10-09 11:18:48.793525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:28.972 qpair failed and we were unable to recover it.
00:38:28.972 [2024-10-09 11:18:48.803380] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:28.972 [2024-10-09 11:18:48.803441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:28.972 [2024-10-09 11:18:48.803454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:28.972 [2024-10-09 11:18:48.803461] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:28.972 [2024-10-09 11:18:48.803471] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:28.972 [2024-10-09 11:18:48.803484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:28.972 qpair failed and we were unable to recover it.
00:38:28.972 [2024-10-09 11:18:48.813359] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:28.972 [2024-10-09 11:18:48.813411] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:28.972 [2024-10-09 11:18:48.813425] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:28.972 [2024-10-09 11:18:48.813432] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:28.972 [2024-10-09 11:18:48.813438] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:28.972 [2024-10-09 11:18:48.813451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:28.972 qpair failed and we were unable to recover it.
00:38:28.972 [2024-10-09 11:18:48.823381] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:28.972 [2024-10-09 11:18:48.823434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:28.972 [2024-10-09 11:18:48.823452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:28.972 [2024-10-09 11:18:48.823459] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:28.972 [2024-10-09 11:18:48.823469] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:28.972 [2024-10-09 11:18:48.823483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:28.972 qpair failed and we were unable to recover it.
00:38:28.972 [2024-10-09 11:18:48.833360] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:28.972 [2024-10-09 11:18:48.833415] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:28.972 [2024-10-09 11:18:48.833429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:28.972 [2024-10-09 11:18:48.833436] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:28.972 [2024-10-09 11:18:48.833443] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:28.972 [2024-10-09 11:18:48.833456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:28.972 qpair failed and we were unable to recover it.
00:38:28.972 [2024-10-09 11:18:48.843402] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:28.972 [2024-10-09 11:18:48.843457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:28.972 [2024-10-09 11:18:48.843474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:28.972 [2024-10-09 11:18:48.843481] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:28.972 [2024-10-09 11:18:48.843488] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:28.972 [2024-10-09 11:18:48.843501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:28.972 qpair failed and we were unable to recover it.
00:38:28.972 [2024-10-09 11:18:48.853393] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:28.972 [2024-10-09 11:18:48.853447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:28.972 [2024-10-09 11:18:48.853461] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:28.972 [2024-10-09 11:18:48.853472] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:28.972 [2024-10-09 11:18:48.853478] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:28.972 [2024-10-09 11:18:48.853492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:28.972 qpair failed and we were unable to recover it.
00:38:28.972 [2024-10-09 11:18:48.863409] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:28.972 [2024-10-09 11:18:48.863462] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:28.972 [2024-10-09 11:18:48.863480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:28.972 [2024-10-09 11:18:48.863488] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:28.972 [2024-10-09 11:18:48.863495] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:28.972 [2024-10-09 11:18:48.863512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:28.972 qpair failed and we were unable to recover it.
00:38:28.972 [2024-10-09 11:18:48.873471] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:28.972 [2024-10-09 11:18:48.873583] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:28.972 [2024-10-09 11:18:48.873597] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:28.972 [2024-10-09 11:18:48.873605] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:28.972 [2024-10-09 11:18:48.873612] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:28.972 [2024-10-09 11:18:48.873625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:28.972 qpair failed and we were unable to recover it.
00:38:28.972 [2024-10-09 11:18:48.883400] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:28.972 [2024-10-09 11:18:48.883470] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:28.972 [2024-10-09 11:18:48.883483] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:28.972 [2024-10-09 11:18:48.883490] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:28.972 [2024-10-09 11:18:48.883497] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:28.972 [2024-10-09 11:18:48.883510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:28.972 qpair failed and we were unable to recover it.
00:38:28.972 [2024-10-09 11:18:48.893374] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:28.972 [2024-10-09 11:18:48.893430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:28.972 [2024-10-09 11:18:48.893443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:28.972 [2024-10-09 11:18:48.893450] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:28.972 [2024-10-09 11:18:48.893456] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:28.972 [2024-10-09 11:18:48.893473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:28.972 qpair failed and we were unable to recover it.
00:38:28.972 [2024-10-09 11:18:48.903402] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:28.972 [2024-10-09 11:18:48.903457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:28.972 [2024-10-09 11:18:48.903474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:28.972 [2024-10-09 11:18:48.903481] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:28.972 [2024-10-09 11:18:48.903488] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:28.972 [2024-10-09 11:18:48.903501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:28.972 qpair failed and we were unable to recover it.
00:38:28.972 [2024-10-09 11:18:48.913391] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:28.973 [2024-10-09 11:18:48.913444] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:28.973 [2024-10-09 11:18:48.913461] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:28.973 [2024-10-09 11:18:48.913472] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:28.973 [2024-10-09 11:18:48.913479] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:28.973 [2024-10-09 11:18:48.913492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:28.973 qpair failed and we were unable to recover it.
00:38:28.973 [2024-10-09 11:18:48.923387] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:28.973 [2024-10-09 11:18:48.923443] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:28.973 [2024-10-09 11:18:48.923456] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:28.973 [2024-10-09 11:18:48.923463] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:28.973 [2024-10-09 11:18:48.923475] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:28.973 [2024-10-09 11:18:48.923488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:28.973 qpair failed and we were unable to recover it.
00:38:28.973 [2024-10-09 11:18:48.933396] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:28.973 [2024-10-09 11:18:48.933452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:28.973 [2024-10-09 11:18:48.933468] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:28.973 [2024-10-09 11:18:48.933476] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:28.973 [2024-10-09 11:18:48.933482] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:28.973 [2024-10-09 11:18:48.933496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:28.973 qpair failed and we were unable to recover it.
00:38:28.973 [2024-10-09 11:18:48.943413] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:28.973 [2024-10-09 11:18:48.943474] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:28.973 [2024-10-09 11:18:48.943488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:28.973 [2024-10-09 11:18:48.943495] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:28.973 [2024-10-09 11:18:48.943502] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:28.973 [2024-10-09 11:18:48.943515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:28.973 qpair failed and we were unable to recover it.
00:38:28.973 [2024-10-09 11:18:48.953432] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:28.973 [2024-10-09 11:18:48.953498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:28.973 [2024-10-09 11:18:48.953513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:28.973 [2024-10-09 11:18:48.953520] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:28.973 [2024-10-09 11:18:48.953527] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:28.973 [2024-10-09 11:18:48.953544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:28.973 qpair failed and we were unable to recover it.
00:38:28.973 [2024-10-09 11:18:48.963461] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:28.973 [2024-10-09 11:18:48.963550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:28.973 [2024-10-09 11:18:48.963564] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:28.973 [2024-10-09 11:18:48.963571] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:28.973 [2024-10-09 11:18:48.963579] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:28.973 [2024-10-09 11:18:48.963593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:28.973 qpair failed and we were unable to recover it.
00:38:29.235 [2024-10-09 11:18:48.973473] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:29.235 [2024-10-09 11:18:48.973532] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:29.235 [2024-10-09 11:18:48.973547] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:29.235 [2024-10-09 11:18:48.973555] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:29.235 [2024-10-09 11:18:48.973563] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:29.235 [2024-10-09 11:18:48.973577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:29.235 qpair failed and we were unable to recover it.
00:38:29.235 [2024-10-09 11:18:48.983349] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:29.235 [2024-10-09 11:18:48.983445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:29.235 [2024-10-09 11:18:48.983458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:29.235 [2024-10-09 11:18:48.983471] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:29.235 [2024-10-09 11:18:48.983478] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:29.235 [2024-10-09 11:18:48.983492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:29.235 qpair failed and we were unable to recover it.
00:38:29.235 [2024-10-09 11:18:48.993503] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:29.235 [2024-10-09 11:18:48.993562] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:29.235 [2024-10-09 11:18:48.993576] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:29.235 [2024-10-09 11:18:48.993584] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:29.235 [2024-10-09 11:18:48.993591] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:29.235 [2024-10-09 11:18:48.993606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:29.235 qpair failed and we were unable to recover it.
00:38:29.235 [2024-10-09 11:18:49.003463] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:29.235 [2024-10-09 11:18:49.003526] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:29.235 [2024-10-09 11:18:49.003543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:29.235 [2024-10-09 11:18:49.003550] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:29.235 [2024-10-09 11:18:49.003557] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:29.235 [2024-10-09 11:18:49.003570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:29.235 qpair failed and we were unable to recover it.
00:38:29.235 [2024-10-09 11:18:49.013411] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:29.235 [2024-10-09 11:18:49.013462] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:29.235 [2024-10-09 11:18:49.013481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:29.235 [2024-10-09 11:18:49.013488] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:29.235 [2024-10-09 11:18:49.013495] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:29.235 [2024-10-09 11:18:49.013509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:29.235 qpair failed and we were unable to recover it.
00:38:29.235 [2024-10-09 11:18:49.023447] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:29.235 [2024-10-09 11:18:49.023506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:29.235 [2024-10-09 11:18:49.023519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:29.235 [2024-10-09 11:18:49.023527] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:29.235 [2024-10-09 11:18:49.023534] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:29.235 [2024-10-09 11:18:49.023547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:29.235 qpair failed and we were unable to recover it.
00:38:29.235 [2024-10-09 11:18:49.033444] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:29.235 [2024-10-09 11:18:49.033506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:29.235 [2024-10-09 11:18:49.033519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:29.235 [2024-10-09 11:18:49.033527] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:29.235 [2024-10-09 11:18:49.033534] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:29.235 [2024-10-09 11:18:49.033547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:29.235 qpair failed and we were unable to recover it.
00:38:29.235 [2024-10-09 11:18:49.043487] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:29.235 [2024-10-09 11:18:49.043546] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:29.235 [2024-10-09 11:18:49.043559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:29.235 [2024-10-09 11:18:49.043566] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:29.235 [2024-10-09 11:18:49.043572] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:29.235 [2024-10-09 11:18:49.043589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:29.235 qpair failed and we were unable to recover it.
00:38:29.235 [2024-10-09 11:18:49.053479] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:29.235 [2024-10-09 11:18:49.053533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:29.235 [2024-10-09 11:18:49.053546] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:29.235 [2024-10-09 11:18:49.053554] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:29.235 [2024-10-09 11:18:49.053560] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:29.235 [2024-10-09 11:18:49.053574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:29.235 qpair failed and we were unable to recover it.
00:38:29.235 [2024-10-09 11:18:49.063456] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.235 [2024-10-09 11:18:49.063511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.235 [2024-10-09 11:18:49.063525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.235 [2024-10-09 11:18:49.063533] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.235 [2024-10-09 11:18:49.063540] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:29.235 [2024-10-09 11:18:49.063554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:29.235 qpair failed and we were unable to recover it. 00:38:29.235 [2024-10-09 11:18:49.073481] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.235 [2024-10-09 11:18:49.073534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.235 [2024-10-09 11:18:49.073548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.235 [2024-10-09 11:18:49.073555] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.235 [2024-10-09 11:18:49.073562] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:29.235 [2024-10-09 11:18:49.073575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:29.235 qpair failed and we were unable to recover it. 00:38:29.235 [2024-10-09 11:18:49.083498] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.235 [2024-10-09 11:18:49.083589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.235 [2024-10-09 11:18:49.083602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.235 [2024-10-09 11:18:49.083610] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.235 [2024-10-09 11:18:49.083616] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:29.235 [2024-10-09 11:18:49.083630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:29.235 qpair failed and we were unable to recover it. 
00:38:29.235 [2024-10-09 11:18:49.093502] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.235 [2024-10-09 11:18:49.093595] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.235 [2024-10-09 11:18:49.093612] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.235 [2024-10-09 11:18:49.093619] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.235 [2024-10-09 11:18:49.093626] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:29.235 [2024-10-09 11:18:49.093640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:29.235 qpair failed and we were unable to recover it. 00:38:29.235 [2024-10-09 11:18:49.103497] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.236 [2024-10-09 11:18:49.103550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.236 [2024-10-09 11:18:49.103563] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.236 [2024-10-09 11:18:49.103570] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.236 [2024-10-09 11:18:49.103577] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:29.236 [2024-10-09 11:18:49.103591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:29.236 qpair failed and we were unable to recover it. 00:38:29.236 [2024-10-09 11:18:49.113539] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.236 [2024-10-09 11:18:49.113594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.236 [2024-10-09 11:18:49.113609] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.236 [2024-10-09 11:18:49.113616] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.236 [2024-10-09 11:18:49.113622] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:29.236 [2024-10-09 11:18:49.113636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:29.236 qpair failed and we were unable to recover it. 
00:38:29.236 [2024-10-09 11:18:49.123510] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.236 [2024-10-09 11:18:49.123565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.236 [2024-10-09 11:18:49.123579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.236 [2024-10-09 11:18:49.123586] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.236 [2024-10-09 11:18:49.123593] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:29.236 [2024-10-09 11:18:49.123606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:29.236 qpair failed and we were unable to recover it. 00:38:29.236 [2024-10-09 11:18:49.133496] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.236 [2024-10-09 11:18:49.133550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.236 [2024-10-09 11:18:49.133564] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.236 [2024-10-09 11:18:49.133571] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.236 [2024-10-09 11:18:49.133581] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:29.236 [2024-10-09 11:18:49.133595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:29.236 qpair failed and we were unable to recover it. 00:38:29.236 [2024-10-09 11:18:49.143486] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.236 [2024-10-09 11:18:49.143538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.236 [2024-10-09 11:18:49.143552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.236 [2024-10-09 11:18:49.143559] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.236 [2024-10-09 11:18:49.143565] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:29.236 [2024-10-09 11:18:49.143579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:29.236 qpair failed and we were unable to recover it. 
00:38:29.236 [2024-10-09 11:18:49.153423] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.236 [2024-10-09 11:18:49.153515] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.236 [2024-10-09 11:18:49.153536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.236 [2024-10-09 11:18:49.153543] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.236 [2024-10-09 11:18:49.153550] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:29.236 [2024-10-09 11:18:49.153564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:29.236 qpair failed and we were unable to recover it. 00:38:29.236 [2024-10-09 11:18:49.163587] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.236 [2024-10-09 11:18:49.163655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.236 [2024-10-09 11:18:49.163671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.236 [2024-10-09 11:18:49.163678] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.236 [2024-10-09 11:18:49.163684] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:29.236 [2024-10-09 11:18:49.163698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:29.236 qpair failed and we were unable to recover it. 00:38:29.236 [2024-10-09 11:18:49.173487] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.236 [2024-10-09 11:18:49.173540] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.236 [2024-10-09 11:18:49.173553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.236 [2024-10-09 11:18:49.173561] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.236 [2024-10-09 11:18:49.173567] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:29.236 [2024-10-09 11:18:49.173581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:29.236 qpair failed and we were unable to recover it. 
00:38:29.236 [2024-10-09 11:18:49.183523] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.236 [2024-10-09 11:18:49.183581] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.236 [2024-10-09 11:18:49.183595] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.236 [2024-10-09 11:18:49.183602] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.236 [2024-10-09 11:18:49.183608] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:29.236 [2024-10-09 11:18:49.183622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:29.236 qpair failed and we were unable to recover it. 00:38:29.236 [2024-10-09 11:18:49.193422] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.236 [2024-10-09 11:18:49.193484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.236 [2024-10-09 11:18:49.193497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.236 [2024-10-09 11:18:49.193505] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.236 [2024-10-09 11:18:49.193511] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:29.236 [2024-10-09 11:18:49.193525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:29.236 qpair failed and we were unable to recover it. 00:38:29.236 [2024-10-09 11:18:49.203569] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.236 [2024-10-09 11:18:49.203623] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.236 [2024-10-09 11:18:49.203636] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.236 [2024-10-09 11:18:49.203644] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.236 [2024-10-09 11:18:49.203650] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:29.236 [2024-10-09 11:18:49.203664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:29.236 qpair failed and we were unable to recover it. 
00:38:29.236 [2024-10-09 11:18:49.213553] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.236 [2024-10-09 11:18:49.213603] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.236 [2024-10-09 11:18:49.213617] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.236 [2024-10-09 11:18:49.213624] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.236 [2024-10-09 11:18:49.213631] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:29.236 [2024-10-09 11:18:49.213645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:29.236 qpair failed and we were unable to recover it. 00:38:29.236 [2024-10-09 11:18:49.223556] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.236 [2024-10-09 11:18:49.223616] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.236 [2024-10-09 11:18:49.223629] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.236 [2024-10-09 11:18:49.223636] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.236 [2024-10-09 11:18:49.223646] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:29.236 [2024-10-09 11:18:49.223661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:29.236 qpair failed and we were unable to recover it. 00:38:29.236 [2024-10-09 11:18:49.233557] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.236 [2024-10-09 11:18:49.233611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.236 [2024-10-09 11:18:49.233624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.236 [2024-10-09 11:18:49.233631] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.236 [2024-10-09 11:18:49.233638] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:29.236 [2024-10-09 11:18:49.233651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:29.236 qpair failed and we were unable to recover it. 
00:38:29.498 [2024-10-09 11:18:49.243549] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.498 [2024-10-09 11:18:49.243608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.498 [2024-10-09 11:18:49.243621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.498 [2024-10-09 11:18:49.243629] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.499 [2024-10-09 11:18:49.243635] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:29.499 [2024-10-09 11:18:49.243649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:29.499 qpair failed and we were unable to recover it. 00:38:29.499 [2024-10-09 11:18:49.253559] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.499 [2024-10-09 11:18:49.253616] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.499 [2024-10-09 11:18:49.253629] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.499 [2024-10-09 11:18:49.253636] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.499 [2024-10-09 11:18:49.253643] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:29.499 [2024-10-09 11:18:49.253656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:29.499 qpair failed and we were unable to recover it. 00:38:29.499 [2024-10-09 11:18:49.263553] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.499 [2024-10-09 11:18:49.263605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.499 [2024-10-09 11:18:49.263619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.499 [2024-10-09 11:18:49.263626] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.499 [2024-10-09 11:18:49.263633] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:29.499 [2024-10-09 11:18:49.263646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:29.499 qpair failed and we were unable to recover it. 
00:38:29.499 [2024-10-09 11:18:49.273499] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.499 [2024-10-09 11:18:49.273566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.499 [2024-10-09 11:18:49.273580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.499 [2024-10-09 11:18:49.273587] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.499 [2024-10-09 11:18:49.273594] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:29.499 [2024-10-09 11:18:49.273607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:29.499 qpair failed and we were unable to recover it. 00:38:29.499 [2024-10-09 11:18:49.283475] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.499 [2024-10-09 11:18:49.283537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.499 [2024-10-09 11:18:49.283550] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.499 [2024-10-09 11:18:49.283557] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.499 [2024-10-09 11:18:49.283564] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:29.499 [2024-10-09 11:18:49.283577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:29.499 qpair failed and we were unable to recover it. 00:38:29.499 [2024-10-09 11:18:49.293566] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.499 [2024-10-09 11:18:49.293617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.499 [2024-10-09 11:18:49.293630] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.499 [2024-10-09 11:18:49.293637] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.499 [2024-10-09 11:18:49.293644] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:29.499 [2024-10-09 11:18:49.293657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:29.499 qpair failed and we were unable to recover it. 
00:38:29.499 [2024-10-09 11:18:49.303593] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.499 [2024-10-09 11:18:49.303696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.499 [2024-10-09 11:18:49.303709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.499 [2024-10-09 11:18:49.303717] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.499 [2024-10-09 11:18:49.303723] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:29.499 [2024-10-09 11:18:49.303737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:29.499 qpair failed and we were unable to recover it. 00:38:29.499 [2024-10-09 11:18:49.313596] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.499 [2024-10-09 11:18:49.313650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.499 [2024-10-09 11:18:49.313664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.499 [2024-10-09 11:18:49.313672] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.499 [2024-10-09 11:18:49.313682] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:29.499 [2024-10-09 11:18:49.313695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:29.499 qpair failed and we were unable to recover it. 00:38:29.499 [2024-10-09 11:18:49.323614] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.499 [2024-10-09 11:18:49.323667] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.499 [2024-10-09 11:18:49.323680] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.499 [2024-10-09 11:18:49.323687] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.499 [2024-10-09 11:18:49.323694] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:29.499 [2024-10-09 11:18:49.323707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:29.499 qpair failed and we were unable to recover it. 
00:38:29.499 [2024-10-09 11:18:49.333489] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.499 [2024-10-09 11:18:49.333545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.499 [2024-10-09 11:18:49.333558] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.499 [2024-10-09 11:18:49.333565] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.499 [2024-10-09 11:18:49.333572] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:29.499 [2024-10-09 11:18:49.333586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:29.499 qpair failed and we were unable to recover it. 00:38:29.499 [2024-10-09 11:18:49.343600] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.499 [2024-10-09 11:18:49.343652] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.499 [2024-10-09 11:18:49.343667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.499 [2024-10-09 11:18:49.343674] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.499 [2024-10-09 11:18:49.343681] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:29.499 [2024-10-09 11:18:49.343696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:29.499 qpair failed and we were unable to recover it. 00:38:29.499 [2024-10-09 11:18:49.353540] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.499 [2024-10-09 11:18:49.353592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.499 [2024-10-09 11:18:49.353606] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.499 [2024-10-09 11:18:49.353614] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.499 [2024-10-09 11:18:49.353621] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:29.499 [2024-10-09 11:18:49.353634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:29.499 qpair failed and we were unable to recover it. 
00:38:29.499 [2024-10-09 11:18:49.363619] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.499 [2024-10-09 11:18:49.363679] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.499 [2024-10-09 11:18:49.363695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.499 [2024-10-09 11:18:49.363703] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.499 [2024-10-09 11:18:49.363711] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:29.499 [2024-10-09 11:18:49.363725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:29.499 qpair failed and we were unable to recover it. 00:38:29.499 [2024-10-09 11:18:49.373603] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.499 [2024-10-09 11:18:49.373654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.499 [2024-10-09 11:18:49.373668] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.499 [2024-10-09 11:18:49.373675] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.499 [2024-10-09 11:18:49.373682] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:29.499 [2024-10-09 11:18:49.373696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:29.499 qpair failed and we were unable to recover it. 00:38:29.499 [2024-10-09 11:18:49.383556] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.499 [2024-10-09 11:18:49.383616] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.499 [2024-10-09 11:18:49.383629] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.499 [2024-10-09 11:18:49.383636] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.499 [2024-10-09 11:18:49.383643] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:29.500 [2024-10-09 11:18:49.383656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:29.500 qpair failed and we were unable to recover it. 
00:38:29.500 [2024-10-09 11:18:49.393610] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.500 [2024-10-09 11:18:49.393701] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.500 [2024-10-09 11:18:49.393715] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.500 [2024-10-09 11:18:49.393722] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.500 [2024-10-09 11:18:49.393729] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:29.500 [2024-10-09 11:18:49.393743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:29.500 qpair failed and we were unable to recover it. 00:38:29.500 [2024-10-09 11:18:49.403642] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.500 [2024-10-09 11:18:49.403697] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.500 [2024-10-09 11:18:49.403710] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.500 [2024-10-09 11:18:49.403717] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.500 [2024-10-09 11:18:49.403730] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:29.500 [2024-10-09 11:18:49.403744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:29.500 qpair failed and we were unable to recover it. 00:38:29.500 [2024-10-09 11:18:49.413598] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.500 [2024-10-09 11:18:49.413651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.500 [2024-10-09 11:18:49.413665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.500 [2024-10-09 11:18:49.413672] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.500 [2024-10-09 11:18:49.413679] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:29.500 [2024-10-09 11:18:49.413692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:29.500 qpair failed and we were unable to recover it. 
00:38:29.500 [2024-10-09 11:18:49.423604] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.500 [2024-10-09 11:18:49.423665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.500 [2024-10-09 11:18:49.423678] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.500 [2024-10-09 11:18:49.423685] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.500 [2024-10-09 11:18:49.423692] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:29.500 [2024-10-09 11:18:49.423705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:29.500 qpair failed and we were unable to recover it. 00:38:29.500 [2024-10-09 11:18:49.433635] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.500 [2024-10-09 11:18:49.433690] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.500 [2024-10-09 11:18:49.433704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.500 [2024-10-09 11:18:49.433711] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.500 [2024-10-09 11:18:49.433718] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:29.500 [2024-10-09 11:18:49.433732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:29.500 qpair failed and we were unable to recover it. 00:38:29.500 [2024-10-09 11:18:49.443655] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.500 [2024-10-09 11:18:49.443711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.500 [2024-10-09 11:18:49.443724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.500 [2024-10-09 11:18:49.443731] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.500 [2024-10-09 11:18:49.443738] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:29.500 [2024-10-09 11:18:49.443751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:29.500 qpair failed and we were unable to recover it. 
00:38:29.500 [2024-10-09 11:18:49.453653] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.500 [2024-10-09 11:18:49.453705] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.500 [2024-10-09 11:18:49.453719] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.500 [2024-10-09 11:18:49.453726] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.500 [2024-10-09 11:18:49.453733] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:29.500 [2024-10-09 11:18:49.453746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:29.500 qpair failed and we were unable to recover it. 00:38:29.500 [2024-10-09 11:18:49.463645] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.500 [2024-10-09 11:18:49.463699] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.500 [2024-10-09 11:18:49.463713] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.500 [2024-10-09 11:18:49.463720] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.500 [2024-10-09 11:18:49.463727] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:29.500 [2024-10-09 11:18:49.463740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:29.500 qpair failed and we were unable to recover it. 00:38:29.500 [2024-10-09 11:18:49.473649] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.500 [2024-10-09 11:18:49.473708] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.500 [2024-10-09 11:18:49.473722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.500 [2024-10-09 11:18:49.473729] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.500 [2024-10-09 11:18:49.473735] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:29.500 [2024-10-09 11:18:49.473748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:29.500 qpair failed and we were unable to recover it. 
00:38:29.500 [2024-10-09 11:18:49.483593] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.500 [2024-10-09 11:18:49.483692] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.500 [2024-10-09 11:18:49.483706] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.500 [2024-10-09 11:18:49.483713] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.500 [2024-10-09 11:18:49.483720] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:29.500 [2024-10-09 11:18:49.483733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:29.500 qpair failed and we were unable to recover it. 00:38:29.500 [2024-10-09 11:18:49.493649] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.500 [2024-10-09 11:18:49.493700] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.500 [2024-10-09 11:18:49.493714] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.500 [2024-10-09 11:18:49.493724] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.500 [2024-10-09 11:18:49.493731] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:29.500 [2024-10-09 11:18:49.493744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:29.500 qpair failed and we were unable to recover it. 00:38:29.763 [2024-10-09 11:18:49.503672] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.763 [2024-10-09 11:18:49.503750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.763 [2024-10-09 11:18:49.503763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.763 [2024-10-09 11:18:49.503771] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.763 [2024-10-09 11:18:49.503778] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:29.763 [2024-10-09 11:18:49.503792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:29.763 qpair failed and we were unable to recover it. 
00:38:29.763 [2024-10-09 11:18:49.513685] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.763 [2024-10-09 11:18:49.513739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.763 [2024-10-09 11:18:49.513753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.763 [2024-10-09 11:18:49.513760] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.763 [2024-10-09 11:18:49.513767] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:29.763 [2024-10-09 11:18:49.513780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:29.763 qpair failed and we were unable to recover it. 00:38:29.763 [2024-10-09 11:18:49.523694] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.763 [2024-10-09 11:18:49.523755] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.763 [2024-10-09 11:18:49.523768] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.763 [2024-10-09 11:18:49.523776] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.763 [2024-10-09 11:18:49.523783] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:29.763 [2024-10-09 11:18:49.523796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:29.763 qpair failed and we were unable to recover it. 00:38:29.763 [2024-10-09 11:18:49.533693] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.763 [2024-10-09 11:18:49.533745] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.763 [2024-10-09 11:18:49.533759] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.763 [2024-10-09 11:18:49.533766] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.763 [2024-10-09 11:18:49.533773] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:29.763 [2024-10-09 11:18:49.533786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:29.763 qpair failed and we were unable to recover it. 
00:38:29.763 [2024-10-09 11:18:49.543686] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.763 [2024-10-09 11:18:49.543752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.763 [2024-10-09 11:18:49.543765] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.763 [2024-10-09 11:18:49.543772] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.763 [2024-10-09 11:18:49.543779] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:29.763 [2024-10-09 11:18:49.543793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:29.763 qpair failed and we were unable to recover it. 00:38:29.763 [2024-10-09 11:18:49.553710] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.763 [2024-10-09 11:18:49.553775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.763 [2024-10-09 11:18:49.553788] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.763 [2024-10-09 11:18:49.553795] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.763 [2024-10-09 11:18:49.553802] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:29.763 [2024-10-09 11:18:49.553816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:29.763 qpair failed and we were unable to recover it. 00:38:29.763 [2024-10-09 11:18:49.563697] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.763 [2024-10-09 11:18:49.563755] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.763 [2024-10-09 11:18:49.563769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.763 [2024-10-09 11:18:49.563776] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.763 [2024-10-09 11:18:49.563783] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:29.763 [2024-10-09 11:18:49.563797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:29.763 qpair failed and we were unable to recover it. 
00:38:29.763 [2024-10-09 11:18:49.573584] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.763 [2024-10-09 11:18:49.573637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.763 [2024-10-09 11:18:49.573652] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.763 [2024-10-09 11:18:49.573659] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.763 [2024-10-09 11:18:49.573666] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:29.763 [2024-10-09 11:18:49.573679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:29.763 qpair failed and we were unable to recover it. 00:38:29.763 [2024-10-09 11:18:49.583708] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.763 [2024-10-09 11:18:49.583760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.763 [2024-10-09 11:18:49.583773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.763 [2024-10-09 11:18:49.583784] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.763 [2024-10-09 11:18:49.583792] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:29.763 [2024-10-09 11:18:49.583806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:29.763 qpair failed and we were unable to recover it. 00:38:29.763 [2024-10-09 11:18:49.593680] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.763 [2024-10-09 11:18:49.593737] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.763 [2024-10-09 11:18:49.593750] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.763 [2024-10-09 11:18:49.593757] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.763 [2024-10-09 11:18:49.593765] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:29.763 [2024-10-09 11:18:49.593779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:29.763 qpair failed and we were unable to recover it. 
00:38:29.763 [2024-10-09 11:18:49.603730] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.763 [2024-10-09 11:18:49.603794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.763 [2024-10-09 11:18:49.603807] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.763 [2024-10-09 11:18:49.603814] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.763 [2024-10-09 11:18:49.603821] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:29.763 [2024-10-09 11:18:49.603834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:29.763 qpair failed and we were unable to recover it. 00:38:29.763 [2024-10-09 11:18:49.613711] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.763 [2024-10-09 11:18:49.613765] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.763 [2024-10-09 11:18:49.613779] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.763 [2024-10-09 11:18:49.613786] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.763 [2024-10-09 11:18:49.613793] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:29.763 [2024-10-09 11:18:49.613806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:29.763 qpair failed and we were unable to recover it. 00:38:29.763 [2024-10-09 11:18:49.623706] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.763 [2024-10-09 11:18:49.623759] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.763 [2024-10-09 11:18:49.623772] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.763 [2024-10-09 11:18:49.623779] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.763 [2024-10-09 11:18:49.623786] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:29.763 [2024-10-09 11:18:49.623799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:29.763 qpair failed and we were unable to recover it. 
00:38:29.763 [2024-10-09 11:18:49.633752] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.763 [2024-10-09 11:18:49.633824] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.763 [2024-10-09 11:18:49.633838] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.764 [2024-10-09 11:18:49.633846] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.764 [2024-10-09 11:18:49.633852] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:29.764 [2024-10-09 11:18:49.633867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:29.764 qpair failed and we were unable to recover it. 00:38:29.764 [2024-10-09 11:18:49.643695] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.764 [2024-10-09 11:18:49.643746] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.764 [2024-10-09 11:18:49.643760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.764 [2024-10-09 11:18:49.643767] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.764 [2024-10-09 11:18:49.643773] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:29.764 [2024-10-09 11:18:49.643787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:29.764 qpair failed and we were unable to recover it. 00:38:29.764 [2024-10-09 11:18:49.653605] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.764 [2024-10-09 11:18:49.653654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.764 [2024-10-09 11:18:49.653667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.764 [2024-10-09 11:18:49.653674] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.764 [2024-10-09 11:18:49.653681] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:29.764 [2024-10-09 11:18:49.653694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:29.764 qpair failed and we were unable to recover it. 
00:38:29.764 [2024-10-09 11:18:49.663731] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.764 [2024-10-09 11:18:49.663784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.764 [2024-10-09 11:18:49.663798] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.764 [2024-10-09 11:18:49.663806] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.764 [2024-10-09 11:18:49.663812] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:29.764 [2024-10-09 11:18:49.663826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:29.764 qpair failed and we were unable to recover it. 00:38:29.764 [2024-10-09 11:18:49.673723] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.764 [2024-10-09 11:18:49.673781] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.764 [2024-10-09 11:18:49.673794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.764 [2024-10-09 11:18:49.673805] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.764 [2024-10-09 11:18:49.673811] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:29.764 [2024-10-09 11:18:49.673825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:29.764 qpair failed and we were unable to recover it. 00:38:29.764 [2024-10-09 11:18:49.683749] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.764 [2024-10-09 11:18:49.683834] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.764 [2024-10-09 11:18:49.683847] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.764 [2024-10-09 11:18:49.683855] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.764 [2024-10-09 11:18:49.683862] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:29.764 [2024-10-09 11:18:49.683875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:29.764 qpair failed and we were unable to recover it. 
00:38:29.764 [2024-10-09 11:18:49.693731] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.764 [2024-10-09 11:18:49.693828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.764 [2024-10-09 11:18:49.693842] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.764 [2024-10-09 11:18:49.693849] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.764 [2024-10-09 11:18:49.693856] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:29.764 [2024-10-09 11:18:49.693869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:29.764 qpair failed and we were unable to recover it. 00:38:29.764 [2024-10-09 11:18:49.703716] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.764 [2024-10-09 11:18:49.703773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.764 [2024-10-09 11:18:49.703786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.764 [2024-10-09 11:18:49.703793] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.764 [2024-10-09 11:18:49.703800] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:29.764 [2024-10-09 11:18:49.703813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:29.764 qpair failed and we were unable to recover it. 00:38:29.764 [2024-10-09 11:18:49.713771] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.764 [2024-10-09 11:18:49.713858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.764 [2024-10-09 11:18:49.713872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.764 [2024-10-09 11:18:49.713880] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.764 [2024-10-09 11:18:49.713886] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:29.764 [2024-10-09 11:18:49.713900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:29.764 qpair failed and we were unable to recover it. 
00:38:29.764 [2024-10-09 11:18:49.723756] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.764 [2024-10-09 11:18:49.723807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.764 [2024-10-09 11:18:49.723821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.764 [2024-10-09 11:18:49.723828] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.764 [2024-10-09 11:18:49.723835] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:29.764 [2024-10-09 11:18:49.723848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:29.764 qpair failed and we were unable to recover it. 00:38:29.764 [2024-10-09 11:18:49.733762] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.764 [2024-10-09 11:18:49.733810] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.764 [2024-10-09 11:18:49.733823] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.764 [2024-10-09 11:18:49.733831] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.764 [2024-10-09 11:18:49.733838] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:29.764 [2024-10-09 11:18:49.733851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:29.764 qpair failed and we were unable to recover it. 00:38:29.764 [2024-10-09 11:18:49.743761] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.764 [2024-10-09 11:18:49.743817] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.764 [2024-10-09 11:18:49.743831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.764 [2024-10-09 11:18:49.743838] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.764 [2024-10-09 11:18:49.743844] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:29.764 [2024-10-09 11:18:49.743858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:29.764 qpair failed and we were unable to recover it. 
00:38:29.764 [2024-10-09 11:18:49.753755] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:29.764 [2024-10-09 11:18:49.753813] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:29.764 [2024-10-09 11:18:49.753826] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:29.764 [2024-10-09 11:18:49.753833] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:29.764 [2024-10-09 11:18:49.753840] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:29.764 [2024-10-09 11:18:49.753853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:29.764 qpair failed and we were unable to recover it. 00:38:30.027 [2024-10-09 11:18:49.763771] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:30.027 [2024-10-09 11:18:49.763827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:30.027 [2024-10-09 11:18:49.763841] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:30.027 [2024-10-09 11:18:49.763852] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:30.027 [2024-10-09 11:18:49.763859] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:30.027 [2024-10-09 11:18:49.763873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:30.027 qpair failed and we were unable to recover it. 00:38:30.027 [2024-10-09 11:18:49.773793] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:30.027 [2024-10-09 11:18:49.773845] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:30.027 [2024-10-09 11:18:49.773858] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:30.027 [2024-10-09 11:18:49.773865] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:30.027 [2024-10-09 11:18:49.773872] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:30.027 [2024-10-09 11:18:49.773885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:30.027 qpair failed and we were unable to recover it. 
00:38:30.027 [2024-10-09 11:18:49.783740] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:30.027 [2024-10-09 11:18:49.783790] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:30.027 [2024-10-09 11:18:49.783805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:30.027 [2024-10-09 11:18:49.783812] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:30.027 [2024-10-09 11:18:49.783818] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:30.027 [2024-10-09 11:18:49.783832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:30.027 qpair failed and we were unable to recover it. 00:38:30.027 [2024-10-09 11:18:49.793799] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:30.027 [2024-10-09 11:18:49.793855] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:30.027 [2024-10-09 11:18:49.793868] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:30.027 [2024-10-09 11:18:49.793875] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:30.027 [2024-10-09 11:18:49.793882] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:30.027 [2024-10-09 11:18:49.793895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:30.027 qpair failed and we were unable to recover it. 00:38:30.027 [2024-10-09 11:18:49.803785] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:30.027 [2024-10-09 11:18:49.803846] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:30.027 [2024-10-09 11:18:49.803859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:30.027 [2024-10-09 11:18:49.803866] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:30.027 [2024-10-09 11:18:49.803873] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:30.027 [2024-10-09 11:18:49.803886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:30.027 qpair failed and we were unable to recover it. 
00:38:30.027 [2024-10-09 11:18:49.813767] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:30.027 [2024-10-09 11:18:49.813819] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:30.027 [2024-10-09 11:18:49.813833] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:30.027 [2024-10-09 11:18:49.813840] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:30.027 [2024-10-09 11:18:49.813847] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:30.027 [2024-10-09 11:18:49.813861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:30.028 qpair failed and we were unable to recover it. 00:38:30.028 [2024-10-09 11:18:49.823762] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:30.028 [2024-10-09 11:18:49.823815] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:30.028 [2024-10-09 11:18:49.823828] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:30.028 [2024-10-09 11:18:49.823835] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:30.028 [2024-10-09 11:18:49.823842] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:30.028 [2024-10-09 11:18:49.823856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:30.028 qpair failed and we were unable to recover it. 00:38:30.028 [2024-10-09 11:18:49.833793] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:30.028 [2024-10-09 11:18:49.833848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:30.028 [2024-10-09 11:18:49.833861] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:30.028 [2024-10-09 11:18:49.833869] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:30.028 [2024-10-09 11:18:49.833876] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:30.028 [2024-10-09 11:18:49.833889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:30.028 qpair failed and we were unable to recover it. 
00:38:30.028 [2024-10-09 11:18:49.843687] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:30.028 [2024-10-09 11:18:49.843749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:30.028 [2024-10-09 11:18:49.843762] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:30.028 [2024-10-09 11:18:49.843769] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:30.028 [2024-10-09 11:18:49.843776] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:30.028 [2024-10-09 11:18:49.843790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:30.028 qpair failed and we were unable to recover it. 00:38:30.028 [2024-10-09 11:18:49.853808] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:30.028 [2024-10-09 11:18:49.853858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:30.028 [2024-10-09 11:18:49.853874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:30.028 [2024-10-09 11:18:49.853881] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:30.028 [2024-10-09 11:18:49.853888] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:30.028 [2024-10-09 11:18:49.853901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:30.028 qpair failed and we were unable to recover it. 00:38:30.028 [2024-10-09 11:18:49.863816] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:30.028 [2024-10-09 11:18:49.863874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:30.028 [2024-10-09 11:18:49.863888] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:30.028 [2024-10-09 11:18:49.863895] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:30.028 [2024-10-09 11:18:49.863902] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:30.028 [2024-10-09 11:18:49.863916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:30.028 qpair failed and we were unable to recover it. 
00:38:30.028 [2024-10-09 11:18:49.873790] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:30.028 [2024-10-09 11:18:49.873845] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:30.028 [2024-10-09 11:18:49.873859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:30.028 [2024-10-09 11:18:49.873866] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:30.028 [2024-10-09 11:18:49.873872] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:30.028 [2024-10-09 11:18:49.873885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:30.028 qpair failed and we were unable to recover it. 00:38:30.028 [2024-10-09 11:18:49.883836] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:30.028 [2024-10-09 11:18:49.883897] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:30.028 [2024-10-09 11:18:49.883915] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:30.028 [2024-10-09 11:18:49.883923] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:30.028 [2024-10-09 11:18:49.883932] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:30.028 [2024-10-09 11:18:49.883947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:30.028 qpair failed and we were unable to recover it. 00:38:30.028 [2024-10-09 11:18:49.893835] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:30.028 [2024-10-09 11:18:49.893884] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:30.028 [2024-10-09 11:18:49.893899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:30.028 [2024-10-09 11:18:49.893907] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:30.028 [2024-10-09 11:18:49.893913] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:30.028 [2024-10-09 11:18:49.893927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:30.028 qpair failed and we were unable to recover it. 
00:38:30.028 [2024-10-09 11:18:49.903835] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:30.028 [2024-10-09 11:18:49.903885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:30.028 [2024-10-09 11:18:49.903898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:30.028 [2024-10-09 11:18:49.903906] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:30.028 [2024-10-09 11:18:49.903912] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:30.028 [2024-10-09 11:18:49.903925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:30.028 qpair failed and we were unable to recover it. 00:38:30.028 [2024-10-09 11:18:49.913849] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:30.028 [2024-10-09 11:18:49.913909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:30.028 [2024-10-09 11:18:49.913924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:30.028 [2024-10-09 11:18:49.913931] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:30.028 [2024-10-09 11:18:49.913938] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:30.028 [2024-10-09 11:18:49.913952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:30.028 qpair failed and we were unable to recover it. 00:38:30.028 [2024-10-09 11:18:49.923785] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:30.028 [2024-10-09 11:18:49.923848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:30.028 [2024-10-09 11:18:49.923862] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:30.028 [2024-10-09 11:18:49.923869] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:30.028 [2024-10-09 11:18:49.923875] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:30.028 [2024-10-09 11:18:49.923889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:30.028 qpair failed and we were unable to recover it. 
00:38:30.028 [2024-10-09 11:18:49.933849] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:30.028 [2024-10-09 11:18:49.933908] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:30.028 [2024-10-09 11:18:49.933921] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:30.028 [2024-10-09 11:18:49.933929] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:30.028 [2024-10-09 11:18:49.933935] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:30.028 [2024-10-09 11:18:49.933949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:30.028 qpair failed and we were unable to recover it. 00:38:30.028 [2024-10-09 11:18:49.943825] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:30.028 [2024-10-09 11:18:49.943877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:30.028 [2024-10-09 11:18:49.943894] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:30.028 [2024-10-09 11:18:49.943901] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:30.028 [2024-10-09 11:18:49.943908] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:30.028 [2024-10-09 11:18:49.943922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:30.028 qpair failed and we were unable to recover it. 00:38:30.028 [2024-10-09 11:18:49.953745] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:30.028 [2024-10-09 11:18:49.953843] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:30.028 [2024-10-09 11:18:49.953857] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:30.028 [2024-10-09 11:18:49.953864] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:30.028 [2024-10-09 11:18:49.953872] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:30.028 [2024-10-09 11:18:49.953885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:30.028 qpair failed and we were unable to recover it. 
00:38:30.028 [2024-10-09 11:18:49.963876] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:30.029 [2024-10-09 11:18:49.963931] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:30.029 [2024-10-09 11:18:49.963945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:30.029 [2024-10-09 11:18:49.963952] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:30.029 [2024-10-09 11:18:49.963959] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:30.029 [2024-10-09 11:18:49.963972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:30.029 qpair failed and we were unable to recover it. 00:38:30.029 [2024-10-09 11:18:49.973791] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:30.029 [2024-10-09 11:18:49.973888] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:30.029 [2024-10-09 11:18:49.973904] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:30.029 [2024-10-09 11:18:49.973911] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:30.029 [2024-10-09 11:18:49.973918] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:30.029 [2024-10-09 11:18:49.973932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:30.029 qpair failed and we were unable to recover it. 00:38:30.029 [2024-10-09 11:18:49.983876] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:30.029 [2024-10-09 11:18:49.983926] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:30.029 [2024-10-09 11:18:49.983939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:30.029 [2024-10-09 11:18:49.983947] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:30.029 [2024-10-09 11:18:49.983954] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:30.029 [2024-10-09 11:18:49.983971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:30.029 qpair failed and we were unable to recover it. 
00:38:30.029 [2024-10-09 11:18:49.993851] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:30.029 [2024-10-09 11:18:49.993903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:30.029 [2024-10-09 11:18:49.993919] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:30.029 [2024-10-09 11:18:49.993926] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:30.029 [2024-10-09 11:18:49.993933] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:30.029 [2024-10-09 11:18:49.993947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:30.029 qpair failed and we were unable to recover it. 00:38:30.029 [2024-10-09 11:18:50.003910] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:30.029 [2024-10-09 11:18:50.004011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:30.029 [2024-10-09 11:18:50.004026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:30.029 [2024-10-09 11:18:50.004034] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:30.029 [2024-10-09 11:18:50.004040] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:30.029 [2024-10-09 11:18:50.004054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:30.029 qpair failed and we were unable to recover it. 00:38:30.029 [2024-10-09 11:18:50.013761] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:30.029 [2024-10-09 11:18:50.013822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:30.029 [2024-10-09 11:18:50.013837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:30.029 [2024-10-09 11:18:50.013845] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:30.029 [2024-10-09 11:18:50.013852] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:30.029 [2024-10-09 11:18:50.013869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:30.029 qpair failed and we were unable to recover it. 
00:38:30.029 [2024-10-09 11:18:50.023826] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:30.029 [2024-10-09 11:18:50.023911] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:30.029 [2024-10-09 11:18:50.023926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:30.029 [2024-10-09 11:18:50.023934] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:30.029 [2024-10-09 11:18:50.023941] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:30.029 [2024-10-09 11:18:50.023956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:30.029 qpair failed and we were unable to recover it. 00:38:30.291 [2024-10-09 11:18:50.033768] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:30.291 [2024-10-09 11:18:50.033837] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:30.291 [2024-10-09 11:18:50.033855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:30.291 [2024-10-09 11:18:50.033863] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:30.291 [2024-10-09 11:18:50.033869] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:30.291 [2024-10-09 11:18:50.033884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:30.291 qpair failed and we were unable to recover it. 00:38:30.291 [2024-10-09 11:18:50.043908] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:30.291 [2024-10-09 11:18:50.043964] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:30.291 [2024-10-09 11:18:50.043977] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:30.291 [2024-10-09 11:18:50.043984] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:30.291 [2024-10-09 11:18:50.043991] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:30.291 [2024-10-09 11:18:50.044005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:30.291 qpair failed and we were unable to recover it. 
00:38:30.291 [2024-10-09 11:18:50.053945] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:30.291 [2024-10-09 11:18:50.053999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:30.291 [2024-10-09 11:18:50.054012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:30.291 [2024-10-09 11:18:50.054019] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:30.291 [2024-10-09 11:18:50.054026] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:30.291 [2024-10-09 11:18:50.054040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:30.291 qpair failed and we were unable to recover it. 00:38:30.291 [2024-10-09 11:18:50.063895] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:30.291 [2024-10-09 11:18:50.063951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:30.291 [2024-10-09 11:18:50.063965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:30.291 [2024-10-09 11:18:50.063973] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:30.291 [2024-10-09 11:18:50.063979] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:30.291 [2024-10-09 11:18:50.063993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:30.291 qpair failed and we were unable to recover it. 00:38:30.291 [2024-10-09 11:18:50.073900] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:30.291 [2024-10-09 11:18:50.073956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:30.291 [2024-10-09 11:18:50.073970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:30.291 [2024-10-09 11:18:50.073977] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:30.291 [2024-10-09 11:18:50.073984] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:30.291 [2024-10-09 11:18:50.074004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:30.291 qpair failed and we were unable to recover it. 
00:38:30.291 [2024-10-09 11:18:50.083940] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:30.291 [2024-10-09 11:18:50.083999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:30.291 [2024-10-09 11:18:50.084012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:30.291 [2024-10-09 11:18:50.084019] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:30.291 [2024-10-09 11:18:50.084026] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:30.291 [2024-10-09 11:18:50.084040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:30.291 qpair failed and we were unable to recover it. 00:38:30.292 [2024-10-09 11:18:50.093918] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:30.292 [2024-10-09 11:18:50.093976] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:30.292 [2024-10-09 11:18:50.093989] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:30.292 [2024-10-09 11:18:50.093997] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:30.292 [2024-10-09 11:18:50.094003] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:30.292 [2024-10-09 11:18:50.094016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:30.292 qpair failed and we were unable to recover it. 00:38:30.292 [2024-10-09 11:18:50.103823] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:30.292 [2024-10-09 11:18:50.103871] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:30.292 [2024-10-09 11:18:50.103884] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:30.292 [2024-10-09 11:18:50.103891] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:30.292 [2024-10-09 11:18:50.103898] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:30.292 [2024-10-09 11:18:50.103912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:30.292 qpair failed and we were unable to recover it. 
00:38:30.292 [2024-10-09 11:18:50.113925] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:30.292 [2024-10-09 11:18:50.113981] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:30.292 [2024-10-09 11:18:50.113995] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:30.292 [2024-10-09 11:18:50.114002] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:30.292 [2024-10-09 11:18:50.114009] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:30.292 [2024-10-09 11:18:50.114022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:30.292 qpair failed and we were unable to recover it. 00:38:30.292 [2024-10-09 11:18:50.123934] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:30.292 [2024-10-09 11:18:50.123987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:30.292 [2024-10-09 11:18:50.124004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:30.292 [2024-10-09 11:18:50.124011] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:30.292 [2024-10-09 11:18:50.124018] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:30.292 [2024-10-09 11:18:50.124031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:30.292 qpair failed and we were unable to recover it. 00:38:30.292 [2024-10-09 11:18:50.133941] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:30.292 [2024-10-09 11:18:50.134019] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:30.292 [2024-10-09 11:18:50.134032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:30.292 [2024-10-09 11:18:50.134039] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:30.292 [2024-10-09 11:18:50.134048] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:30.292 [2024-10-09 11:18:50.134061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:30.292 qpair failed and we were unable to recover it. 
00:38:30.292 [2024-10-09 11:18:50.143944] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:30.292 [2024-10-09 11:18:50.143995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:30.292 [2024-10-09 11:18:50.144009] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:30.292 [2024-10-09 11:18:50.144016] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:30.292 [2024-10-09 11:18:50.144023] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:30.292 [2024-10-09 11:18:50.144037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:30.292 qpair failed and we were unable to recover it. 00:38:30.292 [2024-10-09 11:18:50.153980] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:30.292 [2024-10-09 11:18:50.154061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:30.292 [2024-10-09 11:18:50.154074] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:30.292 [2024-10-09 11:18:50.154081] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:30.292 [2024-10-09 11:18:50.154089] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:30.292 [2024-10-09 11:18:50.154102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:30.292 qpair failed and we were unable to recover it. 00:38:30.292 [2024-10-09 11:18:50.163974] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:30.292 [2024-10-09 11:18:50.164077] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:30.292 [2024-10-09 11:18:50.164092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:30.292 [2024-10-09 11:18:50.164099] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:30.292 [2024-10-09 11:18:50.164107] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:30.292 [2024-10-09 11:18:50.164125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:30.292 qpair failed and we were unable to recover it. 
00:38:30.292 [2024-10-09 11:18:50.173935] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:30.292 [2024-10-09 11:18:50.173990] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:30.292 [2024-10-09 11:18:50.174003] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:30.292 [2024-10-09 11:18:50.174010] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:30.292 [2024-10-09 11:18:50.174017] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:30.292 [2024-10-09 11:18:50.174030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:30.292 qpair failed and we were unable to recover it. 00:38:30.292 [2024-10-09 11:18:50.183977] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:30.292 [2024-10-09 11:18:50.184052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:30.292 [2024-10-09 11:18:50.184065] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:30.292 [2024-10-09 11:18:50.184072] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:30.292 [2024-10-09 11:18:50.184079] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:30.292 [2024-10-09 11:18:50.184092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:30.292 qpair failed and we were unable to recover it. 00:38:30.292 [2024-10-09 11:18:50.193978] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:30.292 [2024-10-09 11:18:50.194032] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:30.292 [2024-10-09 11:18:50.194046] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:30.292 [2024-10-09 11:18:50.194053] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:30.292 [2024-10-09 11:18:50.194060] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:30.292 [2024-10-09 11:18:50.194073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:30.292 qpair failed and we were unable to recover it. 
00:38:30.292 [2024-10-09 11:18:50.203853] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:30.292 [2024-10-09 11:18:50.203910] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:30.292 [2024-10-09 11:18:50.203924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:30.292 [2024-10-09 11:18:50.203932] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:30.292 [2024-10-09 11:18:50.203938] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:30.292 [2024-10-09 11:18:50.203952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:30.292 qpair failed and we were unable to recover it.
00:38:30.292 [2024-10-09 11:18:50.213980] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:30.292 [2024-10-09 11:18:50.214035] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:30.292 [2024-10-09 11:18:50.214052] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:30.292 [2024-10-09 11:18:50.214059] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:30.292 [2024-10-09 11:18:50.214066] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:30.292 [2024-10-09 11:18:50.214079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:30.292 qpair failed and we were unable to recover it.
00:38:30.292 [2024-10-09 11:18:50.223971] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:30.292 [2024-10-09 11:18:50.224024] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:30.292 [2024-10-09 11:18:50.224037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:30.292 [2024-10-09 11:18:50.224045] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:30.292 [2024-10-09 11:18:50.224051] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:30.292 [2024-10-09 11:18:50.224064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:30.292 qpair failed and we were unable to recover it.
00:38:30.292 [2024-10-09 11:18:50.233968] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:30.293 [2024-10-09 11:18:50.234019] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:30.293 [2024-10-09 11:18:50.234032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:30.293 [2024-10-09 11:18:50.234039] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:30.293 [2024-10-09 11:18:50.234046] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:30.293 [2024-10-09 11:18:50.234059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:30.293 qpair failed and we were unable to recover it.
00:38:30.293 [2024-10-09 11:18:50.243993] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:30.293 [2024-10-09 11:18:50.244049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:30.293 [2024-10-09 11:18:50.244063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:30.293 [2024-10-09 11:18:50.244070] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:30.293 [2024-10-09 11:18:50.244076] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:30.293 [2024-10-09 11:18:50.244089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:30.293 qpair failed and we were unable to recover it.
00:38:30.293 [2024-10-09 11:18:50.253984] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:30.293 [2024-10-09 11:18:50.254076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:30.293 [2024-10-09 11:18:50.254090] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:30.293 [2024-10-09 11:18:50.254097] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:30.293 [2024-10-09 11:18:50.254104] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:30.293 [2024-10-09 11:18:50.254121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:30.293 qpair failed and we were unable to recover it.
00:38:30.293 [2024-10-09 11:18:50.263897] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:30.293 [2024-10-09 11:18:50.263951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:30.293 [2024-10-09 11:18:50.263965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:30.293 [2024-10-09 11:18:50.263973] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:30.293 [2024-10-09 11:18:50.263979] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:30.293 [2024-10-09 11:18:50.263992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:30.293 qpair failed and we were unable to recover it.
00:38:30.293 [2024-10-09 11:18:50.273996] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:30.293 [2024-10-09 11:18:50.274052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:30.293 [2024-10-09 11:18:50.274065] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:30.293 [2024-10-09 11:18:50.274072] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:30.293 [2024-10-09 11:18:50.274079] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:30.293 [2024-10-09 11:18:50.274092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:30.293 qpair failed and we were unable to recover it.
00:38:30.293 [2024-10-09 11:18:50.284002] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:30.293 [2024-10-09 11:18:50.284058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:30.293 [2024-10-09 11:18:50.284072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:30.293 [2024-10-09 11:18:50.284079] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:30.293 [2024-10-09 11:18:50.284086] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:30.293 [2024-10-09 11:18:50.284099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:30.293 qpair failed and we were unable to recover it.
00:38:30.555 [2024-10-09 11:18:50.293978] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:30.555 [2024-10-09 11:18:50.294040] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:30.555 [2024-10-09 11:18:50.294054] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:30.555 [2024-10-09 11:18:50.294061] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:30.555 [2024-10-09 11:18:50.294068] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:30.555 [2024-10-09 11:18:50.294081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:30.555 qpair failed and we were unable to recover it.
00:38:30.555 [2024-10-09 11:18:50.304002] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:30.555 [2024-10-09 11:18:50.304059] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:30.555 [2024-10-09 11:18:50.304076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:30.555 [2024-10-09 11:18:50.304083] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:30.555 [2024-10-09 11:18:50.304091] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:30.555 [2024-10-09 11:18:50.304105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:30.555 qpair failed and we were unable to recover it.
00:38:30.555 [2024-10-09 11:18:50.314021] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:30.555 [2024-10-09 11:18:50.314113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:30.555 [2024-10-09 11:18:50.314127] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:30.555 [2024-10-09 11:18:50.314134] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:30.555 [2024-10-09 11:18:50.314141] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:30.555 [2024-10-09 11:18:50.314154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:30.555 qpair failed and we were unable to recover it.
00:38:30.555 [2024-10-09 11:18:50.324096] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:30.555 [2024-10-09 11:18:50.324196] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:30.555 [2024-10-09 11:18:50.324210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:30.555 [2024-10-09 11:18:50.324217] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:30.555 [2024-10-09 11:18:50.324224] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:30.555 [2024-10-09 11:18:50.324238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:30.555 qpair failed and we were unable to recover it.
00:38:30.555 [2024-10-09 11:18:50.333898] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:30.555 [2024-10-09 11:18:50.333962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:30.556 [2024-10-09 11:18:50.333976] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:30.556 [2024-10-09 11:18:50.333983] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:30.556 [2024-10-09 11:18:50.333989] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:30.556 [2024-10-09 11:18:50.334003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:30.556 qpair failed and we were unable to recover it.
00:38:30.556 [2024-10-09 11:18:50.344001] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:30.556 [2024-10-09 11:18:50.344054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:30.556 [2024-10-09 11:18:50.344072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:30.556 [2024-10-09 11:18:50.344080] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:30.556 [2024-10-09 11:18:50.344090] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:30.556 [2024-10-09 11:18:50.344105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:30.556 qpair failed and we were unable to recover it.
00:38:30.556 [2024-10-09 11:18:50.354002] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:30.556 [2024-10-09 11:18:50.354087] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:30.556 [2024-10-09 11:18:50.354101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:30.556 [2024-10-09 11:18:50.354108] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:30.556 [2024-10-09 11:18:50.354115] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:30.556 [2024-10-09 11:18:50.354129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:30.556 qpair failed and we were unable to recover it.
00:38:30.556 [2024-10-09 11:18:50.364036] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:30.556 [2024-10-09 11:18:50.364096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:30.556 [2024-10-09 11:18:50.364110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:30.556 [2024-10-09 11:18:50.364117] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:30.556 [2024-10-09 11:18:50.364124] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:30.556 [2024-10-09 11:18:50.364138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:30.556 qpair failed and we were unable to recover it.
00:38:30.556 [2024-10-09 11:18:50.373995] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:30.556 [2024-10-09 11:18:50.374048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:30.556 [2024-10-09 11:18:50.374061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:30.556 [2024-10-09 11:18:50.374068] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:30.556 [2024-10-09 11:18:50.374075] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:30.556 [2024-10-09 11:18:50.374088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:30.556 qpair failed and we were unable to recover it.
00:38:30.556 [2024-10-09 11:18:50.384030] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:30.556 [2024-10-09 11:18:50.384082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:30.556 [2024-10-09 11:18:50.384096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:30.556 [2024-10-09 11:18:50.384103] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:30.556 [2024-10-09 11:18:50.384110] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:30.556 [2024-10-09 11:18:50.384123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:30.556 qpair failed and we were unable to recover it.
00:38:30.556 [2024-10-09 11:18:50.393997] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:30.556 [2024-10-09 11:18:50.394055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:30.556 [2024-10-09 11:18:50.394069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:30.556 [2024-10-09 11:18:50.394076] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:30.556 [2024-10-09 11:18:50.394083] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:30.556 [2024-10-09 11:18:50.394096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:30.556 qpair failed and we were unable to recover it.
00:38:30.556 [2024-10-09 11:18:50.404080] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:30.556 [2024-10-09 11:18:50.404144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:30.556 [2024-10-09 11:18:50.404159] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:30.556 [2024-10-09 11:18:50.404166] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:30.556 [2024-10-09 11:18:50.404173] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:30.556 [2024-10-09 11:18:50.404186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:30.556 qpair failed and we were unable to recover it.
00:38:30.556 [2024-10-09 11:18:50.413942] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:30.556 [2024-10-09 11:18:50.413996] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:30.556 [2024-10-09 11:18:50.414012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:30.556 [2024-10-09 11:18:50.414019] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:30.556 [2024-10-09 11:18:50.414026] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:30.556 [2024-10-09 11:18:50.414040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:30.556 qpair failed and we were unable to recover it.
00:38:30.556 [2024-10-09 11:18:50.424025] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:30.556 [2024-10-09 11:18:50.424117] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:30.556 [2024-10-09 11:18:50.424131] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:30.556 [2024-10-09 11:18:50.424138] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:30.556 [2024-10-09 11:18:50.424145] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:30.556 [2024-10-09 11:18:50.424159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:30.556 qpair failed and we were unable to recover it.
00:38:30.556 [2024-10-09 11:18:50.434046] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:30.556 [2024-10-09 11:18:50.434095] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:30.556 [2024-10-09 11:18:50.434108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:30.556 [2024-10-09 11:18:50.434115] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:30.556 [2024-10-09 11:18:50.434126] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:30.556 [2024-10-09 11:18:50.434139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:30.556 qpair failed and we were unable to recover it.
00:38:30.556 [2024-10-09 11:18:50.444085] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:30.556 [2024-10-09 11:18:50.444143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:30.556 [2024-10-09 11:18:50.444156] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:30.556 [2024-10-09 11:18:50.444163] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:30.556 [2024-10-09 11:18:50.444170] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:30.556 [2024-10-09 11:18:50.444183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:30.556 qpair failed and we were unable to recover it.
00:38:30.556 [2024-10-09 11:18:50.454062] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:30.556 [2024-10-09 11:18:50.454117] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:30.556 [2024-10-09 11:18:50.454130] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:30.556 [2024-10-09 11:18:50.454137] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:30.556 [2024-10-09 11:18:50.454144] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:30.556 [2024-10-09 11:18:50.454157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:30.556 qpair failed and we were unable to recover it.
00:38:30.556 [2024-10-09 11:18:50.464070] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:30.556 [2024-10-09 11:18:50.464123] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:30.556 [2024-10-09 11:18:50.464139] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:30.556 [2024-10-09 11:18:50.464146] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:30.556 [2024-10-09 11:18:50.464152] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:30.556 [2024-10-09 11:18:50.464166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:30.556 qpair failed and we were unable to recover it.
00:38:30.556 [2024-10-09 11:18:50.474209] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:30.556 [2024-10-09 11:18:50.474262] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:30.556 [2024-10-09 11:18:50.474275] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:30.556 [2024-10-09 11:18:50.474282] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:30.556 [2024-10-09 11:18:50.474289] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:30.557 [2024-10-09 11:18:50.474302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:30.557 qpair failed and we were unable to recover it.
00:38:30.557 [2024-10-09 11:18:50.484088] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:30.557 [2024-10-09 11:18:50.484153] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:30.557 [2024-10-09 11:18:50.484178] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:30.557 [2024-10-09 11:18:50.484187] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:30.557 [2024-10-09 11:18:50.484195] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:30.557 [2024-10-09 11:18:50.484214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:30.557 qpair failed and we were unable to recover it.
00:38:30.557 [2024-10-09 11:18:50.494059] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:30.557 [2024-10-09 11:18:50.494117] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:30.557 [2024-10-09 11:18:50.494142] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:30.557 [2024-10-09 11:18:50.494151] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:30.557 [2024-10-09 11:18:50.494158] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:30.557 [2024-10-09 11:18:50.494176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:30.557 qpair failed and we were unable to recover it.
00:38:30.557 [2024-10-09 11:18:50.503995] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:30.557 [2024-10-09 11:18:50.504095] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:30.557 [2024-10-09 11:18:50.504110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:30.557 [2024-10-09 11:18:50.504117] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:30.557 [2024-10-09 11:18:50.504124] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:30.557 [2024-10-09 11:18:50.504138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:30.557 qpair failed and we were unable to recover it.
00:38:30.557 [2024-10-09 11:18:50.514080] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:30.557 [2024-10-09 11:18:50.514132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:30.557 [2024-10-09 11:18:50.514148] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:30.557 [2024-10-09 11:18:50.514155] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:30.557 [2024-10-09 11:18:50.514162] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:30.557 [2024-10-09 11:18:50.514177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:30.557 qpair failed and we were unable to recover it.
00:38:30.557 [2024-10-09 11:18:50.524084] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:30.557 [2024-10-09 11:18:50.524151] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:30.557 [2024-10-09 11:18:50.524165] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:30.557 [2024-10-09 11:18:50.524172] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:30.557 [2024-10-09 11:18:50.524183] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:30.557 [2024-10-09 11:18:50.524196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:30.557 qpair failed and we were unable to recover it.
00:38:30.557 [2024-10-09 11:18:50.534080] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:30.557 [2024-10-09 11:18:50.534138] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:30.557 [2024-10-09 11:18:50.534163] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:30.557 [2024-10-09 11:18:50.534171] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:30.557 [2024-10-09 11:18:50.534179] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:30.557 [2024-10-09 11:18:50.534197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:30.557 qpair failed and we were unable to recover it.
00:38:30.557 [2024-10-09 11:18:50.544086] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:30.557 [2024-10-09 11:18:50.544150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:30.557 [2024-10-09 11:18:50.544165] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:30.557 [2024-10-09 11:18:50.544172] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:30.557 [2024-10-09 11:18:50.544179] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:30.557 [2024-10-09 11:18:50.544193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:30.557 qpair failed and we were unable to recover it.
00:38:30.557 [2024-10-09 11:18:50.554113] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:30.557 [2024-10-09 11:18:50.554168] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:30.557 [2024-10-09 11:18:50.554185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:30.557 [2024-10-09 11:18:50.554192] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:30.557 [2024-10-09 11:18:50.554199] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:30.557 [2024-10-09 11:18:50.554214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:30.557 qpair failed and we were unable to recover it.
00:38:30.819 [2024-10-09 11:18:50.564100] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:30.819 [2024-10-09 11:18:50.564166] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:30.819 [2024-10-09 11:18:50.564191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:30.819 [2024-10-09 11:18:50.564200] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:30.819 [2024-10-09 11:18:50.564207] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:30.819 [2024-10-09 11:18:50.564225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:30.819 qpair failed and we were unable to recover it.
00:38:30.819 [2024-10-09 11:18:50.573986] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:30.819 [2024-10-09 11:18:50.574051] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:30.819 [2024-10-09 11:18:50.574067] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:30.819 [2024-10-09 11:18:50.574074] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:30.819 [2024-10-09 11:18:50.574081] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:30.819 [2024-10-09 11:18:50.574095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:30.819 qpair failed and we were unable to recover it.
00:38:30.819 [2024-10-09 11:18:50.584120] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:30.819 [2024-10-09 11:18:50.584179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:30.819 [2024-10-09 11:18:50.584193] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:30.819 [2024-10-09 11:18:50.584200] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:30.819 [2024-10-09 11:18:50.584207] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:30.819 [2024-10-09 11:18:50.584221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:30.819 qpair failed and we were unable to recover it.
00:38:30.819 [2024-10-09 11:18:50.594126] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:30.819 [2024-10-09 11:18:50.594218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:30.819 [2024-10-09 11:18:50.594244] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:30.819 [2024-10-09 11:18:50.594252] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:30.819 [2024-10-09 11:18:50.594259] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:30.819 [2024-10-09 11:18:50.594278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:30.819 qpair failed and we were unable to recover it.
00:38:30.819 [2024-10-09 11:18:50.604185] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:30.819 [2024-10-09 11:18:50.604264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:30.819 [2024-10-09 11:18:50.604290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:30.819 [2024-10-09 11:18:50.604299] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:30.819 [2024-10-09 11:18:50.604306] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:30.819 [2024-10-09 11:18:50.604325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:30.819 qpair failed and we were unable to recover it.
00:38:30.819 [2024-10-09 11:18:50.614111] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:30.819 [2024-10-09 11:18:50.614161] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:30.819 [2024-10-09 11:18:50.614178] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:30.819 [2024-10-09 11:18:50.614186] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:30.819 [2024-10-09 11:18:50.614198] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:30.819 [2024-10-09 11:18:50.614215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:30.819 qpair failed and we were unable to recover it.
00:38:30.819 [2024-10-09 11:18:50.624102] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:30.819 [2024-10-09 11:18:50.624170] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:30.819 [2024-10-09 11:18:50.624185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:30.819 [2024-10-09 11:18:50.624192] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:30.819 [2024-10-09 11:18:50.624198] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:30.819 [2024-10-09 11:18:50.624212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:30.819 qpair failed and we were unable to recover it.
00:38:30.819 [2024-10-09 11:18:50.634158] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:30.819 [2024-10-09 11:18:50.634255] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:30.819 [2024-10-09 11:18:50.634269] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:30.819 [2024-10-09 11:18:50.634276] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:30.819 [2024-10-09 11:18:50.634282] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:30.819 [2024-10-09 11:18:50.634297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:30.819 qpair failed and we were unable to recover it.
00:38:30.819 [2024-10-09 11:18:50.644069] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:30.819 [2024-10-09 11:18:50.644122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:30.819 [2024-10-09 11:18:50.644135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:30.819 [2024-10-09 11:18:50.644142] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:30.819 [2024-10-09 11:18:50.644149] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:30.819 [2024-10-09 11:18:50.644162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:30.819 qpair failed and we were unable to recover it.
00:38:30.819 [2024-10-09 11:18:50.654161] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:30.819 [2024-10-09 11:18:50.654214] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:30.819 [2024-10-09 11:18:50.654228] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:30.819 [2024-10-09 11:18:50.654235] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:30.819 [2024-10-09 11:18:50.654242] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:30.819 [2024-10-09 11:18:50.654255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:30.819 qpair failed and we were unable to recover it.
00:38:30.819 [2024-10-09 11:18:50.664161] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:30.819 [2024-10-09 11:18:50.664220] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:30.819 [2024-10-09 11:18:50.664246] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:30.819 [2024-10-09 11:18:50.664255] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:30.819 [2024-10-09 11:18:50.664262] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:30.819 [2024-10-09 11:18:50.664280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:30.819 qpair failed and we were unable to recover it.
00:38:30.819 [2024-10-09 11:18:50.674165] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:30.819 [2024-10-09 11:18:50.674223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:30.819 [2024-10-09 11:18:50.674238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:30.819 [2024-10-09 11:18:50.674246] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:30.819 [2024-10-09 11:18:50.674252] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:30.819 [2024-10-09 11:18:50.674267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:30.819 qpair failed and we were unable to recover it.
00:38:30.819 [2024-10-09 11:18:50.684170] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:30.820 [2024-10-09 11:18:50.684223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:30.820 [2024-10-09 11:18:50.684237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:30.820 [2024-10-09 11:18:50.684244] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:30.820 [2024-10-09 11:18:50.684251] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:30.820 [2024-10-09 11:18:50.684265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:30.820 qpair failed and we were unable to recover it.
00:38:30.820 [2024-10-09 11:18:50.694042] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:30.820 [2024-10-09 11:18:50.694097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:30.820 [2024-10-09 11:18:50.694111] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:30.820 [2024-10-09 11:18:50.694118] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:30.820 [2024-10-09 11:18:50.694125] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:30.820 [2024-10-09 11:18:50.694138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:30.820 qpair failed and we were unable to recover it.
00:38:30.820 [2024-10-09 11:18:50.704138] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:30.820 [2024-10-09 11:18:50.704186] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:30.820 [2024-10-09 11:18:50.704200] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:30.820 [2024-10-09 11:18:50.704211] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:30.820 [2024-10-09 11:18:50.704218] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:30.820 [2024-10-09 11:18:50.704232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:30.820 qpair failed and we were unable to recover it.
00:38:30.820 [2024-10-09 11:18:50.714199] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:30.820 [2024-10-09 11:18:50.714266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:30.820 [2024-10-09 11:18:50.714291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:30.820 [2024-10-09 11:18:50.714300] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:30.820 [2024-10-09 11:18:50.714308] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:30.820 [2024-10-09 11:18:50.714327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:30.820 qpair failed and we were unable to recover it.
00:38:30.820 [2024-10-09 11:18:50.724204] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:30.820 [2024-10-09 11:18:50.724261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:30.820 [2024-10-09 11:18:50.724276] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:30.820 [2024-10-09 11:18:50.724284] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:30.820 [2024-10-09 11:18:50.724291] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:30.820 [2024-10-09 11:18:50.724305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:30.820 qpair failed and we were unable to recover it.
00:38:30.820 [2024-10-09 11:18:50.734154] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:30.820 [2024-10-09 11:18:50.734215] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:30.820 [2024-10-09 11:18:50.734228] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:30.820 [2024-10-09 11:18:50.734236] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:30.820 [2024-10-09 11:18:50.734242] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:30.820 [2024-10-09 11:18:50.734256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:30.820 qpair failed and we were unable to recover it.
00:38:30.820 [2024-10-09 11:18:50.744040] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:30.820 [2024-10-09 11:18:50.744093] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:30.820 [2024-10-09 11:18:50.744106] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:30.820 [2024-10-09 11:18:50.744113] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:30.820 [2024-10-09 11:18:50.744120] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:30.820 [2024-10-09 11:18:50.744133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:30.820 qpair failed and we were unable to recover it.
00:38:30.820 [2024-10-09 11:18:50.754222] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:30.820 [2024-10-09 11:18:50.754279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:30.820 [2024-10-09 11:18:50.754293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:30.820 [2024-10-09 11:18:50.754300] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:30.820 [2024-10-09 11:18:50.754307] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:30.820 [2024-10-09 11:18:50.754321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:30.820 qpair failed and we were unable to recover it.
00:38:30.820 [2024-10-09 11:18:50.764222] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:30.820 [2024-10-09 11:18:50.764285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:30.820 [2024-10-09 11:18:50.764300] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:30.820 [2024-10-09 11:18:50.764307] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:30.820 [2024-10-09 11:18:50.764313] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:30.820 [2024-10-09 11:18:50.764327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:30.820 qpair failed and we were unable to recover it.
00:38:30.820 [2024-10-09 11:18:50.774191] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:30.820 [2024-10-09 11:18:50.774244] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:30.820 [2024-10-09 11:18:50.774258] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:30.820 [2024-10-09 11:18:50.774265] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:30.820 [2024-10-09 11:18:50.774272] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:30.820 [2024-10-09 11:18:50.774285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:30.820 qpair failed and we were unable to recover it. 00:38:30.820 [2024-10-09 11:18:50.784162] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:30.820 [2024-10-09 11:18:50.784209] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:30.820 [2024-10-09 11:18:50.784223] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:30.820 [2024-10-09 11:18:50.784230] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:30.820 [2024-10-09 11:18:50.784237] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:30.820 [2024-10-09 11:18:50.784250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:30.820 qpair failed and we were unable to recover it. 00:38:30.820 [2024-10-09 11:18:50.794222] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:30.820 [2024-10-09 11:18:50.794278] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:30.820 [2024-10-09 11:18:50.794292] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:30.820 [2024-10-09 11:18:50.794302] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:30.820 [2024-10-09 11:18:50.794309] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:30.820 [2024-10-09 11:18:50.794323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:30.820 qpair failed and we were unable to recover it. 
00:38:30.820 [2024-10-09 11:18:50.804239] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:30.820 [2024-10-09 11:18:50.804294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:30.820 [2024-10-09 11:18:50.804307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:30.820 [2024-10-09 11:18:50.804314] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:30.820 [2024-10-09 11:18:50.804321] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:30.820 [2024-10-09 11:18:50.804335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:30.820 qpair failed and we were unable to recover it. 00:38:30.820 [2024-10-09 11:18:50.814226] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:30.820 [2024-10-09 11:18:50.814318] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:30.820 [2024-10-09 11:18:50.814333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:30.820 [2024-10-09 11:18:50.814340] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:30.820 [2024-10-09 11:18:50.814347] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:30.820 [2024-10-09 11:18:50.814361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:30.820 qpair failed and we were unable to recover it. 00:38:31.083 [2024-10-09 11:18:50.824176] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:31.083 [2024-10-09 11:18:50.824227] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:31.083 [2024-10-09 11:18:50.824242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:31.083 [2024-10-09 11:18:50.824249] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:31.083 [2024-10-09 11:18:50.824256] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:31.083 [2024-10-09 11:18:50.824270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:31.083 qpair failed and we were unable to recover it. 
00:38:31.083 [2024-10-09 11:18:50.834238] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:31.083 [2024-10-09 11:18:50.834294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:31.083 [2024-10-09 11:18:50.834307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:31.083 [2024-10-09 11:18:50.834315] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:31.083 [2024-10-09 11:18:50.834322] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:31.083 [2024-10-09 11:18:50.834336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:31.083 qpair failed and we were unable to recover it. 00:38:31.083 [2024-10-09 11:18:50.844306] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:31.083 [2024-10-09 11:18:50.844373] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:31.083 [2024-10-09 11:18:50.844387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:31.083 [2024-10-09 11:18:50.844394] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:31.083 [2024-10-09 11:18:50.844400] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:31.083 [2024-10-09 11:18:50.844414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:31.083 qpair failed and we were unable to recover it. 00:38:31.083 [2024-10-09 11:18:50.854246] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:31.083 [2024-10-09 11:18:50.854350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:31.083 [2024-10-09 11:18:50.854364] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:31.083 [2024-10-09 11:18:50.854371] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:31.083 [2024-10-09 11:18:50.854378] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:31.083 [2024-10-09 11:18:50.854391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:31.083 qpair failed and we were unable to recover it. 
00:38:31.083 [2024-10-09 11:18:50.864186] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:31.083 [2024-10-09 11:18:50.864232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:31.083 [2024-10-09 11:18:50.864246] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:31.083 [2024-10-09 11:18:50.864253] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:31.083 [2024-10-09 11:18:50.864260] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:31.083 [2024-10-09 11:18:50.864273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:31.083 qpair failed and we were unable to recover it. 00:38:31.083 [2024-10-09 11:18:50.874133] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:31.083 [2024-10-09 11:18:50.874210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:31.083 [2024-10-09 11:18:50.874224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:31.083 [2024-10-09 11:18:50.874231] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:31.083 [2024-10-09 11:18:50.874238] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:31.083 [2024-10-09 11:18:50.874252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:31.083 qpair failed and we were unable to recover it. 00:38:31.083 [2024-10-09 11:18:50.884246] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:31.083 [2024-10-09 11:18:50.884316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:31.083 [2024-10-09 11:18:50.884330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:31.083 [2024-10-09 11:18:50.884346] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:31.083 [2024-10-09 11:18:50.884352] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:31.083 [2024-10-09 11:18:50.884365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:31.083 qpair failed and we were unable to recover it. 
00:38:31.083 [2024-10-09 11:18:50.894237] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:31.083 [2024-10-09 11:18:50.894300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:31.083 [2024-10-09 11:18:50.894313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:31.083 [2024-10-09 11:18:50.894320] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:31.083 [2024-10-09 11:18:50.894327] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:31.083 [2024-10-09 11:18:50.894340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:31.084 qpair failed and we were unable to recover it. 00:38:31.084 [2024-10-09 11:18:50.904237] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:31.084 [2024-10-09 11:18:50.904333] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:31.084 [2024-10-09 11:18:50.904347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:31.084 [2024-10-09 11:18:50.904354] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:31.084 [2024-10-09 11:18:50.904361] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:31.084 [2024-10-09 11:18:50.904375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:31.084 qpair failed and we were unable to recover it. 00:38:31.084 [2024-10-09 11:18:50.914292] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:31.084 [2024-10-09 11:18:50.914349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:31.084 [2024-10-09 11:18:50.914364] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:31.084 [2024-10-09 11:18:50.914371] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:31.084 [2024-10-09 11:18:50.914377] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:31.084 [2024-10-09 11:18:50.914391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:31.084 qpair failed and we were unable to recover it. 
00:38:31.084 [2024-10-09 11:18:50.924166] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:31.084 [2024-10-09 11:18:50.924226] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:31.084 [2024-10-09 11:18:50.924239] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:31.084 [2024-10-09 11:18:50.924247] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:31.084 [2024-10-09 11:18:50.924254] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:31.084 [2024-10-09 11:18:50.924267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:31.084 qpair failed and we were unable to recover it. 00:38:31.084 [2024-10-09 11:18:50.934286] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:31.084 [2024-10-09 11:18:50.934340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:31.084 [2024-10-09 11:18:50.934353] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:31.084 [2024-10-09 11:18:50.934360] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:31.084 [2024-10-09 11:18:50.934367] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:31.084 [2024-10-09 11:18:50.934380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:31.084 qpair failed and we were unable to recover it. 00:38:31.084 [2024-10-09 11:18:50.944242] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:31.084 [2024-10-09 11:18:50.944295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:31.084 [2024-10-09 11:18:50.944309] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:31.084 [2024-10-09 11:18:50.944316] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:31.084 [2024-10-09 11:18:50.944323] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:31.084 [2024-10-09 11:18:50.944335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:31.084 qpair failed and we were unable to recover it. 
00:38:31.084 [2024-10-09 11:18:50.954301] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:31.084 [2024-10-09 11:18:50.954361] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:31.084 [2024-10-09 11:18:50.954374] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:31.084 [2024-10-09 11:18:50.954381] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:31.084 [2024-10-09 11:18:50.954388] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:31.084 [2024-10-09 11:18:50.954401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:31.084 qpair failed and we were unable to recover it. 00:38:31.084 [2024-10-09 11:18:50.964313] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:31.084 [2024-10-09 11:18:50.964367] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:31.084 [2024-10-09 11:18:50.964381] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:31.084 [2024-10-09 11:18:50.964388] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:31.084 [2024-10-09 11:18:50.964395] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:31.084 [2024-10-09 11:18:50.964408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:31.084 qpair failed and we were unable to recover it. 00:38:31.084 [2024-10-09 11:18:50.974287] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:31.084 [2024-10-09 11:18:50.974341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:31.084 [2024-10-09 11:18:50.974355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:31.084 [2024-10-09 11:18:50.974366] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:31.084 [2024-10-09 11:18:50.974373] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:31.084 [2024-10-09 11:18:50.974386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:31.084 qpair failed and we were unable to recover it. 
00:38:31.084 [2024-10-09 11:18:50.984191] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:31.084 [2024-10-09 11:18:50.984292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:31.084 [2024-10-09 11:18:50.984308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:31.084 [2024-10-09 11:18:50.984316] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:31.084 [2024-10-09 11:18:50.984322] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:31.084 [2024-10-09 11:18:50.984337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:31.084 qpair failed and we were unable to recover it. 00:38:31.084 [2024-10-09 11:18:50.994327] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:31.084 [2024-10-09 11:18:50.994382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:31.084 [2024-10-09 11:18:50.994396] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:31.084 [2024-10-09 11:18:50.994404] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:31.084 [2024-10-09 11:18:50.994411] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:31.084 [2024-10-09 11:18:50.994424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:31.084 qpair failed and we were unable to recover it. 00:38:31.084 [2024-10-09 11:18:51.004195] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:31.084 [2024-10-09 11:18:51.004249] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:31.084 [2024-10-09 11:18:51.004263] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:31.084 [2024-10-09 11:18:51.004271] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:31.084 [2024-10-09 11:18:51.004277] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:31.084 [2024-10-09 11:18:51.004291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:31.084 qpair failed and we were unable to recover it. 
00:38:31.084 [2024-10-09 11:18:51.014274] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:31.084 [2024-10-09 11:18:51.014328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:31.084 [2024-10-09 11:18:51.014342] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:31.084 [2024-10-09 11:18:51.014350] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:31.084 [2024-10-09 11:18:51.014357] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:31.084 [2024-10-09 11:18:51.014370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:31.084 qpair failed and we were unable to recover it. 00:38:31.084 [2024-10-09 11:18:51.024275] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:31.084 [2024-10-09 11:18:51.024327] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:31.084 [2024-10-09 11:18:51.024341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:31.084 [2024-10-09 11:18:51.024349] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:31.084 [2024-10-09 11:18:51.024356] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:31.084 [2024-10-09 11:18:51.024370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:31.084 qpair failed and we were unable to recover it. 00:38:31.084 [2024-10-09 11:18:51.034319] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:31.084 [2024-10-09 11:18:51.034378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:31.084 [2024-10-09 11:18:51.034392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:31.084 [2024-10-09 11:18:51.034399] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:31.084 [2024-10-09 11:18:51.034406] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:31.084 [2024-10-09 11:18:51.034420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:31.084 qpair failed and we were unable to recover it. 
00:38:31.084 [2024-10-09 11:18:51.044339] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:31.085 [2024-10-09 11:18:51.044401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:31.085 [2024-10-09 11:18:51.044417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:31.085 [2024-10-09 11:18:51.044424] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:31.085 [2024-10-09 11:18:51.044431] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:31.085 [2024-10-09 11:18:51.044449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:31.085 qpair failed and we were unable to recover it. 00:38:31.085 [2024-10-09 11:18:51.054335] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:31.085 [2024-10-09 11:18:51.054394] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:31.085 [2024-10-09 11:18:51.054408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:31.085 [2024-10-09 11:18:51.054415] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:31.085 [2024-10-09 11:18:51.054422] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:31.085 [2024-10-09 11:18:51.054436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:31.085 qpair failed and we were unable to recover it. 00:38:31.085 [2024-10-09 11:18:51.064285] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:31.085 [2024-10-09 11:18:51.064329] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:31.085 [2024-10-09 11:18:51.064346] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:31.085 [2024-10-09 11:18:51.064354] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:31.085 [2024-10-09 11:18:51.064361] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:31.085 [2024-10-09 11:18:51.064375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:31.085 qpair failed and we were unable to recover it. 
00:38:31.085 [2024-10-09 11:18:51.074334] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:31.085 [2024-10-09 11:18:51.074390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:31.085 [2024-10-09 11:18:51.074404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:31.085 [2024-10-09 11:18:51.074411] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:31.085 [2024-10-09 11:18:51.074418] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:31.085 [2024-10-09 11:18:51.074431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:31.085 qpair failed and we were unable to recover it. 00:38:31.348 [2024-10-09 11:18:51.084351] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:31.348 [2024-10-09 11:18:51.084406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:31.348 [2024-10-09 11:18:51.084420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:31.348 [2024-10-09 11:18:51.084427] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:31.348 [2024-10-09 11:18:51.084434] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:31.348 [2024-10-09 11:18:51.084447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:31.348 qpair failed and we were unable to recover it. 00:38:31.348 [2024-10-09 11:18:51.094262] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:31.348 [2024-10-09 11:18:51.094314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:31.348 [2024-10-09 11:18:51.094327] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:31.348 [2024-10-09 11:18:51.094335] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:31.348 [2024-10-09 11:18:51.094342] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:31.348 [2024-10-09 11:18:51.094355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:31.348 qpair failed and we were unable to recover it. 
00:38:31.348 [2024-10-09 11:18:51.104290] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:31.348 [2024-10-09 11:18:51.104336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:31.348 [2024-10-09 11:18:51.104350] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:31.348 [2024-10-09 11:18:51.104358] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:31.348 [2024-10-09 11:18:51.104365] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:31.348 [2024-10-09 11:18:51.104379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:31.348 qpair failed and we were unable to recover it. 00:38:31.348 [2024-10-09 11:18:51.114233] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:31.348 [2024-10-09 11:18:51.114293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:31.348 [2024-10-09 11:18:51.114307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:31.348 [2024-10-09 11:18:51.114314] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:31.348 [2024-10-09 11:18:51.114321] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:31.348 [2024-10-09 11:18:51.114335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:31.348 qpair failed and we were unable to recover it. 00:38:31.348 [2024-10-09 11:18:51.124349] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:31.348 [2024-10-09 11:18:51.124402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:31.348 [2024-10-09 11:18:51.124415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:31.348 [2024-10-09 11:18:51.124422] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:31.348 [2024-10-09 11:18:51.124429] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:31.348 [2024-10-09 11:18:51.124442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:31.348 qpair failed and we were unable to recover it. 
00:38:31.348 [2024-10-09 11:18:51.134382] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:31.348 [2024-10-09 11:18:51.134447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:31.348 [2024-10-09 11:18:51.134460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:31.348 [2024-10-09 11:18:51.134472] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:31.348 [2024-10-09 11:18:51.134479] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:31.348 [2024-10-09 11:18:51.134493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:31.348 qpair failed and we were unable to recover it. 00:38:31.348 [2024-10-09 11:18:51.144335] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:31.348 [2024-10-09 11:18:51.144380] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:31.348 [2024-10-09 11:18:51.144394] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:31.348 [2024-10-09 11:18:51.144401] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:31.348 [2024-10-09 11:18:51.144408] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:31.348 [2024-10-09 11:18:51.144422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:31.348 qpair failed and we were unable to recover it. 00:38:31.348 [2024-10-09 11:18:51.154286] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:31.348 [2024-10-09 11:18:51.154382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:31.348 [2024-10-09 11:18:51.154399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:31.348 [2024-10-09 11:18:51.154407] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:31.348 [2024-10-09 11:18:51.154414] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:31.348 [2024-10-09 11:18:51.154427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:31.348 qpair failed and we were unable to recover it. 
00:38:31.348 [2024-10-09 11:18:51.164224] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:31.348 [2024-10-09 11:18:51.164276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:31.348 [2024-10-09 11:18:51.164291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:31.348 [2024-10-09 11:18:51.164298] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:31.348 [2024-10-09 11:18:51.164305] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:31.348 [2024-10-09 11:18:51.164319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:31.348 qpair failed and we were unable to recover it. 00:38:31.348 [2024-10-09 11:18:51.174346] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:31.348 [2024-10-09 11:18:51.174391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:31.348 [2024-10-09 11:18:51.174405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:31.348 [2024-10-09 11:18:51.174412] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:31.348 [2024-10-09 11:18:51.174419] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:31.348 [2024-10-09 11:18:51.174432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:31.348 qpair failed and we were unable to recover it. 00:38:31.348 [2024-10-09 11:18:51.184372] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:31.348 [2024-10-09 11:18:51.184420] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:31.348 [2024-10-09 11:18:51.184433] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:31.348 [2024-10-09 11:18:51.184441] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:31.348 [2024-10-09 11:18:51.184448] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:31.348 [2024-10-09 11:18:51.184461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:31.348 qpair failed and we were unable to recover it. 
00:38:31.349 [2024-10-09 11:18:51.194395] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:31.349 [2024-10-09 11:18:51.194471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:31.349 [2024-10-09 11:18:51.194485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:31.349 [2024-10-09 11:18:51.194492] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:31.349 [2024-10-09 11:18:51.194498] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:31.349 [2024-10-09 11:18:51.194516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:31.349 qpair failed and we were unable to recover it. 00:38:31.349 [2024-10-09 11:18:51.204365] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:31.349 [2024-10-09 11:18:51.204416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:31.349 [2024-10-09 11:18:51.204429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:31.349 [2024-10-09 11:18:51.204436] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:31.349 [2024-10-09 11:18:51.204443] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:31.349 [2024-10-09 11:18:51.204456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:31.349 qpair failed and we were unable to recover it. 00:38:31.349 [2024-10-09 11:18:51.214352] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:31.349 [2024-10-09 11:18:51.214401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:31.349 [2024-10-09 11:18:51.214415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:31.349 [2024-10-09 11:18:51.214422] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:31.349 [2024-10-09 11:18:51.214428] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:31.349 [2024-10-09 11:18:51.214442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:31.349 qpair failed and we were unable to recover it. 
00:38:31.349 [2024-10-09 11:18:51.224216] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:31.349 [2024-10-09 11:18:51.224267] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:31.349 [2024-10-09 11:18:51.224281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:31.349 [2024-10-09 11:18:51.224288] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:31.349 [2024-10-09 11:18:51.224295] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:31.349 [2024-10-09 11:18:51.224308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:31.349 qpair failed and we were unable to recover it. 00:38:31.349 [2024-10-09 11:18:51.234400] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:31.349 [2024-10-09 11:18:51.234454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:31.349 [2024-10-09 11:18:51.234472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:31.349 [2024-10-09 11:18:51.234480] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:31.349 [2024-10-09 11:18:51.234487] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:31.349 [2024-10-09 11:18:51.234501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:31.349 qpair failed and we were unable to recover it. 00:38:31.349 [2024-10-09 11:18:51.244380] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:31.349 [2024-10-09 11:18:51.244431] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:31.349 [2024-10-09 11:18:51.244447] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:31.349 [2024-10-09 11:18:51.244454] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:31.349 [2024-10-09 11:18:51.244461] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:31.349 [2024-10-09 11:18:51.244479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:31.349 qpair failed and we were unable to recover it. 
00:38:31.349 [2024-10-09 11:18:51.254350] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:31.349 [2024-10-09 11:18:51.254399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:31.349 [2024-10-09 11:18:51.254412] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:31.349 [2024-10-09 11:18:51.254419] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:31.349 [2024-10-09 11:18:51.254426] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:31.349 [2024-10-09 11:18:51.254440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:31.349 qpair failed and we were unable to recover it. 00:38:31.349 [2024-10-09 11:18:51.264354] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:31.349 [2024-10-09 11:18:51.264406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:31.349 [2024-10-09 11:18:51.264420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:31.349 [2024-10-09 11:18:51.264427] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:31.349 [2024-10-09 11:18:51.264434] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:31.349 [2024-10-09 11:18:51.264448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:31.349 qpair failed and we were unable to recover it. 00:38:31.349 [2024-10-09 11:18:51.274296] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:31.349 [2024-10-09 11:18:51.274356] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:31.349 [2024-10-09 11:18:51.274370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:31.349 [2024-10-09 11:18:51.274377] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:31.349 [2024-10-09 11:18:51.274384] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:31.349 [2024-10-09 11:18:51.274397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:31.349 qpair failed and we were unable to recover it. 
00:38:31.349 [2024-10-09 11:18:51.284396] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:31.349 [2024-10-09 11:18:51.284446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:31.349 [2024-10-09 11:18:51.284460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:31.349 [2024-10-09 11:18:51.284471] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:31.349 [2024-10-09 11:18:51.284478] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:31.349 [2024-10-09 11:18:51.284495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:31.349 qpair failed and we were unable to recover it. 00:38:31.349 [2024-10-09 11:18:51.294257] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:31.349 [2024-10-09 11:18:51.294305] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:31.349 [2024-10-09 11:18:51.294319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:31.349 [2024-10-09 11:18:51.294326] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:31.349 [2024-10-09 11:18:51.294333] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:31.349 [2024-10-09 11:18:51.294346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:31.349 qpair failed and we were unable to recover it. 00:38:31.349 [2024-10-09 11:18:51.304363] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:31.349 [2024-10-09 11:18:51.304413] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:31.349 [2024-10-09 11:18:51.304426] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:31.349 [2024-10-09 11:18:51.304433] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:31.349 [2024-10-09 11:18:51.304440] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:31.349 [2024-10-09 11:18:51.304454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:31.349 qpair failed and we were unable to recover it. 
00:38:31.349 [2024-10-09 11:18:51.314443] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:31.349 [2024-10-09 11:18:51.314509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:31.349 [2024-10-09 11:18:51.314523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:31.349 [2024-10-09 11:18:51.314530] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:31.349 [2024-10-09 11:18:51.314536] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:31.349 [2024-10-09 11:18:51.314550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:31.349 qpair failed and we were unable to recover it. 00:38:31.349 [2024-10-09 11:18:51.324410] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:31.349 [2024-10-09 11:18:51.324457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:31.349 [2024-10-09 11:18:51.324474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:31.349 [2024-10-09 11:18:51.324482] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:31.349 [2024-10-09 11:18:51.324489] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:31.349 [2024-10-09 11:18:51.324502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:31.349 qpair failed and we were unable to recover it. 00:38:31.349 [2024-10-09 11:18:51.334394] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:31.349 [2024-10-09 11:18:51.334488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:31.350 [2024-10-09 11:18:51.334504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:31.350 [2024-10-09 11:18:51.334512] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:31.350 [2024-10-09 11:18:51.334519] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:31.350 [2024-10-09 11:18:51.334532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:31.350 qpair failed and we were unable to recover it. 
00:38:31.350 [2024-10-09 11:18:51.344273] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:31.350 [2024-10-09 11:18:51.344323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:31.350 [2024-10-09 11:18:51.344339] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:31.350 [2024-10-09 11:18:51.344346] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:31.350 [2024-10-09 11:18:51.344353] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:31.350 [2024-10-09 11:18:51.344368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:31.350 qpair failed and we were unable to recover it.
00:38:31.612 [2024-10-09 11:18:51.354458] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:31.612 [2024-10-09 11:18:51.354519] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:31.612 [2024-10-09 11:18:51.354533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:31.612 [2024-10-09 11:18:51.354540] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:31.612 [2024-10-09 11:18:51.354547] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:31.612 [2024-10-09 11:18:51.354561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:31.612 qpair failed and we were unable to recover it.
00:38:31.612 [2024-10-09 11:18:51.364477] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:31.612 [2024-10-09 11:18:51.364557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:31.612 [2024-10-09 11:18:51.364571] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:31.612 [2024-10-09 11:18:51.364578] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:31.612 [2024-10-09 11:18:51.364585] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:31.612 [2024-10-09 11:18:51.364599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:31.612 qpair failed and we were unable to recover it.
00:38:31.612 [2024-10-09 11:18:51.374410] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:31.612 [2024-10-09 11:18:51.374460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:31.612 [2024-10-09 11:18:51.374478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:31.612 [2024-10-09 11:18:51.374485] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:31.612 [2024-10-09 11:18:51.374492] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:31.612 [2024-10-09 11:18:51.374510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:31.612 qpair failed and we were unable to recover it.
00:38:31.612 [2024-10-09 11:18:51.384436] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:31.612 [2024-10-09 11:18:51.384484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:31.612 [2024-10-09 11:18:51.384498] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:31.612 [2024-10-09 11:18:51.384505] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:31.612 [2024-10-09 11:18:51.384512] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:31.612 [2024-10-09 11:18:51.384526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:31.612 qpair failed and we were unable to recover it.
00:38:31.612 [2024-10-09 11:18:51.394351] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:31.612 [2024-10-09 11:18:51.394407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:31.612 [2024-10-09 11:18:51.394420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:31.612 [2024-10-09 11:18:51.394428] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:31.612 [2024-10-09 11:18:51.394434] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:31.612 [2024-10-09 11:18:51.394448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:31.612 qpair failed and we were unable to recover it.
00:38:31.612 [2024-10-09 11:18:51.404318] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:31.612 [2024-10-09 11:18:51.404371] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:31.612 [2024-10-09 11:18:51.404386] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:31.612 [2024-10-09 11:18:51.404394] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:31.612 [2024-10-09 11:18:51.404401] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:31.612 [2024-10-09 11:18:51.404414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:31.612 qpair failed and we were unable to recover it.
00:38:31.612 [2024-10-09 11:18:51.414421] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:31.612 [2024-10-09 11:18:51.414475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:31.612 [2024-10-09 11:18:51.414489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:31.612 [2024-10-09 11:18:51.414497] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:31.612 [2024-10-09 11:18:51.414503] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:31.612 [2024-10-09 11:18:51.414518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:31.612 qpair failed and we were unable to recover it.
00:38:31.612 [2024-10-09 11:18:51.424428] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:31.612 [2024-10-09 11:18:51.424484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:31.612 [2024-10-09 11:18:51.424501] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:31.612 [2024-10-09 11:18:51.424508] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:31.612 [2024-10-09 11:18:51.424515] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:31.612 [2024-10-09 11:18:51.424529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:31.612 qpair failed and we were unable to recover it.
00:38:31.612 [2024-10-09 11:18:51.434485] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:31.612 [2024-10-09 11:18:51.434564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:31.612 [2024-10-09 11:18:51.434577] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:31.612 [2024-10-09 11:18:51.434584] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:31.612 [2024-10-09 11:18:51.434591] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:31.612 [2024-10-09 11:18:51.434605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:31.612 qpair failed and we were unable to recover it.
00:38:31.612 [2024-10-09 11:18:51.444436] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:31.612 [2024-10-09 11:18:51.444502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:31.612 [2024-10-09 11:18:51.444517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:31.612 [2024-10-09 11:18:51.444524] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:31.612 [2024-10-09 11:18:51.444531] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:31.612 [2024-10-09 11:18:51.444545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:31.612 qpair failed and we were unable to recover it.
00:38:31.612 [2024-10-09 11:18:51.454419] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:31.612 [2024-10-09 11:18:51.454471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:31.612 [2024-10-09 11:18:51.454485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:31.612 [2024-10-09 11:18:51.454492] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:31.612 [2024-10-09 11:18:51.454499] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:31.612 [2024-10-09 11:18:51.454512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:31.612 qpair failed and we were unable to recover it.
00:38:31.612 [2024-10-09 11:18:51.464446] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:31.612 [2024-10-09 11:18:51.464499] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:31.612 [2024-10-09 11:18:51.464513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:31.612 [2024-10-09 11:18:51.464521] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:31.612 [2024-10-09 11:18:51.464527] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:31.613 [2024-10-09 11:18:51.464545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:31.613 qpair failed and we were unable to recover it.
00:38:31.613 [2024-10-09 11:18:51.474485] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:31.613 [2024-10-09 11:18:51.474542] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:31.613 [2024-10-09 11:18:51.474555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:31.613 [2024-10-09 11:18:51.474563] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:31.613 [2024-10-09 11:18:51.474570] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:31.613 [2024-10-09 11:18:51.474583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:31.613 qpair failed and we were unable to recover it.
00:38:31.613 [2024-10-09 11:18:51.484459] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:31.613 [2024-10-09 11:18:51.484518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:31.613 [2024-10-09 11:18:51.484532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:31.613 [2024-10-09 11:18:51.484539] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:31.613 [2024-10-09 11:18:51.484545] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:31.613 [2024-10-09 11:18:51.484559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:31.613 qpair failed and we were unable to recover it.
00:38:31.613 [2024-10-09 11:18:51.494446] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:31.613 [2024-10-09 11:18:51.494499] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:31.613 [2024-10-09 11:18:51.494513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:31.613 [2024-10-09 11:18:51.494520] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:31.613 [2024-10-09 11:18:51.494527] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:31.613 [2024-10-09 11:18:51.494540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:31.613 qpair failed and we were unable to recover it.
00:38:31.613 [2024-10-09 11:18:51.504447] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:31.613 [2024-10-09 11:18:51.504548] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:31.613 [2024-10-09 11:18:51.504563] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:31.613 [2024-10-09 11:18:51.504570] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:31.613 [2024-10-09 11:18:51.504578] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:31.613 [2024-10-09 11:18:51.504591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:31.613 qpair failed and we were unable to recover it.
00:38:31.613 [2024-10-09 11:18:51.514513] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:31.613 [2024-10-09 11:18:51.514569] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:31.613 [2024-10-09 11:18:51.514586] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:31.613 [2024-10-09 11:18:51.514593] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:31.613 [2024-10-09 11:18:51.514600] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:31.613 [2024-10-09 11:18:51.514614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:31.613 qpair failed and we were unable to recover it.
00:38:31.613 [2024-10-09 11:18:51.524480] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:31.613 [2024-10-09 11:18:51.524533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:31.613 [2024-10-09 11:18:51.524547] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:31.613 [2024-10-09 11:18:51.524554] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:31.613 [2024-10-09 11:18:51.524561] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:31.613 [2024-10-09 11:18:51.524574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:31.613 qpair failed and we were unable to recover it.
00:38:31.613 [2024-10-09 11:18:51.534428] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:31.613 [2024-10-09 11:18:51.534482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:31.613 [2024-10-09 11:18:51.534496] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:31.613 [2024-10-09 11:18:51.534503] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:31.613 [2024-10-09 11:18:51.534510] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:31.613 [2024-10-09 11:18:51.534523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:31.613 qpair failed and we were unable to recover it.
00:38:31.613 [2024-10-09 11:18:51.544487] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:31.613 [2024-10-09 11:18:51.544536] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:31.613 [2024-10-09 11:18:51.544550] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:31.613 [2024-10-09 11:18:51.544557] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:31.613 [2024-10-09 11:18:51.544563] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:31.613 [2024-10-09 11:18:51.544578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:31.613 qpair failed and we were unable to recover it.
00:38:31.613 [2024-10-09 11:18:51.554544] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:31.613 [2024-10-09 11:18:51.554623] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:31.613 [2024-10-09 11:18:51.554636] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:31.613 [2024-10-09 11:18:51.554644] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:31.613 [2024-10-09 11:18:51.554658] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:31.613 [2024-10-09 11:18:51.554672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:31.613 qpair failed and we were unable to recover it.
00:38:31.613 [2024-10-09 11:18:51.564470] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:31.613 [2024-10-09 11:18:51.564529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:31.613 [2024-10-09 11:18:51.564543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:31.613 [2024-10-09 11:18:51.564551] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:31.613 [2024-10-09 11:18:51.564557] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:31.613 [2024-10-09 11:18:51.564571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:31.613 qpair failed and we were unable to recover it.
00:38:31.613 [2024-10-09 11:18:51.574478] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:31.613 [2024-10-09 11:18:51.574523] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:31.613 [2024-10-09 11:18:51.574537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:31.613 [2024-10-09 11:18:51.574544] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:31.613 [2024-10-09 11:18:51.574551] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:31.613 [2024-10-09 11:18:51.574565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:31.613 qpair failed and we were unable to recover it.
00:38:31.613 [2024-10-09 11:18:51.584446] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:31.613 [2024-10-09 11:18:51.584495] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:31.613 [2024-10-09 11:18:51.584508] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:31.613 [2024-10-09 11:18:51.584515] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:31.613 [2024-10-09 11:18:51.584522] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:31.613 [2024-10-09 11:18:51.584535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:31.613 qpair failed and we were unable to recover it.
00:38:31.613 [2024-10-09 11:18:51.594514] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:31.613 [2024-10-09 11:18:51.594569] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:31.613 [2024-10-09 11:18:51.594582] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:31.613 [2024-10-09 11:18:51.594589] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:31.613 [2024-10-09 11:18:51.594596] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:31.613 [2024-10-09 11:18:51.594610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:31.613 qpair failed and we were unable to recover it.
00:38:31.613 [2024-10-09 11:18:51.604367] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:31.613 [2024-10-09 11:18:51.604423] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:31.613 [2024-10-09 11:18:51.604436] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:31.613 [2024-10-09 11:18:51.604444] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:31.613 [2024-10-09 11:18:51.604450] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:31.613 [2024-10-09 11:18:51.604468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:31.613 qpair failed and we were unable to recover it.
00:38:31.875 [2024-10-09 11:18:51.614492] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:31.875 [2024-10-09 11:18:51.614565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:31.875 [2024-10-09 11:18:51.614579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:31.875 [2024-10-09 11:18:51.614586] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:31.875 [2024-10-09 11:18:51.614593] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:31.875 [2024-10-09 11:18:51.614607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:31.875 qpair failed and we were unable to recover it.
00:38:31.875 [2024-10-09 11:18:51.624400] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:31.875 [2024-10-09 11:18:51.624446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:31.875 [2024-10-09 11:18:51.624460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:31.875 [2024-10-09 11:18:51.624470] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:31.875 [2024-10-09 11:18:51.624478] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:31.875 [2024-10-09 11:18:51.624491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:31.875 qpair failed and we were unable to recover it.
00:38:31.875 [2024-10-09 11:18:51.634570] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:31.875 [2024-10-09 11:18:51.634640] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:31.875 [2024-10-09 11:18:51.634653] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:31.875 [2024-10-09 11:18:51.634660] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:31.875 [2024-10-09 11:18:51.634667] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:31.875 [2024-10-09 11:18:51.634681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:31.875 qpair failed and we were unable to recover it.
00:38:31.875 [2024-10-09 11:18:51.644567] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:31.875 [2024-10-09 11:18:51.644656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:31.875 [2024-10-09 11:18:51.644670] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:31.875 [2024-10-09 11:18:51.644677] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:31.875 [2024-10-09 11:18:51.644687] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:31.875 [2024-10-09 11:18:51.644700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:31.875 qpair failed and we were unable to recover it.
00:38:31.875 [2024-10-09 11:18:51.654515] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:31.875 [2024-10-09 11:18:51.654566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:31.875 [2024-10-09 11:18:51.654580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:31.875 [2024-10-09 11:18:51.654588] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:31.875 [2024-10-09 11:18:51.654594] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:31.875 [2024-10-09 11:18:51.654608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:31.875 qpair failed and we were unable to recover it.
00:38:31.875 [2024-10-09 11:18:51.664519] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:31.875 [2024-10-09 11:18:51.664589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:31.875 [2024-10-09 11:18:51.664605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:31.875 [2024-10-09 11:18:51.664612] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:31.876 [2024-10-09 11:18:51.664621] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:31.876 [2024-10-09 11:18:51.664636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:31.876 qpair failed and we were unable to recover it.
00:38:31.876 [2024-10-09 11:18:51.674538] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:31.876 [2024-10-09 11:18:51.674596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:31.876 [2024-10-09 11:18:51.674610] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:31.876 [2024-10-09 11:18:51.674617] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:31.876 [2024-10-09 11:18:51.674624] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:31.876 [2024-10-09 11:18:51.674638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:31.876 qpair failed and we were unable to recover it.
00:38:31.876 [2024-10-09 11:18:51.684545] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:31.876 [2024-10-09 11:18:51.684597] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:31.876 [2024-10-09 11:18:51.684611] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:31.876 [2024-10-09 11:18:51.684618] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:31.876 [2024-10-09 11:18:51.684625] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:31.876 [2024-10-09 11:18:51.684639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:31.876 qpair failed and we were unable to recover it.
00:38:31.876 [2024-10-09 11:18:51.694534] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:31.876 [2024-10-09 11:18:51.694586] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:31.876 [2024-10-09 11:18:51.694599] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:31.876 [2024-10-09 11:18:51.694606] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:31.876 [2024-10-09 11:18:51.694613] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:31.876 [2024-10-09 11:18:51.694627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:31.876 qpair failed and we were unable to recover it.
00:38:31.876 [2024-10-09 11:18:51.704422] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:31.876 [2024-10-09 11:18:51.704470] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:31.876 [2024-10-09 11:18:51.704483] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:31.876 [2024-10-09 11:18:51.704490] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:31.876 [2024-10-09 11:18:51.704497] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:31.876 [2024-10-09 11:18:51.704511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:31.876 qpair failed and we were unable to recover it.
00:38:31.876 [2024-10-09 11:18:51.714577] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:31.876 [2024-10-09 11:18:51.714633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:31.876 [2024-10-09 11:18:51.714647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:31.876 [2024-10-09 11:18:51.714654] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:31.876 [2024-10-09 11:18:51.714661] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:31.876 [2024-10-09 11:18:51.714675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:31.876 qpair failed and we were unable to recover it.
00:38:31.876 [2024-10-09 11:18:51.724549] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:31.876 [2024-10-09 11:18:51.724600] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:31.876 [2024-10-09 11:18:51.724613] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:31.876 [2024-10-09 11:18:51.724621] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:31.876 [2024-10-09 11:18:51.724627] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:31.876 [2024-10-09 11:18:51.724641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:31.876 qpair failed and we were unable to recover it.
00:38:31.876 [2024-10-09 11:18:51.734550] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:31.876 [2024-10-09 11:18:51.734601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:31.876 [2024-10-09 11:18:51.734614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:31.876 [2024-10-09 11:18:51.734621] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:31.876 [2024-10-09 11:18:51.734632] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:31.876 [2024-10-09 11:18:51.734645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:31.876 qpair failed and we were unable to recover it.
00:38:31.876 [2024-10-09 11:18:51.744560] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:31.876 [2024-10-09 11:18:51.744611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:31.876 [2024-10-09 11:18:51.744624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:31.876 [2024-10-09 11:18:51.744632] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:31.876 [2024-10-09 11:18:51.744638] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:31.876 [2024-10-09 11:18:51.744651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:31.876 qpair failed and we were unable to recover it.
00:38:31.876 [2024-10-09 11:18:51.754612] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:31.876 [2024-10-09 11:18:51.754669] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:31.876 [2024-10-09 11:18:51.754682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:31.876 [2024-10-09 11:18:51.754689] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:31.876 [2024-10-09 11:18:51.754697] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:31.876 [2024-10-09 11:18:51.754710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:31.876 qpair failed and we were unable to recover it.
00:38:31.876 [2024-10-09 11:18:51.764603] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:31.876 [2024-10-09 11:18:51.764656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:31.876 [2024-10-09 11:18:51.764670] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:31.876 [2024-10-09 11:18:51.764678] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:31.876 [2024-10-09 11:18:51.764684] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:31.876 [2024-10-09 11:18:51.764698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:31.876 qpair failed and we were unable to recover it.
00:38:31.876 [2024-10-09 11:18:51.774563] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:31.876 [2024-10-09 11:18:51.774644] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:31.876 [2024-10-09 11:18:51.774657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:31.876 [2024-10-09 11:18:51.774664] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:31.876 [2024-10-09 11:18:51.774671] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:31.876 [2024-10-09 11:18:51.774685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:31.876 qpair failed and we were unable to recover it.
00:38:31.876 [2024-10-09 11:18:51.784442] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:31.876 [2024-10-09 11:18:51.784498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:31.876 [2024-10-09 11:18:51.784512] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:31.876 [2024-10-09 11:18:51.784519] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:31.876 [2024-10-09 11:18:51.784526] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:31.876 [2024-10-09 11:18:51.784539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:31.876 qpair failed and we were unable to recover it.
00:38:31.876 [2024-10-09 11:18:51.794617] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:31.876 [2024-10-09 11:18:51.794671] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:31.876 [2024-10-09 11:18:51.794684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:31.876 [2024-10-09 11:18:51.794691] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:31.876 [2024-10-09 11:18:51.794698] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:31.876 [2024-10-09 11:18:51.794712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:31.876 qpair failed and we were unable to recover it.
00:38:31.876 [2024-10-09 11:18:51.804563] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:31.876 [2024-10-09 11:18:51.804617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:31.876 [2024-10-09 11:18:51.804630] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:31.876 [2024-10-09 11:18:51.804637] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:31.876 [2024-10-09 11:18:51.804644] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:31.876 [2024-10-09 11:18:51.804657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:31.877 qpair failed and we were unable to recover it.
00:38:31.877 [2024-10-09 11:18:51.814576] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:31.877 [2024-10-09 11:18:51.814626] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:31.877 [2024-10-09 11:18:51.814640] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:31.877 [2024-10-09 11:18:51.814647] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:31.877 [2024-10-09 11:18:51.814653] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:31.877 [2024-10-09 11:18:51.814666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:31.877 qpair failed and we were unable to recover it.
00:38:31.877 [2024-10-09 11:18:51.824559] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:31.877 [2024-10-09 11:18:51.824603] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:31.877 [2024-10-09 11:18:51.824616] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:31.877 [2024-10-09 11:18:51.824623] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:31.877 [2024-10-09 11:18:51.824633] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:31.877 [2024-10-09 11:18:51.824646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:31.877 qpair failed and we were unable to recover it.
00:38:31.877 [2024-10-09 11:18:51.834685] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:31.877 [2024-10-09 11:18:51.834792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:31.877 [2024-10-09 11:18:51.834806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:31.877 [2024-10-09 11:18:51.834813] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:31.877 [2024-10-09 11:18:51.834819] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:31.877 [2024-10-09 11:18:51.834832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:31.877 qpair failed and we were unable to recover it.
00:38:31.877 [2024-10-09 11:18:51.844476] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:31.877 [2024-10-09 11:18:51.844533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:31.877 [2024-10-09 11:18:51.844546] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:31.877 [2024-10-09 11:18:51.844554] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:31.877 [2024-10-09 11:18:51.844560] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:31.877 [2024-10-09 11:18:51.844573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:31.877 qpair failed and we were unable to recover it.
00:38:31.877 [2024-10-09 11:18:51.854611] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:31.877 [2024-10-09 11:18:51.854660] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:31.877 [2024-10-09 11:18:51.854673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:31.877 [2024-10-09 11:18:51.854680] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:31.877 [2024-10-09 11:18:51.854686] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:31.877 [2024-10-09 11:18:51.854699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:31.877 qpair failed and we were unable to recover it.
00:38:31.877 [2024-10-09 11:18:51.864469] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:31.877 [2024-10-09 11:18:51.864547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:31.877 [2024-10-09 11:18:51.864563] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:31.877 [2024-10-09 11:18:51.864570] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:31.877 [2024-10-09 11:18:51.864578] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:31.877 [2024-10-09 11:18:51.864592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:31.877 qpair failed and we were unable to recover it.
00:38:31.877 [2024-10-09 11:18:51.874629] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:31.877 [2024-10-09 11:18:51.874709] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:31.877 [2024-10-09 11:18:51.874723] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:31.877 [2024-10-09 11:18:51.874730] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:31.877 [2024-10-09 11:18:51.874737] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:31.877 [2024-10-09 11:18:51.874750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:31.877 qpair failed and we were unable to recover it.
00:38:32.139 [2024-10-09 11:18:51.884621] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:32.139 [2024-10-09 11:18:51.884676] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:32.139 [2024-10-09 11:18:51.884690] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:32.139 [2024-10-09 11:18:51.884697] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:32.139 [2024-10-09 11:18:51.884703] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:32.139 [2024-10-09 11:18:51.884717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:32.139 qpair failed and we were unable to recover it.
00:38:32.139 [2024-10-09 11:18:51.894523] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:32.139 [2024-10-09 11:18:51.894574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:32.139 [2024-10-09 11:18:51.894587] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:32.139 [2024-10-09 11:18:51.894594] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:32.139 [2024-10-09 11:18:51.894601] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:32.139 [2024-10-09 11:18:51.894614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:32.139 qpair failed and we were unable to recover it.
00:38:32.139 [2024-10-09 11:18:51.904629] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:32.139 [2024-10-09 11:18:51.904681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:32.139 [2024-10-09 11:18:51.904694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:32.139 [2024-10-09 11:18:51.904701] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:32.139 [2024-10-09 11:18:51.904708] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:32.139 [2024-10-09 11:18:51.904721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:32.139 qpair failed and we were unable to recover it.
00:38:32.139 [2024-10-09 11:18:51.914679] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.139 [2024-10-09 11:18:51.914735] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.139 [2024-10-09 11:18:51.914749] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.139 [2024-10-09 11:18:51.914760] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.139 [2024-10-09 11:18:51.914767] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:32.139 [2024-10-09 11:18:51.914780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.139 qpair failed and we were unable to recover it. 00:38:32.139 [2024-10-09 11:18:51.924643] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.139 [2024-10-09 11:18:51.924695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.139 [2024-10-09 11:18:51.924708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.139 [2024-10-09 11:18:51.924716] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.139 [2024-10-09 11:18:51.924722] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:32.139 [2024-10-09 11:18:51.924736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.139 qpair failed and we were unable to recover it. 00:38:32.139 [2024-10-09 11:18:51.934672] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.139 [2024-10-09 11:18:51.934722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.139 [2024-10-09 11:18:51.934736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.139 [2024-10-09 11:18:51.934743] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.139 [2024-10-09 11:18:51.934749] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:32.139 [2024-10-09 11:18:51.934762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.139 qpair failed and we were unable to recover it. 
00:38:32.139 [2024-10-09 11:18:51.944599] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.139 [2024-10-09 11:18:51.944649] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.139 [2024-10-09 11:18:51.944662] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.139 [2024-10-09 11:18:51.944670] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.139 [2024-10-09 11:18:51.944676] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:32.139 [2024-10-09 11:18:51.944690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.139 qpair failed and we were unable to recover it. 00:38:32.139 [2024-10-09 11:18:51.954646] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.139 [2024-10-09 11:18:51.954701] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.139 [2024-10-09 11:18:51.954714] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.139 [2024-10-09 11:18:51.954721] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.139 [2024-10-09 11:18:51.954728] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:32.139 [2024-10-09 11:18:51.954741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.139 qpair failed and we were unable to recover it. 00:38:32.139 [2024-10-09 11:18:51.964526] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.139 [2024-10-09 11:18:51.964581] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.139 [2024-10-09 11:18:51.964595] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.139 [2024-10-09 11:18:51.964602] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.139 [2024-10-09 11:18:51.964609] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:32.139 [2024-10-09 11:18:51.964623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.139 qpair failed and we were unable to recover it. 
00:38:32.139 [2024-10-09 11:18:51.974642] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.139 [2024-10-09 11:18:51.974693] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.139 [2024-10-09 11:18:51.974706] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.139 [2024-10-09 11:18:51.974713] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.139 [2024-10-09 11:18:51.974720] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:32.139 [2024-10-09 11:18:51.974733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.139 qpair failed and we were unable to recover it. 00:38:32.139 [2024-10-09 11:18:51.984653] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.139 [2024-10-09 11:18:51.984703] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.139 [2024-10-09 11:18:51.984716] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.139 [2024-10-09 11:18:51.984723] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.139 [2024-10-09 11:18:51.984730] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:32.139 [2024-10-09 11:18:51.984743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.139 qpair failed and we were unable to recover it. 00:38:32.139 [2024-10-09 11:18:51.994701] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.139 [2024-10-09 11:18:51.994800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.139 [2024-10-09 11:18:51.994814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.139 [2024-10-09 11:18:51.994821] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.139 [2024-10-09 11:18:51.994828] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:32.139 [2024-10-09 11:18:51.994841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.139 qpair failed and we were unable to recover it. 
00:38:32.139 [2024-10-09 11:18:52.004696] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:32.139 [2024-10-09 11:18:52.004744] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:32.139 [2024-10-09 11:18:52.004758] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:32.139 [2024-10-09 11:18:52.004768] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:32.139 [2024-10-09 11:18:52.004775] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:32.139 [2024-10-09 11:18:52.004789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:32.139 qpair failed and we were unable to recover it.
00:38:32.139 [2024-10-09 11:18:52.014626] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:32.139 [2024-10-09 11:18:52.014676] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:32.139 [2024-10-09 11:18:52.014690] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:32.139 [2024-10-09 11:18:52.014698] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:32.139 [2024-10-09 11:18:52.014705] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:32.140 [2024-10-09 11:18:52.014718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:32.140 qpair failed and we were unable to recover it.
00:38:32.140 [2024-10-09 11:18:52.024724] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:32.140 [2024-10-09 11:18:52.024779] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:32.140 [2024-10-09 11:18:52.024792] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:32.140 [2024-10-09 11:18:52.024799] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:32.140 [2024-10-09 11:18:52.024806] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:32.140 [2024-10-09 11:18:52.024819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:32.140 qpair failed and we were unable to recover it.
00:38:32.140 [2024-10-09 11:18:52.034716] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:32.140 [2024-10-09 11:18:52.034769] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:32.140 [2024-10-09 11:18:52.034782] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:32.140 [2024-10-09 11:18:52.034789] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:32.140 [2024-10-09 11:18:52.034796] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:32.140 [2024-10-09 11:18:52.034809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:32.140 qpair failed and we were unable to recover it.
00:38:32.140 [2024-10-09 11:18:52.044677] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:32.140 [2024-10-09 11:18:52.044731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:32.140 [2024-10-09 11:18:52.044744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:32.140 [2024-10-09 11:18:52.044751] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:32.140 [2024-10-09 11:18:52.044758] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:32.140 [2024-10-09 11:18:52.044771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:32.140 qpair failed and we were unable to recover it.
00:38:32.140 [2024-10-09 11:18:52.054667] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:32.140 [2024-10-09 11:18:52.054757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:32.140 [2024-10-09 11:18:52.054770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:32.140 [2024-10-09 11:18:52.054778] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:32.140 [2024-10-09 11:18:52.054785] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:32.140 [2024-10-09 11:18:52.054798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:32.140 qpair failed and we were unable to recover it.
00:38:32.140 [2024-10-09 11:18:52.064696] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:32.140 [2024-10-09 11:18:52.064775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:32.140 [2024-10-09 11:18:52.064789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:32.140 [2024-10-09 11:18:52.064796] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:32.140 [2024-10-09 11:18:52.064804] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:32.140 [2024-10-09 11:18:52.064818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:32.140 qpair failed and we were unable to recover it.
00:38:32.140 [2024-10-09 11:18:52.074744] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:32.140 [2024-10-09 11:18:52.074801] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:32.140 [2024-10-09 11:18:52.074815] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:32.140 [2024-10-09 11:18:52.074823] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:32.140 [2024-10-09 11:18:52.074830] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:32.140 [2024-10-09 11:18:52.074843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:32.140 qpair failed and we were unable to recover it.
00:38:32.140 [2024-10-09 11:18:52.084725] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:32.140 [2024-10-09 11:18:52.084807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:32.140 [2024-10-09 11:18:52.084820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:32.140 [2024-10-09 11:18:52.084828] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:32.140 [2024-10-09 11:18:52.084835] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:32.140 [2024-10-09 11:18:52.084849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:32.140 qpair failed and we were unable to recover it.
00:38:32.140 [2024-10-09 11:18:52.094573] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:32.140 [2024-10-09 11:18:52.094619] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:32.140 [2024-10-09 11:18:52.094633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:32.140 [2024-10-09 11:18:52.094644] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:32.140 [2024-10-09 11:18:52.094651] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:32.140 [2024-10-09 11:18:52.094664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:32.140 qpair failed and we were unable to recover it.
00:38:32.140 [2024-10-09 11:18:52.104696] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:32.140 [2024-10-09 11:18:52.104747] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:32.140 [2024-10-09 11:18:52.104760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:32.140 [2024-10-09 11:18:52.104767] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:32.140 [2024-10-09 11:18:52.104774] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:32.140 [2024-10-09 11:18:52.104787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:32.140 qpair failed and we were unable to recover it.
00:38:32.140 [2024-10-09 11:18:52.114752] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:32.140 [2024-10-09 11:18:52.114831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:32.140 [2024-10-09 11:18:52.114845] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:32.140 [2024-10-09 11:18:52.114852] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:32.140 [2024-10-09 11:18:52.114859] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:32.140 [2024-10-09 11:18:52.114873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:32.140 qpair failed and we were unable to recover it.
00:38:32.140 [2024-10-09 11:18:52.124591] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:32.140 [2024-10-09 11:18:52.124644] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:32.140 [2024-10-09 11:18:52.124657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:32.140 [2024-10-09 11:18:52.124664] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:32.140 [2024-10-09 11:18:52.124671] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:32.140 [2024-10-09 11:18:52.124684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:32.140 qpair failed and we were unable to recover it.
00:38:32.140 [2024-10-09 11:18:52.134691] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:32.140 [2024-10-09 11:18:52.134740] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:32.140 [2024-10-09 11:18:52.134753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:32.140 [2024-10-09 11:18:52.134760] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:32.140 [2024-10-09 11:18:52.134767] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:32.140 [2024-10-09 11:18:52.134780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:32.140 qpair failed and we were unable to recover it.
00:38:32.402 [2024-10-09 11:18:52.144719] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:32.402 [2024-10-09 11:18:52.144764] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:32.402 [2024-10-09 11:18:52.144777] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:32.402 [2024-10-09 11:18:52.144784] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:32.402 [2024-10-09 11:18:52.144791] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:32.402 [2024-10-09 11:18:52.144804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:32.402 qpair failed and we were unable to recover it.
00:38:32.402 [2024-10-09 11:18:52.154748] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:32.402 [2024-10-09 11:18:52.154804] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:32.402 [2024-10-09 11:18:52.154817] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:32.402 [2024-10-09 11:18:52.154824] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:32.402 [2024-10-09 11:18:52.154831] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:32.402 [2024-10-09 11:18:52.154845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:32.402 qpair failed and we were unable to recover it.
00:38:32.402 [2024-10-09 11:18:52.164747] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:32.402 [2024-10-09 11:18:52.164820] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:32.403 [2024-10-09 11:18:52.164837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:32.403 [2024-10-09 11:18:52.164845] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:32.403 [2024-10-09 11:18:52.164855] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:32.403 [2024-10-09 11:18:52.164870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:32.403 qpair failed and we were unable to recover it.
00:38:32.403 [2024-10-09 11:18:52.174706] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:32.403 [2024-10-09 11:18:52.174758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:32.403 [2024-10-09 11:18:52.174772] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:32.403 [2024-10-09 11:18:52.174779] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:32.403 [2024-10-09 11:18:52.174786] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:32.403 [2024-10-09 11:18:52.174800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:32.403 qpair failed and we were unable to recover it.
00:38:32.403 [2024-10-09 11:18:52.184753] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:32.403 [2024-10-09 11:18:52.184831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:32.403 [2024-10-09 11:18:52.184844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:32.403 [2024-10-09 11:18:52.184856] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:32.403 [2024-10-09 11:18:52.184863] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:32.403 [2024-10-09 11:18:52.184877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:32.403 qpair failed and we were unable to recover it.
00:38:32.403 [2024-10-09 11:18:52.194733] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:32.403 [2024-10-09 11:18:52.194791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:32.403 [2024-10-09 11:18:52.194804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:32.403 [2024-10-09 11:18:52.194812] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:32.403 [2024-10-09 11:18:52.194818] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:32.403 [2024-10-09 11:18:52.194832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:32.403 qpair failed and we were unable to recover it.
00:38:32.403 [2024-10-09 11:18:52.204745] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:32.403 [2024-10-09 11:18:52.204797] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:32.403 [2024-10-09 11:18:52.204810] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:32.403 [2024-10-09 11:18:52.204817] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:32.403 [2024-10-09 11:18:52.204824] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:32.403 [2024-10-09 11:18:52.204837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:32.403 qpair failed and we were unable to recover it.
00:38:32.403 [2024-10-09 11:18:52.214733] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:32.403 [2024-10-09 11:18:52.214787] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:32.403 [2024-10-09 11:18:52.214802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:32.403 [2024-10-09 11:18:52.214809] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:32.403 [2024-10-09 11:18:52.214815] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:32.403 [2024-10-09 11:18:52.214829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:32.403 qpair failed and we were unable to recover it.
00:38:32.403 [2024-10-09 11:18:52.224738] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:32.403 [2024-10-09 11:18:52.224786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:32.403 [2024-10-09 11:18:52.224799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:32.403 [2024-10-09 11:18:52.224806] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:32.403 [2024-10-09 11:18:52.224813] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:32.403 [2024-10-09 11:18:52.224827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:32.403 qpair failed and we were unable to recover it.
00:38:32.403 [2024-10-09 11:18:52.234797] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:32.403 [2024-10-09 11:18:52.234852] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:32.403 [2024-10-09 11:18:52.234865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:32.403 [2024-10-09 11:18:52.234872] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:32.403 [2024-10-09 11:18:52.234879] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:32.403 [2024-10-09 11:18:52.234892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:32.403 qpair failed and we were unable to recover it.
00:38:32.403 [2024-10-09 11:18:52.244766] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:32.403 [2024-10-09 11:18:52.244817] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:32.403 [2024-10-09 11:18:52.244831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:32.403 [2024-10-09 11:18:52.244838] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:32.403 [2024-10-09 11:18:52.244844] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:32.403 [2024-10-09 11:18:52.244858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:32.403 qpair failed and we were unable to recover it.
00:38:32.403 [2024-10-09 11:18:52.254765] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:32.403 [2024-10-09 11:18:52.254822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:32.403 [2024-10-09 11:18:52.254835] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:32.403 [2024-10-09 11:18:52.254843] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:32.403 [2024-10-09 11:18:52.254850] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:32.403 [2024-10-09 11:18:52.254863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:32.403 qpair failed and we were unable to recover it.
00:38:32.403 [2024-10-09 11:18:52.264770] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:32.403 [2024-10-09 11:18:52.264814] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:32.403 [2024-10-09 11:18:52.264829] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:32.403 [2024-10-09 11:18:52.264836] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:32.403 [2024-10-09 11:18:52.264843] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:32.403 [2024-10-09 11:18:52.264857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:32.403 qpair failed and we were unable to recover it.
00:38:32.403 [2024-10-09 11:18:52.274714] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:32.403 [2024-10-09 11:18:52.274773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:32.403 [2024-10-09 11:18:52.274791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:32.403 [2024-10-09 11:18:52.274798] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:32.403 [2024-10-09 11:18:52.274805] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:32.403 [2024-10-09 11:18:52.274818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:32.403 qpair failed and we were unable to recover it.
00:38:32.403 [2024-10-09 11:18:52.284790] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:32.403 [2024-10-09 11:18:52.284841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:32.403 [2024-10-09 11:18:52.284855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:32.403 [2024-10-09 11:18:52.284863] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:32.403 [2024-10-09 11:18:52.284870] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:32.403 [2024-10-09 11:18:52.284883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:32.403 qpair failed and we were unable to recover it.
00:38:32.403 [2024-10-09 11:18:52.294651] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:32.403 [2024-10-09 11:18:52.294705] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:32.403 [2024-10-09 11:18:52.294718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:32.403 [2024-10-09 11:18:52.294725] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:32.403 [2024-10-09 11:18:52.294732] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:32.403 [2024-10-09 11:18:52.294745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:32.403 qpair failed and we were unable to recover it.
00:38:32.403 [2024-10-09 11:18:52.304765] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:32.403 [2024-10-09 11:18:52.304841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:32.403 [2024-10-09 11:18:52.304854] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:32.403 [2024-10-09 11:18:52.304861] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:32.404 [2024-10-09 11:18:52.304868] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:32.404 [2024-10-09 11:18:52.304882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:32.404 qpair failed and we were unable to recover it.
00:38:32.404 [2024-10-09 11:18:52.314805] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:32.404 [2024-10-09 11:18:52.314861] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:32.404 [2024-10-09 11:18:52.314875] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:32.404 [2024-10-09 11:18:52.314882] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:32.404 [2024-10-09 11:18:52.314889] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:32.404 [2024-10-09 11:18:52.314902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:32.404 qpair failed and we were unable to recover it.
00:38:32.404 [2024-10-09 11:18:52.324790] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:32.404 [2024-10-09 11:18:52.324839] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:32.404 [2024-10-09 11:18:52.324852] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:32.404 [2024-10-09 11:18:52.324860] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:32.404 [2024-10-09 11:18:52.324866] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:32.404 [2024-10-09 11:18:52.324880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:32.404 qpair failed and we were unable to recover it.
00:38:32.404 [2024-10-09 11:18:52.334778] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:32.404 [2024-10-09 11:18:52.334829] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:32.404 [2024-10-09 11:18:52.334842] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:32.404 [2024-10-09 11:18:52.334850] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:32.404 [2024-10-09 11:18:52.334857] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:32.404 [2024-10-09 11:18:52.334870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:32.404 qpair failed and we were unable to recover it.
00:38:32.404 [2024-10-09 11:18:52.344791] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:32.404 [2024-10-09 11:18:52.344850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:32.404 [2024-10-09 11:18:52.344865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:32.404 [2024-10-09 11:18:52.344873] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:32.404 [2024-10-09 11:18:52.344879] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:32.404 [2024-10-09 11:18:52.344893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:32.404 qpair failed and we were unable to recover it.
00:38:32.404 [2024-10-09 11:18:52.354812] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:32.404 [2024-10-09 11:18:52.354883] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:32.404 [2024-10-09 11:18:52.354897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:32.404 [2024-10-09 11:18:52.354904] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:32.404 [2024-10-09 11:18:52.354911] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:32.404 [2024-10-09 11:18:52.354924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:32.404 qpair failed and we were unable to recover it.
00:38:32.404 [2024-10-09 11:18:52.364814] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:32.404 [2024-10-09 11:18:52.364861] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:32.404 [2024-10-09 11:18:52.364882] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:32.404 [2024-10-09 11:18:52.364889] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:32.404 [2024-10-09 11:18:52.364896] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:32.404 [2024-10-09 11:18:52.364909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:32.404 qpair failed and we were unable to recover it.
00:38:32.404 [2024-10-09 11:18:52.374802] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:32.404 [2024-10-09 11:18:52.374863] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:32.404 [2024-10-09 11:18:52.374876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:32.404 [2024-10-09 11:18:52.374884] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:32.404 [2024-10-09 11:18:52.374890] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:32.404 [2024-10-09 11:18:52.374904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:32.404 qpair failed and we were unable to recover it.
00:38:32.404 [2024-10-09 11:18:52.384798] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:32.404 [2024-10-09 11:18:52.384886] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:32.404 [2024-10-09 11:18:52.384899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:32.404 [2024-10-09 11:18:52.384906] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:32.404 [2024-10-09 11:18:52.384913] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:32.404 [2024-10-09 11:18:52.384926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:32.404 qpair failed and we were unable to recover it.
00:38:32.404 [2024-10-09 11:18:52.394872] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:32.404 [2024-10-09 11:18:52.394932] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:32.404 [2024-10-09 11:18:52.394946] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:32.404 [2024-10-09 11:18:52.394953] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:32.404 [2024-10-09 11:18:52.394960] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:32.404 [2024-10-09 11:18:52.394973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:32.404 qpair failed and we were unable to recover it.
00:38:32.666 [2024-10-09 11:18:52.404826] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:32.666 [2024-10-09 11:18:52.404883] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:32.666 [2024-10-09 11:18:52.404896] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:32.666 [2024-10-09 11:18:52.404903] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:32.666 [2024-10-09 11:18:52.404910] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:32.666 [2024-10-09 11:18:52.404927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:32.666 qpair failed and we were unable to recover it.
00:38:32.666 [2024-10-09 11:18:52.414809] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:32.666 [2024-10-09 11:18:52.414853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:32.666 [2024-10-09 11:18:52.414867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:32.666 [2024-10-09 11:18:52.414874] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:32.666 [2024-10-09 11:18:52.414881] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:32.666 [2024-10-09 11:18:52.414894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:32.666 qpair failed and we were unable to recover it.
00:38:32.666 [2024-10-09 11:18:52.424789] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:32.666 [2024-10-09 11:18:52.424839] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:32.666 [2024-10-09 11:18:52.424852] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:32.666 [2024-10-09 11:18:52.424859] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:32.666 [2024-10-09 11:18:52.424866] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:32.666 [2024-10-09 11:18:52.424879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:32.666 qpair failed and we were unable to recover it.
00:38:32.666 [2024-10-09 11:18:52.434855] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:32.666 [2024-10-09 11:18:52.434909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:32.666 [2024-10-09 11:18:52.434922] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:32.666 [2024-10-09 11:18:52.434929] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:32.666 [2024-10-09 11:18:52.434936] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:32.666 [2024-10-09 11:18:52.434949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:32.666 qpair failed and we were unable to recover it.
00:38:32.666 [2024-10-09 11:18:52.444812] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:32.666 [2024-10-09 11:18:52.444906] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:32.666 [2024-10-09 11:18:52.444920] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:32.666 [2024-10-09 11:18:52.444927] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:32.666 [2024-10-09 11:18:52.444934] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:32.666 [2024-10-09 11:18:52.444948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:32.666 qpair failed and we were unable to recover it.
00:38:32.666 [2024-10-09 11:18:52.454844] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:32.666 [2024-10-09 11:18:52.454933] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:32.666 [2024-10-09 11:18:52.454950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:32.666 [2024-10-09 11:18:52.454958] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:32.666 [2024-10-09 11:18:52.454964] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:32.666 [2024-10-09 11:18:52.454977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:32.666 qpair failed and we were unable to recover it.
00:38:32.666 [2024-10-09 11:18:52.464844] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:32.666 [2024-10-09 11:18:52.464898] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:32.666 [2024-10-09 11:18:52.464912] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:32.666 [2024-10-09 11:18:52.464919] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:32.666 [2024-10-09 11:18:52.464926] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:32.666 [2024-10-09 11:18:52.464939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:32.666 qpair failed and we were unable to recover it.
00:38:32.666 [2024-10-09 11:18:52.474894] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:32.666 [2024-10-09 11:18:52.474978] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:32.666 [2024-10-09 11:18:52.474991] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:32.666 [2024-10-09 11:18:52.474998] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:32.666 [2024-10-09 11:18:52.475006] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:32.666 [2024-10-09 11:18:52.475019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:32.666 qpair failed and we were unable to recover it.
00:38:32.666 [2024-10-09 11:18:52.484855] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:32.666 [2024-10-09 11:18:52.484948] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:32.666 [2024-10-09 11:18:52.484963] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:32.666 [2024-10-09 11:18:52.484971] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:32.666 [2024-10-09 11:18:52.484978] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:32.666 [2024-10-09 11:18:52.484992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:32.666 qpair failed and we were unable to recover it.
00:38:32.666 [2024-10-09 11:18:52.494708] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:32.666 [2024-10-09 11:18:52.494755] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:32.666 [2024-10-09 11:18:52.494769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:32.666 [2024-10-09 11:18:52.494776] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:32.666 [2024-10-09 11:18:52.494782] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:32.666 [2024-10-09 11:18:52.494799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:32.666 qpair failed and we were unable to recover it.
00:38:32.666 [2024-10-09 11:18:52.504848] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:32.666 [2024-10-09 11:18:52.504897] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:32.666 [2024-10-09 11:18:52.504911] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:32.666 [2024-10-09 11:18:52.504918] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:32.666 [2024-10-09 11:18:52.504924] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:32.666 [2024-10-09 11:18:52.504937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:32.666 qpair failed and we were unable to recover it.
00:38:32.667 [2024-10-09 11:18:52.514912] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.667 [2024-10-09 11:18:52.514969] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.667 [2024-10-09 11:18:52.514983] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.667 [2024-10-09 11:18:52.514990] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.667 [2024-10-09 11:18:52.514998] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:32.667 [2024-10-09 11:18:52.515011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.667 qpair failed and we were unable to recover it. 00:38:32.667 [2024-10-09 11:18:52.524766] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.667 [2024-10-09 11:18:52.524821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.667 [2024-10-09 11:18:52.524835] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.667 [2024-10-09 11:18:52.524842] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.667 [2024-10-09 11:18:52.524849] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:32.667 [2024-10-09 11:18:52.524863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.667 qpair failed and we were unable to recover it. 00:38:32.667 [2024-10-09 11:18:52.534862] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.667 [2024-10-09 11:18:52.534915] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.667 [2024-10-09 11:18:52.534930] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.667 [2024-10-09 11:18:52.534937] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.667 [2024-10-09 11:18:52.534945] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:32.667 [2024-10-09 11:18:52.534962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.667 qpair failed and we were unable to recover it. 
00:38:32.667 [2024-10-09 11:18:52.544847] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.667 [2024-10-09 11:18:52.544894] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.667 [2024-10-09 11:18:52.544911] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.667 [2024-10-09 11:18:52.544918] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.667 [2024-10-09 11:18:52.544925] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:32.667 [2024-10-09 11:18:52.544938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.667 qpair failed and we were unable to recover it. 00:38:32.667 [2024-10-09 11:18:52.554950] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.667 [2024-10-09 11:18:52.555007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.667 [2024-10-09 11:18:52.555020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.667 [2024-10-09 11:18:52.555027] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.667 [2024-10-09 11:18:52.555035] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:32.667 [2024-10-09 11:18:52.555048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.667 qpair failed and we were unable to recover it. 00:38:32.667 [2024-10-09 11:18:52.564881] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.667 [2024-10-09 11:18:52.564934] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.667 [2024-10-09 11:18:52.564948] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.667 [2024-10-09 11:18:52.564955] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.667 [2024-10-09 11:18:52.564961] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:32.667 [2024-10-09 11:18:52.564975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.667 qpair failed and we were unable to recover it. 
00:38:32.667 [2024-10-09 11:18:52.574867] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.667 [2024-10-09 11:18:52.574916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.667 [2024-10-09 11:18:52.574929] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.667 [2024-10-09 11:18:52.574937] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.667 [2024-10-09 11:18:52.574943] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:32.667 [2024-10-09 11:18:52.574957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.667 qpair failed and we were unable to recover it. 00:38:32.667 [2024-10-09 11:18:52.584931] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.667 [2024-10-09 11:18:52.584981] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.667 [2024-10-09 11:18:52.584994] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.667 [2024-10-09 11:18:52.585002] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.667 [2024-10-09 11:18:52.585008] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:32.667 [2024-10-09 11:18:52.585025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.667 qpair failed and we were unable to recover it. 00:38:32.667 [2024-10-09 11:18:52.594805] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.667 [2024-10-09 11:18:52.594869] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.667 [2024-10-09 11:18:52.594883] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.667 [2024-10-09 11:18:52.594890] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.667 [2024-10-09 11:18:52.594897] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:32.667 [2024-10-09 11:18:52.594909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.667 qpair failed and we were unable to recover it. 
00:38:32.667 [2024-10-09 11:18:52.604906] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.667 [2024-10-09 11:18:52.605003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.667 [2024-10-09 11:18:52.605017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.667 [2024-10-09 11:18:52.605024] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.667 [2024-10-09 11:18:52.605031] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:32.667 [2024-10-09 11:18:52.605044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.667 qpair failed and we were unable to recover it. 00:38:32.667 [2024-10-09 11:18:52.614859] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.667 [2024-10-09 11:18:52.614908] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.667 [2024-10-09 11:18:52.614922] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.667 [2024-10-09 11:18:52.614930] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.667 [2024-10-09 11:18:52.614936] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:32.667 [2024-10-09 11:18:52.614949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.667 qpair failed and we were unable to recover it. 00:38:32.667 [2024-10-09 11:18:52.624883] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.667 [2024-10-09 11:18:52.624933] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.667 [2024-10-09 11:18:52.624947] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.667 [2024-10-09 11:18:52.624954] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.667 [2024-10-09 11:18:52.624961] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:32.667 [2024-10-09 11:18:52.624974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.667 qpair failed and we were unable to recover it. 
00:38:32.667 [2024-10-09 11:18:52.634944] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.667 [2024-10-09 11:18:52.634998] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.667 [2024-10-09 11:18:52.635014] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.667 [2024-10-09 11:18:52.635022] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.667 [2024-10-09 11:18:52.635028] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:32.667 [2024-10-09 11:18:52.635041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.667 qpair failed and we were unable to recover it. 00:38:32.667 [2024-10-09 11:18:52.644914] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.667 [2024-10-09 11:18:52.644964] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.668 [2024-10-09 11:18:52.644978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.668 [2024-10-09 11:18:52.644985] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.668 [2024-10-09 11:18:52.644992] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:32.668 [2024-10-09 11:18:52.645005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.668 qpair failed and we were unable to recover it. 00:38:32.668 [2024-10-09 11:18:52.654911] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.668 [2024-10-09 11:18:52.654956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.668 [2024-10-09 11:18:52.654970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.668 [2024-10-09 11:18:52.654977] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.668 [2024-10-09 11:18:52.654983] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:32.668 [2024-10-09 11:18:52.654996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.668 qpair failed and we were unable to recover it. 
00:38:32.668 [2024-10-09 11:18:52.664861] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.668 [2024-10-09 11:18:52.664911] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.668 [2024-10-09 11:18:52.664925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.668 [2024-10-09 11:18:52.664933] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.668 [2024-10-09 11:18:52.664940] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:32.668 [2024-10-09 11:18:52.664953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.668 qpair failed and we were unable to recover it. 00:38:32.929 [2024-10-09 11:18:52.674960] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.929 [2024-10-09 11:18:52.675028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.929 [2024-10-09 11:18:52.675042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.929 [2024-10-09 11:18:52.675049] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.929 [2024-10-09 11:18:52.675056] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:32.929 [2024-10-09 11:18:52.675073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.929 qpair failed and we were unable to recover it. 00:38:32.929 [2024-10-09 11:18:52.684931] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.929 [2024-10-09 11:18:52.684981] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.929 [2024-10-09 11:18:52.684995] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.929 [2024-10-09 11:18:52.685002] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.929 [2024-10-09 11:18:52.685010] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:32.929 [2024-10-09 11:18:52.685023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.929 qpair failed and we were unable to recover it. 
00:38:32.929 [2024-10-09 11:18:52.694923] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.929 [2024-10-09 11:18:52.694970] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.929 [2024-10-09 11:18:52.694984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.929 [2024-10-09 11:18:52.694991] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.929 [2024-10-09 11:18:52.694998] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:32.929 [2024-10-09 11:18:52.695012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.929 qpair failed and we were unable to recover it. 00:38:32.929 [2024-10-09 11:18:52.704923] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.929 [2024-10-09 11:18:52.704996] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.929 [2024-10-09 11:18:52.705009] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.929 [2024-10-09 11:18:52.705017] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.929 [2024-10-09 11:18:52.705023] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:32.929 [2024-10-09 11:18:52.705036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.929 qpair failed and we were unable to recover it. 00:38:32.929 [2024-10-09 11:18:52.714965] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.929 [2024-10-09 11:18:52.715018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.929 [2024-10-09 11:18:52.715032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.929 [2024-10-09 11:18:52.715039] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.929 [2024-10-09 11:18:52.715046] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:32.929 [2024-10-09 11:18:52.715060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.929 qpair failed and we were unable to recover it. 
00:38:32.929 [2024-10-09 11:18:52.724948] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.929 [2024-10-09 11:18:52.725039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.929 [2024-10-09 11:18:52.725056] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.929 [2024-10-09 11:18:52.725064] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.929 [2024-10-09 11:18:52.725071] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:32.929 [2024-10-09 11:18:52.725084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.929 qpair failed and we were unable to recover it. 00:38:32.929 [2024-10-09 11:18:52.734925] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.930 [2024-10-09 11:18:52.734970] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.930 [2024-10-09 11:18:52.734984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.930 [2024-10-09 11:18:52.734991] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.930 [2024-10-09 11:18:52.734998] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:32.930 [2024-10-09 11:18:52.735011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.930 qpair failed and we were unable to recover it. 00:38:32.930 [2024-10-09 11:18:52.744908] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.930 [2024-10-09 11:18:52.744958] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.930 [2024-10-09 11:18:52.744971] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.930 [2024-10-09 11:18:52.744979] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.930 [2024-10-09 11:18:52.744985] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:32.930 [2024-10-09 11:18:52.744999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.930 qpair failed and we were unable to recover it. 
00:38:32.930 [2024-10-09 11:18:52.754988] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.930 [2024-10-09 11:18:52.755046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.930 [2024-10-09 11:18:52.755059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.930 [2024-10-09 11:18:52.755066] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.930 [2024-10-09 11:18:52.755073] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:32.930 [2024-10-09 11:18:52.755087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.930 qpair failed and we were unable to recover it. 00:38:32.930 [2024-10-09 11:18:52.764960] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.930 [2024-10-09 11:18:52.765020] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.930 [2024-10-09 11:18:52.765034] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.930 [2024-10-09 11:18:52.765041] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.930 [2024-10-09 11:18:52.765051] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:32.930 [2024-10-09 11:18:52.765065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.930 qpair failed and we were unable to recover it. 00:38:32.930 [2024-10-09 11:18:52.774951] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.930 [2024-10-09 11:18:52.774999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.930 [2024-10-09 11:18:52.775013] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.930 [2024-10-09 11:18:52.775020] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.930 [2024-10-09 11:18:52.775027] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:32.930 [2024-10-09 11:18:52.775040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.930 qpair failed and we were unable to recover it. 
00:38:32.930 [2024-10-09 11:18:52.784954] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.930 [2024-10-09 11:18:52.785009] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.930 [2024-10-09 11:18:52.785022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.930 [2024-10-09 11:18:52.785029] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.930 [2024-10-09 11:18:52.785036] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:32.930 [2024-10-09 11:18:52.785049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.930 qpair failed and we were unable to recover it. 00:38:32.930 [2024-10-09 11:18:52.795013] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.930 [2024-10-09 11:18:52.795072] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.930 [2024-10-09 11:18:52.795086] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.930 [2024-10-09 11:18:52.795093] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.930 [2024-10-09 11:18:52.795099] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:32.930 [2024-10-09 11:18:52.795112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.930 qpair failed and we were unable to recover it. 00:38:32.930 [2024-10-09 11:18:52.804970] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.930 [2024-10-09 11:18:52.805028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.930 [2024-10-09 11:18:52.805042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.930 [2024-10-09 11:18:52.805049] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.930 [2024-10-09 11:18:52.805056] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:32.930 [2024-10-09 11:18:52.805069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.930 qpair failed and we were unable to recover it. 
00:38:32.930 [2024-10-09 11:18:52.814868] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.930 [2024-10-09 11:18:52.814919] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.930 [2024-10-09 11:18:52.814934] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.930 [2024-10-09 11:18:52.814941] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.930 [2024-10-09 11:18:52.814948] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:32.930 [2024-10-09 11:18:52.814962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.930 qpair failed and we were unable to recover it. 00:38:32.930 [2024-10-09 11:18:52.824937] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.930 [2024-10-09 11:18:52.825034] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.930 [2024-10-09 11:18:52.825047] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.930 [2024-10-09 11:18:52.825055] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.930 [2024-10-09 11:18:52.825062] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:32.930 [2024-10-09 11:18:52.825075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.930 qpair failed and we were unable to recover it. 00:38:32.930 [2024-10-09 11:18:52.834948] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.930 [2024-10-09 11:18:52.835006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.930 [2024-10-09 11:18:52.835019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.930 [2024-10-09 11:18:52.835027] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.930 [2024-10-09 11:18:52.835033] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:32.930 [2024-10-09 11:18:52.835047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.930 qpair failed and we were unable to recover it. 
00:38:32.930 [2024-10-09 11:18:52.844996] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.930 [2024-10-09 11:18:52.845052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.930 [2024-10-09 11:18:52.845066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.930 [2024-10-09 11:18:52.845073] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.930 [2024-10-09 11:18:52.845079] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:32.930 [2024-10-09 11:18:52.845093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.930 qpair failed and we were unable to recover it. 00:38:32.930 [2024-10-09 11:18:52.855017] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.930 [2024-10-09 11:18:52.855068] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.930 [2024-10-09 11:18:52.855081] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.930 [2024-10-09 11:18:52.855089] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.930 [2024-10-09 11:18:52.855099] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:32.930 [2024-10-09 11:18:52.855113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.930 qpair failed and we were unable to recover it. 00:38:32.930 [2024-10-09 11:18:52.864907] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.930 [2024-10-09 11:18:52.864953] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.930 [2024-10-09 11:18:52.864967] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.930 [2024-10-09 11:18:52.864974] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.930 [2024-10-09 11:18:52.864981] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:32.930 [2024-10-09 11:18:52.864995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.930 qpair failed and we were unable to recover it. 
00:38:32.930 [2024-10-09 11:18:52.875028] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.930 [2024-10-09 11:18:52.875085] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.930 [2024-10-09 11:18:52.875099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.931 [2024-10-09 11:18:52.875106] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.931 [2024-10-09 11:18:52.875113] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:32.931 [2024-10-09 11:18:52.875126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.931 qpair failed and we were unable to recover it. 00:38:32.931 [2024-10-09 11:18:52.884882] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.931 [2024-10-09 11:18:52.884938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.931 [2024-10-09 11:18:52.884951] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.931 [2024-10-09 11:18:52.884959] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.931 [2024-10-09 11:18:52.884965] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:32.931 [2024-10-09 11:18:52.884979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.931 qpair failed and we were unable to recover it. 00:38:32.931 [2024-10-09 11:18:52.895036] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.931 [2024-10-09 11:18:52.895116] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.931 [2024-10-09 11:18:52.895130] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.931 [2024-10-09 11:18:52.895137] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.931 [2024-10-09 11:18:52.895144] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:32.931 [2024-10-09 11:18:52.895158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.931 qpair failed and we were unable to recover it. 
00:38:32.931 [2024-10-09 11:18:52.904965] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.931 [2024-10-09 11:18:52.905018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.931 [2024-10-09 11:18:52.905033] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.931 [2024-10-09 11:18:52.905040] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.931 [2024-10-09 11:18:52.905046] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:32.931 [2024-10-09 11:18:52.905061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.931 qpair failed and we were unable to recover it. 00:38:32.931 [2024-10-09 11:18:52.914926] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.931 [2024-10-09 11:18:52.914978] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.931 [2024-10-09 11:18:52.914993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.931 [2024-10-09 11:18:52.915000] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.931 [2024-10-09 11:18:52.915007] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:32.931 [2024-10-09 11:18:52.915021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.931 qpair failed and we were unable to recover it. 00:38:32.931 [2024-10-09 11:18:52.925001] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:32.931 [2024-10-09 11:18:52.925054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:32.931 [2024-10-09 11:18:52.925068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:32.931 [2024-10-09 11:18:52.925075] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:32.931 [2024-10-09 11:18:52.925082] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:32.931 [2024-10-09 11:18:52.925095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.931 qpair failed and we were unable to recover it. 
00:38:33.193 [2024-10-09 11:18:52.934989] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.193 [2024-10-09 11:18:52.935033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.193 [2024-10-09 11:18:52.935047] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.193 [2024-10-09 11:18:52.935054] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.193 [2024-10-09 11:18:52.935061] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:33.193 [2024-10-09 11:18:52.935074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.193 qpair failed and we were unable to recover it. 00:38:33.193 [2024-10-09 11:18:52.945007] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.193 [2024-10-09 11:18:52.945054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.193 [2024-10-09 11:18:52.945067] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.193 [2024-10-09 11:18:52.945074] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.193 [2024-10-09 11:18:52.945085] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:33.193 [2024-10-09 11:18:52.945098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.193 qpair failed and we were unable to recover it. 00:38:33.193 [2024-10-09 11:18:52.955073] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.193 [2024-10-09 11:18:52.955130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.193 [2024-10-09 11:18:52.955143] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.193 [2024-10-09 11:18:52.955151] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.193 [2024-10-09 11:18:52.955157] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:33.193 [2024-10-09 11:18:52.955171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.193 qpair failed and we were unable to recover it. 
00:38:33.193 [2024-10-09 11:18:52.964912] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.193 [2024-10-09 11:18:52.964959] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.193 [2024-10-09 11:18:52.964974] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.193 [2024-10-09 11:18:52.964981] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.193 [2024-10-09 11:18:52.964988] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:33.193 [2024-10-09 11:18:52.965001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.193 qpair failed and we were unable to recover it. 00:38:33.193 [2024-10-09 11:18:52.975034] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.193 [2024-10-09 11:18:52.975088] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.193 [2024-10-09 11:18:52.975101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.193 [2024-10-09 11:18:52.975108] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.193 [2024-10-09 11:18:52.975116] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:33.193 [2024-10-09 11:18:52.975129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.193 qpair failed and we were unable to recover it. 00:38:33.193 [2024-10-09 11:18:52.984931] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.193 [2024-10-09 11:18:52.984993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.193 [2024-10-09 11:18:52.985006] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.193 [2024-10-09 11:18:52.985013] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.193 [2024-10-09 11:18:52.985020] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:33.193 [2024-10-09 11:18:52.985033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.193 qpair failed and we were unable to recover it. 
00:38:33.193 [2024-10-09 11:18:52.995036] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.193 [2024-10-09 11:18:52.995094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.193 [2024-10-09 11:18:52.995108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.193 [2024-10-09 11:18:52.995116] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.193 [2024-10-09 11:18:52.995122] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:33.193 [2024-10-09 11:18:52.995136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.193 qpair failed and we were unable to recover it. 00:38:33.193 [2024-10-09 11:18:53.005063] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.193 [2024-10-09 11:18:53.005155] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.193 [2024-10-09 11:18:53.005168] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.193 [2024-10-09 11:18:53.005176] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.193 [2024-10-09 11:18:53.005183] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:33.193 [2024-10-09 11:18:53.005196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.193 qpair failed and we were unable to recover it. 00:38:33.193 [2024-10-09 11:18:53.015002] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.193 [2024-10-09 11:18:53.015052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.193 [2024-10-09 11:18:53.015066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.193 [2024-10-09 11:18:53.015073] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.193 [2024-10-09 11:18:53.015080] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:33.193 [2024-10-09 11:18:53.015093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.193 qpair failed and we were unable to recover it. 
00:38:33.193 [2024-10-09 11:18:53.025042] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.193 [2024-10-09 11:18:53.025121] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.193 [2024-10-09 11:18:53.025135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.193 [2024-10-09 11:18:53.025143] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.193 [2024-10-09 11:18:53.025150] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:33.193 [2024-10-09 11:18:53.025163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.193 qpair failed and we were unable to recover it. 00:38:33.193 [2024-10-09 11:18:53.035093] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.193 [2024-10-09 11:18:53.035147] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.193 [2024-10-09 11:18:53.035162] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.193 [2024-10-09 11:18:53.035169] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.193 [2024-10-09 11:18:53.035179] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:33.193 [2024-10-09 11:18:53.035193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.193 qpair failed and we were unable to recover it. 00:38:33.193 [2024-10-09 11:18:53.045062] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.193 [2024-10-09 11:18:53.045116] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.193 [2024-10-09 11:18:53.045130] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.193 [2024-10-09 11:18:53.045137] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.193 [2024-10-09 11:18:53.045144] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:33.193 [2024-10-09 11:18:53.045158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.193 qpair failed and we were unable to recover it. 
00:38:33.193 [2024-10-09 11:18:53.054937] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:33.193 [2024-10-09 11:18:53.054987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:33.193 [2024-10-09 11:18:53.055001] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:33.193 [2024-10-09 11:18:53.055008] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:33.193 [2024-10-09 11:18:53.055015] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:33.193 [2024-10-09 11:18:53.055028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:33.193 qpair failed and we were unable to recover it.
00:38:33.194 [2024-10-09 11:18:53.065056] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:33.194 [2024-10-09 11:18:53.065105] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:33.194 [2024-10-09 11:18:53.065119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:33.194 [2024-10-09 11:18:53.065126] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:33.194 [2024-10-09 11:18:53.065133] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:33.194 [2024-10-09 11:18:53.065147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:33.194 qpair failed and we were unable to recover it.
00:38:33.194 [2024-10-09 11:18:53.075091] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:33.194 [2024-10-09 11:18:53.075150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:33.194 [2024-10-09 11:18:53.075164] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:33.194 [2024-10-09 11:18:53.075171] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:33.194 [2024-10-09 11:18:53.075178] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:33.194 [2024-10-09 11:18:53.075191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:33.194 qpair failed and we were unable to recover it.
00:38:33.194 [2024-10-09 11:18:53.085073] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:33.194 [2024-10-09 11:18:53.085134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:33.194 [2024-10-09 11:18:53.085160] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:33.194 [2024-10-09 11:18:53.085168] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:33.194 [2024-10-09 11:18:53.085175] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:33.194 [2024-10-09 11:18:53.085195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:33.194 qpair failed and we were unable to recover it.
00:38:33.194 [2024-10-09 11:18:53.095074] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:33.194 [2024-10-09 11:18:53.095128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:33.194 [2024-10-09 11:18:53.095153] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:33.194 [2024-10-09 11:18:53.095162] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:33.194 [2024-10-09 11:18:53.095169] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:33.194 [2024-10-09 11:18:53.095187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:33.194 qpair failed and we were unable to recover it.
00:38:33.194 [2024-10-09 11:18:53.105070] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:33.194 [2024-10-09 11:18:53.105127] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:33.194 [2024-10-09 11:18:53.105152] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:33.194 [2024-10-09 11:18:53.105161] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:33.194 [2024-10-09 11:18:53.105168] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:33.194 [2024-10-09 11:18:53.105187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:33.194 qpair failed and we were unable to recover it.
00:38:33.194 [2024-10-09 11:18:53.115108] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:33.194 [2024-10-09 11:18:53.115166] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:33.194 [2024-10-09 11:18:53.115184] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:33.194 [2024-10-09 11:18:53.115191] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:33.194 [2024-10-09 11:18:53.115198] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:33.194 [2024-10-09 11:18:53.115213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:33.194 qpair failed and we were unable to recover it.
00:38:33.194 [2024-10-09 11:18:53.125003] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:33.194 [2024-10-09 11:18:53.125051] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:33.194 [2024-10-09 11:18:53.125065] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:33.194 [2024-10-09 11:18:53.125077] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:33.194 [2024-10-09 11:18:53.125084] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:33.194 [2024-10-09 11:18:53.125098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:33.194 qpair failed and we were unable to recover it.
00:38:33.194 [2024-10-09 11:18:53.135070] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:33.194 [2024-10-09 11:18:53.135130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:33.194 [2024-10-09 11:18:53.135145] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:33.194 [2024-10-09 11:18:53.135153] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:33.194 [2024-10-09 11:18:53.135161] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:33.194 [2024-10-09 11:18:53.135178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:33.194 qpair failed and we were unable to recover it.
00:38:33.194 [2024-10-09 11:18:53.145077] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:33.194 [2024-10-09 11:18:53.145127] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:33.194 [2024-10-09 11:18:53.145141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:33.194 [2024-10-09 11:18:53.145148] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:33.194 [2024-10-09 11:18:53.145155] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:33.194 [2024-10-09 11:18:53.145169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:33.194 qpair failed and we were unable to recover it.
00:38:33.194 [2024-10-09 11:18:53.155135] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:33.194 [2024-10-09 11:18:53.155191] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:33.194 [2024-10-09 11:18:53.155208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:33.194 [2024-10-09 11:18:53.155216] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:33.194 [2024-10-09 11:18:53.155223] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:33.194 [2024-10-09 11:18:53.155238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:33.194 qpair failed and we were unable to recover it.
00:38:33.194 [2024-10-09 11:18:53.165242] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:33.194 [2024-10-09 11:18:53.165291] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:33.194 [2024-10-09 11:18:53.165305] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:33.194 [2024-10-09 11:18:53.165312] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:33.194 [2024-10-09 11:18:53.165319] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:33.194 [2024-10-09 11:18:53.165333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:33.194 qpair failed and we were unable to recover it.
00:38:33.194 [2024-10-09 11:18:53.175091] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:33.194 [2024-10-09 11:18:53.175142] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:33.194 [2024-10-09 11:18:53.175156] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:33.194 [2024-10-09 11:18:53.175163] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:33.194 [2024-10-09 11:18:53.175170] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:33.194 [2024-10-09 11:18:53.175184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:33.194 qpair failed and we were unable to recover it.
00:38:33.194 [2024-10-09 11:18:53.185083] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:33.194 [2024-10-09 11:18:53.185130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:33.194 [2024-10-09 11:18:53.185143] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:33.194 [2024-10-09 11:18:53.185151] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:33.194 [2024-10-09 11:18:53.185158] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:33.194 [2024-10-09 11:18:53.185172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:33.194 qpair failed and we were unable to recover it.
00:38:33.456 [2024-10-09 11:18:53.195132] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:33.456 [2024-10-09 11:18:53.195195] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:33.456 [2024-10-09 11:18:53.195220] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:33.456 [2024-10-09 11:18:53.195229] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:33.456 [2024-10-09 11:18:53.195236] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:33.456 [2024-10-09 11:18:53.195256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:33.456 qpair failed and we were unable to recover it.
00:38:33.456 [2024-10-09 11:18:53.205127] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:33.456 [2024-10-09 11:18:53.205180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:33.456 [2024-10-09 11:18:53.205196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:33.456 [2024-10-09 11:18:53.205204] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:33.456 [2024-10-09 11:18:53.205211] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:33.456 [2024-10-09 11:18:53.205226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:33.456 qpair failed and we were unable to recover it.
00:38:33.456 [2024-10-09 11:18:53.215116] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:33.456 [2024-10-09 11:18:53.215166] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:33.456 [2024-10-09 11:18:53.215180] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:33.456 [2024-10-09 11:18:53.215192] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:33.456 [2024-10-09 11:18:53.215199] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:33.456 [2024-10-09 11:18:53.215213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:33.456 qpair failed and we were unable to recover it.
00:38:33.456 [2024-10-09 11:18:53.225121] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:33.456 [2024-10-09 11:18:53.225175] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:33.456 [2024-10-09 11:18:53.225200] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:33.456 [2024-10-09 11:18:53.225208] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:33.456 [2024-10-09 11:18:53.225216] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:33.456 [2024-10-09 11:18:53.225234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:33.456 qpair failed and we were unable to recover it.
00:38:33.456 [2024-10-09 11:18:53.235171] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:33.456 [2024-10-09 11:18:53.235236] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:33.456 [2024-10-09 11:18:53.235261] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:33.456 [2024-10-09 11:18:53.235269] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:33.456 [2024-10-09 11:18:53.235277] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:33.456 [2024-10-09 11:18:53.235296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:33.456 qpair failed and we were unable to recover it.
00:38:33.456 [2024-10-09 11:18:53.245141] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:33.456 [2024-10-09 11:18:53.245195] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:33.456 [2024-10-09 11:18:53.245211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:33.456 [2024-10-09 11:18:53.245218] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:33.456 [2024-10-09 11:18:53.245225] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:33.456 [2024-10-09 11:18:53.245240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:33.456 qpair failed and we were unable to recover it.
00:38:33.456 [2024-10-09 11:18:53.255127] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:33.456 [2024-10-09 11:18:53.255173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:33.456 [2024-10-09 11:18:53.255187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:33.456 [2024-10-09 11:18:53.255195] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:33.456 [2024-10-09 11:18:53.255201] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:33.456 [2024-10-09 11:18:53.255216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:33.456 qpair failed and we were unable to recover it.
00:38:33.456 [2024-10-09 11:18:53.265149] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:33.456 [2024-10-09 11:18:53.265197] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:33.456 [2024-10-09 11:18:53.265212] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:33.456 [2024-10-09 11:18:53.265220] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:33.457 [2024-10-09 11:18:53.265227] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:33.457 [2024-10-09 11:18:53.265241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:33.457 qpair failed and we were unable to recover it.
00:38:33.457 [2024-10-09 11:18:53.275102] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:33.457 [2024-10-09 11:18:53.275156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:33.457 [2024-10-09 11:18:53.275170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:33.457 [2024-10-09 11:18:53.275177] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:33.457 [2024-10-09 11:18:53.275184] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:33.457 [2024-10-09 11:18:53.275198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:33.457 qpair failed and we were unable to recover it.
00:38:33.457 [2024-10-09 11:18:53.285161] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:33.457 [2024-10-09 11:18:53.285215] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:33.457 [2024-10-09 11:18:53.285228] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:33.457 [2024-10-09 11:18:53.285235] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:33.457 [2024-10-09 11:18:53.285242] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:33.457 [2024-10-09 11:18:53.285256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:33.457 qpair failed and we were unable to recover it.
00:38:33.457 [2024-10-09 11:18:53.295147] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:33.457 [2024-10-09 11:18:53.295199] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:33.457 [2024-10-09 11:18:53.295213] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:33.457 [2024-10-09 11:18:53.295220] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:33.457 [2024-10-09 11:18:53.295227] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:33.457 [2024-10-09 11:18:53.295240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:33.457 qpair failed and we were unable to recover it.
00:38:33.457 [2024-10-09 11:18:53.305208] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:33.457 [2024-10-09 11:18:53.305284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:33.457 [2024-10-09 11:18:53.305298] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:33.457 [2024-10-09 11:18:53.305309] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:33.457 [2024-10-09 11:18:53.305316] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:33.457 [2024-10-09 11:18:53.305329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:33.457 qpair failed and we were unable to recover it.
00:38:33.457 [2024-10-09 11:18:53.315190] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:33.457 [2024-10-09 11:18:53.315246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:33.457 [2024-10-09 11:18:53.315261] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:33.457 [2024-10-09 11:18:53.315268] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:33.457 [2024-10-09 11:18:53.315275] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:33.457 [2024-10-09 11:18:53.315289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:33.457 qpair failed and we were unable to recover it.
00:38:33.457 [2024-10-09 11:18:53.325183] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:33.457 [2024-10-09 11:18:53.325236] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:33.457 [2024-10-09 11:18:53.325250] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:33.457 [2024-10-09 11:18:53.325257] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:33.457 [2024-10-09 11:18:53.325264] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:33.457 [2024-10-09 11:18:53.325278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:33.457 qpair failed and we were unable to recover it.
00:38:33.457 [2024-10-09 11:18:53.335164] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:33.457 [2024-10-09 11:18:53.335215] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:33.457 [2024-10-09 11:18:53.335229] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:33.457 [2024-10-09 11:18:53.335236] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:33.457 [2024-10-09 11:18:53.335243] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:33.457 [2024-10-09 11:18:53.335256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:33.457 qpair failed and we were unable to recover it.
00:38:33.457 [2024-10-09 11:18:53.345178] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:33.457 [2024-10-09 11:18:53.345225] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:33.457 [2024-10-09 11:18:53.345240] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:33.457 [2024-10-09 11:18:53.345248] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:33.457 [2024-10-09 11:18:53.345254] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:33.457 [2024-10-09 11:18:53.345268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:33.457 qpair failed and we were unable to recover it.
00:38:33.457 [2024-10-09 11:18:53.355215] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:33.457 [2024-10-09 11:18:53.355278] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:33.457 [2024-10-09 11:18:53.355303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:33.457 [2024-10-09 11:18:53.355312] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:33.457 [2024-10-09 11:18:53.355319] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:33.457 [2024-10-09 11:18:53.355338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:33.457 qpair failed and we were unable to recover it.
00:38:33.457 [2024-10-09 11:18:53.365196] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:33.457 [2024-10-09 11:18:53.365279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:33.457 [2024-10-09 11:18:53.365295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:33.457 [2024-10-09 11:18:53.365303] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:33.457 [2024-10-09 11:18:53.365310] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:33.457 [2024-10-09 11:18:53.365325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:33.457 qpair failed and we were unable to recover it.
00:38:33.457 [2024-10-09 11:18:53.375142] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:33.457 [2024-10-09 11:18:53.375191] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:33.457 [2024-10-09 11:18:53.375206] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:33.457 [2024-10-09 11:18:53.375213] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:33.457 [2024-10-09 11:18:53.375220] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:33.457 [2024-10-09 11:18:53.375233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:33.457 qpair failed and we were unable to recover it.
00:38:33.457 [2024-10-09 11:18:53.385175] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:33.457 [2024-10-09 11:18:53.385225] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:33.457 [2024-10-09 11:18:53.385239] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:33.457 [2024-10-09 11:18:53.385246] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:33.457 [2024-10-09 11:18:53.385253] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:33.457 [2024-10-09 11:18:53.385266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:33.457 qpair failed and we were unable to recover it.
00:38:33.457 [2024-10-09 11:18:53.395229] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:33.457 [2024-10-09 11:18:53.395294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:33.457 [2024-10-09 11:18:53.395307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:33.457 [2024-10-09 11:18:53.395319] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:33.457 [2024-10-09 11:18:53.395326] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:33.457 [2024-10-09 11:18:53.395339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:33.457 qpair failed and we were unable to recover it.
00:38:33.457 [2024-10-09 11:18:53.405194] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:33.457 [2024-10-09 11:18:53.405243] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:33.458 [2024-10-09 11:18:53.405257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:33.458 [2024-10-09 11:18:53.405265] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:33.458 [2024-10-09 11:18:53.405272] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:33.458 [2024-10-09 11:18:53.405285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:33.458 qpair failed and we were unable to recover it.
00:38:33.458 [2024-10-09 11:18:53.415182] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:33.458 [2024-10-09 11:18:53.415230] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:33.458 [2024-10-09 11:18:53.415244] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:33.458 [2024-10-09 11:18:53.415252] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:33.458 [2024-10-09 11:18:53.415258] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:33.458 [2024-10-09 11:18:53.415272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:33.458 qpair failed and we were unable to recover it.
00:38:33.458 [2024-10-09 11:18:53.425075] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:33.458 [2024-10-09 11:18:53.425122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:33.458 [2024-10-09 11:18:53.425136] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:33.458 [2024-10-09 11:18:53.425143] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:33.458 [2024-10-09 11:18:53.425150] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:33.458 [2024-10-09 11:18:53.425164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:33.458 qpair failed and we were unable to recover it.
00:38:33.458 [2024-10-09 11:18:53.435243] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:33.458 [2024-10-09 11:18:53.435307] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:33.458 [2024-10-09 11:18:53.435320] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:33.458 [2024-10-09 11:18:53.435328] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:33.458 [2024-10-09 11:18:53.435334] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:33.458 [2024-10-09 11:18:53.435348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:33.458 qpair failed and we were unable to recover it.
00:38:33.458 [2024-10-09 11:18:53.445207] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:33.458 [2024-10-09 11:18:53.445265] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:33.458 [2024-10-09 11:18:53.445279] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:33.458 [2024-10-09 11:18:53.445286] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:33.458 [2024-10-09 11:18:53.445293] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:33.458 [2024-10-09 11:18:53.445306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:33.458 qpair failed and we were unable to recover it.
00:38:33.458 [2024-10-09 11:18:53.455071] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:33.458 [2024-10-09 11:18:53.455123] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:33.458 [2024-10-09 11:18:53.455137] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:33.458 [2024-10-09 11:18:53.455144] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:33.458 [2024-10-09 11:18:53.455151] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:33.458 [2024-10-09 11:18:53.455164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:33.458 qpair failed and we were unable to recover it.
00:38:33.719 [2024-10-09 11:18:53.465092] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:33.719 [2024-10-09 11:18:53.465158] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:33.719 [2024-10-09 11:18:53.465173] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:33.719 [2024-10-09 11:18:53.465180] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:33.719 [2024-10-09 11:18:53.465186] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:33.719 [2024-10-09 11:18:53.465200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:33.719 qpair failed and we were unable to recover it.
00:38:33.719 [2024-10-09 11:18:53.475264] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:33.719 [2024-10-09 11:18:53.475320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:33.719 [2024-10-09 11:18:53.475333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:33.719 [2024-10-09 11:18:53.475340] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:33.719 [2024-10-09 11:18:53.475347] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:33.719 [2024-10-09 11:18:53.475360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:33.719 qpair failed and we were unable to recover it.
00:38:33.719 [2024-10-09 11:18:53.485233] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:33.719 [2024-10-09 11:18:53.485289] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:33.719 [2024-10-09 11:18:53.485306] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:33.719 [2024-10-09 11:18:53.485313] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:33.719 [2024-10-09 11:18:53.485319] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:33.719 [2024-10-09 11:18:53.485333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:33.719 qpair failed and we were unable to recover it.
00:38:33.719 [2024-10-09 11:18:53.495216] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:33.719 [2024-10-09 11:18:53.495265] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:33.719 [2024-10-09 11:18:53.495278] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:33.719 [2024-10-09 11:18:53.495285] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:33.719 [2024-10-09 11:18:53.495292] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:33.719 [2024-10-09 11:18:53.495305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:33.719 qpair failed and we were unable to recover it.
00:38:33.719 [2024-10-09 11:18:53.505127] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:33.719 [2024-10-09 11:18:53.505172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:33.719 [2024-10-09 11:18:53.505186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:33.719 [2024-10-09 11:18:53.505193] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:33.719 [2024-10-09 11:18:53.505200] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:33.719 [2024-10-09 11:18:53.505213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:33.719 qpair failed and we were unable to recover it.
00:38:33.719 [2024-10-09 11:18:53.515131] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:33.719 [2024-10-09 11:18:53.515190] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:33.719 [2024-10-09 11:18:53.515204] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:33.719 [2024-10-09 11:18:53.515211] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:33.719 [2024-10-09 11:18:53.515217] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:33.719 [2024-10-09 11:18:53.515231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:33.719 qpair failed and we were unable to recover it.
00:38:33.719 [2024-10-09 11:18:53.525246] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:33.719 [2024-10-09 11:18:53.525296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:33.719 [2024-10-09 11:18:53.525311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:33.719 [2024-10-09 11:18:53.525318] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:33.719 [2024-10-09 11:18:53.525325] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:33.719 [2024-10-09 11:18:53.525339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:33.719 qpair failed and we were unable to recover it.
00:38:33.720 [2024-10-09 11:18:53.535234] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:33.720 [2024-10-09 11:18:53.535283] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:33.720 [2024-10-09 11:18:53.535296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:33.720 [2024-10-09 11:18:53.535304] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:33.720 [2024-10-09 11:18:53.535310] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:33.720 [2024-10-09 11:18:53.535324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:33.720 qpair failed and we were unable to recover it.
00:38:33.720 [2024-10-09 11:18:53.545206] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:33.720 [2024-10-09 11:18:53.545254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:33.720 [2024-10-09 11:18:53.545267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:33.720 [2024-10-09 11:18:53.545275] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:33.720 [2024-10-09 11:18:53.545281] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:33.720 [2024-10-09 11:18:53.545295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:33.720 qpair failed and we were unable to recover it.
00:38:33.720 [2024-10-09 11:18:53.555270] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:33.720 [2024-10-09 11:18:53.555328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:33.720 [2024-10-09 11:18:53.555342] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:33.720 [2024-10-09 11:18:53.555349] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:33.720 [2024-10-09 11:18:53.555356] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:33.720 [2024-10-09 11:18:53.555370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:33.720 qpair failed and we were unable to recover it.
00:38:33.720 [2024-10-09 11:18:53.565249] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:33.720 [2024-10-09 11:18:53.565299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:33.720 [2024-10-09 11:18:53.565313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:33.720 [2024-10-09 11:18:53.565320] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:33.720 [2024-10-09 11:18:53.565327] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:33.720 [2024-10-09 11:18:53.565341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:33.720 qpair failed and we were unable to recover it.
00:38:33.720 [2024-10-09 11:18:53.575222] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:33.720 [2024-10-09 11:18:53.575269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:33.720 [2024-10-09 11:18:53.575285] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:33.720 [2024-10-09 11:18:53.575293] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:33.720 [2024-10-09 11:18:53.575300] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:33.720 [2024-10-09 11:18:53.575313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:33.720 qpair failed and we were unable to recover it.
00:38:33.720 [2024-10-09 11:18:53.585145] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:33.720 [2024-10-09 11:18:53.585192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:33.720 [2024-10-09 11:18:53.585206] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:33.720 [2024-10-09 11:18:53.585213] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:33.720 [2024-10-09 11:18:53.585219] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:33.720 [2024-10-09 11:18:53.585233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:33.720 qpair failed and we were unable to recover it.
00:38:33.720 [2024-10-09 11:18:53.595294] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:33.720 [2024-10-09 11:18:53.595373] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:33.720 [2024-10-09 11:18:53.595386] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:33.720 [2024-10-09 11:18:53.595393] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:33.720 [2024-10-09 11:18:53.595400] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:33.720 [2024-10-09 11:18:53.595413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:33.720 qpair failed and we were unable to recover it.
00:38:33.720 [2024-10-09 11:18:53.605241] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:33.720 [2024-10-09 11:18:53.605297] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:33.720 [2024-10-09 11:18:53.605310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:33.720 [2024-10-09 11:18:53.605317] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:33.720 [2024-10-09 11:18:53.605324] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:33.720 [2024-10-09 11:18:53.605337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:33.720 qpair failed and we were unable to recover it.
00:38:33.720 [2024-10-09 11:18:53.615135] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:33.720 [2024-10-09 11:18:53.615239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:33.720 [2024-10-09 11:18:53.615253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:33.720 [2024-10-09 11:18:53.615261] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:33.720 [2024-10-09 11:18:53.615268] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360
00:38:33.720 [2024-10-09 11:18:53.615285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:38:33.720 qpair failed and we were unable to recover it.
00:38:33.720 [2024-10-09 11:18:53.625264] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.720 [2024-10-09 11:18:53.625313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.720 [2024-10-09 11:18:53.625326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.720 [2024-10-09 11:18:53.625333] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.720 [2024-10-09 11:18:53.625340] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:33.720 [2024-10-09 11:18:53.625353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.720 qpair failed and we were unable to recover it. 00:38:33.720 [2024-10-09 11:18:53.635189] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.720 [2024-10-09 11:18:53.635279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.720 [2024-10-09 11:18:53.635294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.720 [2024-10-09 11:18:53.635302] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.720 [2024-10-09 11:18:53.635308] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:33.720 [2024-10-09 11:18:53.635323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.720 qpair failed and we were unable to recover it. 00:38:33.720 [2024-10-09 11:18:53.645153] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.720 [2024-10-09 11:18:53.645222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.720 [2024-10-09 11:18:53.645236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.720 [2024-10-09 11:18:53.645243] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.720 [2024-10-09 11:18:53.645250] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:33.720 [2024-10-09 11:18:53.645263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.720 qpair failed and we were unable to recover it. 
00:38:33.720 [2024-10-09 11:18:53.655247] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.720 [2024-10-09 11:18:53.655295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.720 [2024-10-09 11:18:53.655308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.720 [2024-10-09 11:18:53.655316] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.720 [2024-10-09 11:18:53.655322] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:33.720 [2024-10-09 11:18:53.655335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.720 qpair failed and we were unable to recover it. 00:38:33.720 [2024-10-09 11:18:53.665249] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.720 [2024-10-09 11:18:53.665301] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.720 [2024-10-09 11:18:53.665318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.720 [2024-10-09 11:18:53.665325] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.721 [2024-10-09 11:18:53.665332] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:33.721 [2024-10-09 11:18:53.665345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.721 qpair failed and we were unable to recover it. 00:38:33.721 [2024-10-09 11:18:53.675320] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.721 [2024-10-09 11:18:53.675386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.721 [2024-10-09 11:18:53.675400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.721 [2024-10-09 11:18:53.675407] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.721 [2024-10-09 11:18:53.675414] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:33.721 [2024-10-09 11:18:53.675428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.721 qpair failed and we were unable to recover it. 
00:38:33.721 [2024-10-09 11:18:53.685281] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.721 [2024-10-09 11:18:53.685350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.721 [2024-10-09 11:18:53.685363] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.721 [2024-10-09 11:18:53.685370] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.721 [2024-10-09 11:18:53.685377] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:33.721 [2024-10-09 11:18:53.685390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.721 qpair failed and we were unable to recover it. 00:38:33.721 [2024-10-09 11:18:53.695282] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.721 [2024-10-09 11:18:53.695328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.721 [2024-10-09 11:18:53.695341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.721 [2024-10-09 11:18:53.695349] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.721 [2024-10-09 11:18:53.695355] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:33.721 [2024-10-09 11:18:53.695368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.721 qpair failed and we were unable to recover it. 00:38:33.721 [2024-10-09 11:18:53.705325] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.721 [2024-10-09 11:18:53.705402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.721 [2024-10-09 11:18:53.705415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.721 [2024-10-09 11:18:53.705422] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.721 [2024-10-09 11:18:53.705430] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:33.721 [2024-10-09 11:18:53.705447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.721 qpair failed and we were unable to recover it. 
00:38:33.721 [2024-10-09 11:18:53.715329] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.721 [2024-10-09 11:18:53.715412] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.721 [2024-10-09 11:18:53.715426] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.721 [2024-10-09 11:18:53.715433] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.721 [2024-10-09 11:18:53.715441] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:33.721 [2024-10-09 11:18:53.715454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.721 qpair failed and we were unable to recover it. 00:38:33.982 [2024-10-09 11:18:53.725304] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.982 [2024-10-09 11:18:53.725355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.982 [2024-10-09 11:18:53.725369] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.982 [2024-10-09 11:18:53.725376] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.982 [2024-10-09 11:18:53.725382] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:33.982 [2024-10-09 11:18:53.725396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.982 qpair failed and we were unable to recover it. 00:38:33.982 [2024-10-09 11:18:53.735293] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.982 [2024-10-09 11:18:53.735347] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.982 [2024-10-09 11:18:53.735362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.982 [2024-10-09 11:18:53.735369] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.982 [2024-10-09 11:18:53.735376] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:33.982 [2024-10-09 11:18:53.735389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.982 qpair failed and we were unable to recover it. 
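[Editor's note] If the ~10 ms cadence matters (it tracks the connect-poll retry interval rather than anything target-side), the bracketed timestamps of each rejection can be pulled out directly. A sketch, again over a stand-in build.log:

  grep -o '\[2024-10-09 [^]]*\] ctrlr.c: 762' build.log | cut -d' ' -f2 | tr -d ']'

The flood continues below.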
00:38:33.982 [2024-10-09 11:18:53.745302] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.982 [2024-10-09 11:18:53.745348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.982 [2024-10-09 11:18:53.745362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.982 [2024-10-09 11:18:53.745368] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.982 [2024-10-09 11:18:53.745375] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:33.982 [2024-10-09 11:18:53.745388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.982 qpair failed and we were unable to recover it. 00:38:33.982 [2024-10-09 11:18:53.755341] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.982 [2024-10-09 11:18:53.755397] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.982 [2024-10-09 11:18:53.755414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.982 [2024-10-09 11:18:53.755421] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.982 [2024-10-09 11:18:53.755427] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:33.982 [2024-10-09 11:18:53.755441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.982 qpair failed and we were unable to recover it. 00:38:33.982 [2024-10-09 11:18:53.765289] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.982 [2024-10-09 11:18:53.765345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.982 [2024-10-09 11:18:53.765359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.982 [2024-10-09 11:18:53.765366] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.982 [2024-10-09 11:18:53.765373] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:33.982 [2024-10-09 11:18:53.765386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.982 qpair failed and we were unable to recover it. 
00:38:33.982 [2024-10-09 11:18:53.775220] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.982 [2024-10-09 11:18:53.775273] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.983 [2024-10-09 11:18:53.775286] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.983 [2024-10-09 11:18:53.775293] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.983 [2024-10-09 11:18:53.775299] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:33.983 [2024-10-09 11:18:53.775313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.983 qpair failed and we were unable to recover it. 00:38:33.983 [2024-10-09 11:18:53.785308] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.983 [2024-10-09 11:18:53.785361] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.983 [2024-10-09 11:18:53.785374] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.983 [2024-10-09 11:18:53.785381] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.983 [2024-10-09 11:18:53.785387] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:33.983 [2024-10-09 11:18:53.785401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.983 qpair failed and we were unable to recover it. 00:38:33.983 [2024-10-09 11:18:53.795353] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.983 [2024-10-09 11:18:53.795408] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.983 [2024-10-09 11:18:53.795422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.983 [2024-10-09 11:18:53.795429] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.983 [2024-10-09 11:18:53.795436] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:33.983 [2024-10-09 11:18:53.795452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.983 qpair failed and we were unable to recover it. 
00:38:33.983 [2024-10-09 11:18:53.805191] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.983 [2024-10-09 11:18:53.805269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.983 [2024-10-09 11:18:53.805282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.983 [2024-10-09 11:18:53.805289] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.983 [2024-10-09 11:18:53.805296] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:33.983 [2024-10-09 11:18:53.805309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.983 qpair failed and we were unable to recover it. 00:38:33.983 [2024-10-09 11:18:53.815266] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.983 [2024-10-09 11:18:53.815319] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.983 [2024-10-09 11:18:53.815332] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.983 [2024-10-09 11:18:53.815339] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.983 [2024-10-09 11:18:53.815346] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:33.983 [2024-10-09 11:18:53.815359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.983 qpair failed and we were unable to recover it. 00:38:33.983 [2024-10-09 11:18:53.825292] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.983 [2024-10-09 11:18:53.825341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.983 [2024-10-09 11:18:53.825354] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.983 [2024-10-09 11:18:53.825361] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.983 [2024-10-09 11:18:53.825367] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:33.983 [2024-10-09 11:18:53.825381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.983 qpair failed and we were unable to recover it. 
00:38:33.983 [2024-10-09 11:18:53.835373] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.983 [2024-10-09 11:18:53.835429] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.983 [2024-10-09 11:18:53.835443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.983 [2024-10-09 11:18:53.835450] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.983 [2024-10-09 11:18:53.835456] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:33.983 [2024-10-09 11:18:53.835473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.983 qpair failed and we were unable to recover it. 00:38:33.983 [2024-10-09 11:18:53.845215] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.983 [2024-10-09 11:18:53.845263] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.983 [2024-10-09 11:18:53.845284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.983 [2024-10-09 11:18:53.845291] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.983 [2024-10-09 11:18:53.845298] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:33.983 [2024-10-09 11:18:53.845311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.983 qpair failed and we were unable to recover it. 00:38:33.983 [2024-10-09 11:18:53.855338] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.983 [2024-10-09 11:18:53.855388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.983 [2024-10-09 11:18:53.855402] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.983 [2024-10-09 11:18:53.855409] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.983 [2024-10-09 11:18:53.855416] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:33.983 [2024-10-09 11:18:53.855429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.983 qpair failed and we were unable to recover it. 
00:38:33.983 [2024-10-09 11:18:53.865216] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.983 [2024-10-09 11:18:53.865266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.983 [2024-10-09 11:18:53.865280] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.983 [2024-10-09 11:18:53.865287] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.983 [2024-10-09 11:18:53.865294] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:33.983 [2024-10-09 11:18:53.865307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.983 qpair failed and we were unable to recover it. 00:38:33.983 [2024-10-09 11:18:53.875388] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.983 [2024-10-09 11:18:53.875445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.983 [2024-10-09 11:18:53.875459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.983 [2024-10-09 11:18:53.875469] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.983 [2024-10-09 11:18:53.875476] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:33.983 [2024-10-09 11:18:53.875490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.983 qpair failed and we were unable to recover it. 00:38:33.983 [2024-10-09 11:18:53.885381] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.983 [2024-10-09 11:18:53.885437] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.983 [2024-10-09 11:18:53.885450] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.983 [2024-10-09 11:18:53.885457] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.983 [2024-10-09 11:18:53.885464] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:33.983 [2024-10-09 11:18:53.885484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.983 qpair failed and we were unable to recover it. 
00:38:33.983 [2024-10-09 11:18:53.895316] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.983 [2024-10-09 11:18:53.895364] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.983 [2024-10-09 11:18:53.895377] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.983 [2024-10-09 11:18:53.895384] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.983 [2024-10-09 11:18:53.895391] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:33.983 [2024-10-09 11:18:53.895405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.983 qpair failed and we were unable to recover it. 00:38:33.983 [2024-10-09 11:18:53.905354] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.983 [2024-10-09 11:18:53.905404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.983 [2024-10-09 11:18:53.905417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.983 [2024-10-09 11:18:53.905424] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.983 [2024-10-09 11:18:53.905430] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:33.983 [2024-10-09 11:18:53.905444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.983 qpair failed and we were unable to recover it. 00:38:33.983 [2024-10-09 11:18:53.915409] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.983 [2024-10-09 11:18:53.915485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.983 [2024-10-09 11:18:53.915499] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.983 [2024-10-09 11:18:53.915506] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.984 [2024-10-09 11:18:53.915512] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:33.984 [2024-10-09 11:18:53.915526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.984 qpair failed and we were unable to recover it. 
00:38:33.984 [2024-10-09 11:18:53.925380] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.984 [2024-10-09 11:18:53.925436] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.984 [2024-10-09 11:18:53.925449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.984 [2024-10-09 11:18:53.925456] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.984 [2024-10-09 11:18:53.925463] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:33.984 [2024-10-09 11:18:53.925480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.984 qpair failed and we were unable to recover it. 00:38:33.984 [2024-10-09 11:18:53.935378] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.984 [2024-10-09 11:18:53.935428] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.984 [2024-10-09 11:18:53.935445] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.984 [2024-10-09 11:18:53.935452] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.984 [2024-10-09 11:18:53.935459] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:33.984 [2024-10-09 11:18:53.935475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.984 qpair failed and we were unable to recover it. 00:38:33.984 [2024-10-09 11:18:53.945358] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.984 [2024-10-09 11:18:53.945424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.984 [2024-10-09 11:18:53.945438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.984 [2024-10-09 11:18:53.945445] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.984 [2024-10-09 11:18:53.945452] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:33.984 [2024-10-09 11:18:53.945469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.984 qpair failed and we were unable to recover it. 
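[Editor's note] For reference, the recurring status pair decodes as status code type 1 (command specific) with status code 130, i.e. 0x82, which I read as Connect Invalid Parameters in the NVMe-oF fabrics CONNECT status table, consistent with the target's "Unknown controller ID" rejection of the CNTLID. The name is my reading of the spec; the log itself only reports the raw pair. A one-liner to see the hex value:

  printf 'sct %d, sc %d  (sc = 0x%x)\n' 1 130 130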
00:38:33.984 [2024-10-09 11:18:53.955415] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.984 [2024-10-09 11:18:53.955477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.984 [2024-10-09 11:18:53.955491] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.984 [2024-10-09 11:18:53.955498] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.984 [2024-10-09 11:18:53.955505] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1520360 00:38:33.984 [2024-10-09 11:18:53.955518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:33.984 qpair failed and we were unable to recover it. 00:38:33.984 [2024-10-09 11:18:53.965378] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.984 [2024-10-09 11:18:53.965429] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.984 [2024-10-09 11:18:53.965448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.984 [2024-10-09 11:18:53.965454] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.984 [2024-10-09 11:18:53.965459] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f10000b90 00:38:33.984 [2024-10-09 11:18:53.965476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:33.984 qpair failed and we were unable to recover it. 00:38:33.984 [2024-10-09 11:18:53.975373] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:33.984 [2024-10-09 11:18:53.975418] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:33.984 [2024-10-09 11:18:53.975429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:33.984 [2024-10-09 11:18:53.975435] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:33.984 [2024-10-09 11:18:53.975443] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9f10000b90 00:38:33.984 [2024-10-09 11:18:53.975454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:33.984 qpair failed and we were unable to recover it. 00:38:33.984 [2024-10-09 11:18:53.975587] nvme_ctrlr.c:4505:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:38:33.984 A controller has encountered a failure and is being reset. 00:38:33.984 [2024-10-09 11:18:53.975695] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x152e260 (9): Bad file descriptor 00:38:34.244 Controller properly reset. 
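[Editor's note] The flood ends when the keep-alive submission fails and the host resets the controller. Note that the last few records report tqpair=0x7f9f10000b90 on qpair id 2 rather than 0x1520360 on qpair id 3, so the failure had already spread to a second qpair before recovery. A quick sketch to pull just the recovery milestones out of a saved log (stand-in build.log as before):

  grep -n -e 'Submitting Keep Alive failed' -e 'Controller properly reset' build.log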
00:38:34.244 Initializing NVMe Controllers 00:38:34.244 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:38:34.244 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:38:34.244 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:38:34.244 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:38:34.244 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:38:34.244 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:38:34.244 Initialization complete. Launching workers. 00:38:34.244 Starting thread on core 1 00:38:34.244 Starting thread on core 2 00:38:34.244 Starting thread on core 3 00:38:34.244 Starting thread on core 0 00:38:34.244 11:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:38:34.244 00:38:34.244 real 0m11.728s 00:38:34.244 user 0m21.241s 00:38:34.244 sys 0m3.578s 00:38:34.244 11:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:34.244 11:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:34.244 ************************************ 00:38:34.244 END TEST nvmf_target_disconnect_tc2 00:38:34.244 ************************************ 00:38:34.244 11:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:38:34.244 11:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:38:34.244 11:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:38:34.244 11:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@514 -- # nvmfcleanup 00:38:34.244 11:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:38:34.244 11:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:34.244 11:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:38:34.244 11:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:34.244 11:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:34.244 rmmod nvme_tcp 00:38:34.504 rmmod nvme_fabrics 00:38:34.504 rmmod nvme_keyring 00:38:34.504 11:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:34.504 11:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:38:34.504 11:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:38:34.504 11:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@515 -- # '[' -n 2133585 ']' 00:38:34.504 11:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # killprocess 2133585 00:38:34.504 11:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # '[' -z 2133585 ']' 00:38:34.504 11:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # kill -0 2133585 00:38:34.504 11:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # uname 00:38:34.505 11:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # '[' 
Linux = Linux ']' 00:38:34.505 11:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2133585 00:38:34.505 11:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_4 00:38:34.505 11:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_4 = sudo ']' 00:38:34.505 11:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2133585' 00:38:34.505 killing process with pid 2133585 00:38:34.505 11:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@969 -- # kill 2133585 00:38:34.505 11:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@974 -- # wait 2133585 00:38:34.505 11:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:38:34.505 11:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:38:34.505 11:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:38:34.505 11:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:38:34.505 11:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@789 -- # iptables-save 00:38:34.505 11:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:38:34.505 11:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@789 -- # iptables-restore 00:38:34.505 11:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:34.505 11:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:34.505 11:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:34.505 11:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:34.505 11:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:37.048 11:18:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:37.048 00:38:37.048 real 0m21.906s 00:38:37.048 user 0m49.868s 00:38:37.048 sys 0m9.541s 00:38:37.048 11:18:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:37.048 11:18:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:38:37.048 ************************************ 00:38:37.048 END TEST nvmf_target_disconnect 00:38:37.048 ************************************ 00:38:37.048 11:18:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:38:37.048 00:38:37.048 real 8m1.094s 00:38:37.048 user 17m39.117s 00:38:37.048 sys 2m21.573s 00:38:37.048 11:18:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:37.048 11:18:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:38:37.048 ************************************ 00:38:37.048 END TEST nvmf_host 00:38:37.048 ************************************ 00:38:37.048 11:18:56 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:38:37.048 11:18:56 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:38:37.048 11:18:56 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:38:37.048 11:18:56 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:38:37.048 11:18:56 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:37.048 11:18:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:37.048 ************************************ 00:38:37.048 START TEST nvmf_target_core_interrupt_mode 00:38:37.048 ************************************ 00:38:37.048 11:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:38:37.048 * Looking for test storage... 00:38:37.048 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:38:37.048 11:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:38:37.048 11:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lcov --version 00:38:37.048 11:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:38:37.048 11:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:38:37.048 11:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:37.048 11:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:37.048 11:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:37.048 11:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:38:37.048 11:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:38:37.048 11:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:38:37.048 11:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:38:37.048 11:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:38:37.048 11:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:38:37.048 11:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:38:37.048 11:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:37.048 11:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:38:37.048 11:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:38:37.048 11:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:37.048 11:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:37.048 11:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:38:37.048 11:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:38:37.048 11:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:37.048 11:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:38:37.048 11:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:38:37.048 11:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:38:37.048 11:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:38:37.048 11:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:37.048 11:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:38:37.048 11:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:38:37.048 11:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:37.048 11:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:37.048 11:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:38:37.048 11:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:37.048 11:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:38:37.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:37.048 --rc genhtml_branch_coverage=1 00:38:37.048 --rc genhtml_function_coverage=1 00:38:37.048 --rc genhtml_legend=1 00:38:37.048 --rc geninfo_all_blocks=1 00:38:37.048 --rc geninfo_unexecuted_blocks=1 00:38:37.048 00:38:37.048 ' 00:38:37.048 11:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:38:37.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:37.048 --rc genhtml_branch_coverage=1 00:38:37.048 --rc genhtml_function_coverage=1 00:38:37.048 --rc genhtml_legend=1 00:38:37.048 --rc geninfo_all_blocks=1 00:38:37.048 --rc geninfo_unexecuted_blocks=1 00:38:37.048 00:38:37.048 ' 00:38:37.048 11:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:38:37.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:37.048 --rc genhtml_branch_coverage=1 00:38:37.048 --rc genhtml_function_coverage=1 00:38:37.048 --rc genhtml_legend=1 00:38:37.048 --rc geninfo_all_blocks=1 00:38:37.048 --rc geninfo_unexecuted_blocks=1 00:38:37.048 00:38:37.048 ' 00:38:37.049 11:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:38:37.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:37.049 --rc genhtml_branch_coverage=1 00:38:37.049 --rc genhtml_function_coverage=1 00:38:37.049 --rc genhtml_legend=1 00:38:37.049 --rc geninfo_all_blocks=1 00:38:37.049 --rc geninfo_unexecuted_blocks=1 00:38:37.049 00:38:37.049 ' 00:38:37.049 11:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:38:37.049 11:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:38:37.049 11:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:37.049 11:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:38:37.049 11:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:37.049 11:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:37.049 11:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:37.049 11:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:37.049 11:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:37.049 11:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:37.049 11:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:37.049 11:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:37.049 11:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:37.049 11:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:37.049 11:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:38:37.049 11:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:38:37.049 11:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:37.049 11:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:37.049 11:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:37.049 11:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:37.049 11:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:37.049 11:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:38:37.049 11:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:37.049 11:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:37.049 11:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:37.049 11:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:37.049 11:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:37.049 11:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:37.049 11:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:38:37.049 11:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:37.049 11:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:38:37.049 11:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:37.049 11:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:37.049 11:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:37.049 11:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:37.049 11:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:37.049 11:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:37.049 11:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:37.049 11:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:37.049 11:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:37.049 11:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:37.049 11:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:38:37.049 11:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:38:37.049 11:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:38:37.049 11:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:38:37.049 11:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:38:37.049 11:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:37.049 11:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:37.049 ************************************ 00:38:37.049 START TEST nvmf_abort 00:38:37.049 ************************************ 00:38:37.049 11:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:38:37.311 * Looking for test storage... 00:38:37.311 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:37.311 11:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:38:37.311 11:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:38:37.311 11:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:38:37.311 11:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:38:37.311 11:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:37.311 11:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:37.311 11:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:37.311 11:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:38:37.311 11:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:38:37.311 11:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:38:37.311 11:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:38:37.311 11:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:38:37.311 11:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:38:37.311 11:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:38:37.311 11:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:37.311 11:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:38:37.311 11:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:38:37.311 11:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:37.311 11:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:37.311 11:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:38:37.311 11:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:38:37.311 11:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:37.311 11:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:38:37.311 11:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:38:37.311 11:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:38:37.311 11:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:38:37.311 11:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:37.311 11:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:38:37.311 11:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:38:37.311 11:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:37.311 11:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:37.311 11:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:38:37.311 11:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:37.311 11:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:38:37.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:37.311 --rc genhtml_branch_coverage=1 00:38:37.311 --rc genhtml_function_coverage=1 00:38:37.311 --rc genhtml_legend=1 00:38:37.311 --rc geninfo_all_blocks=1 00:38:37.311 --rc geninfo_unexecuted_blocks=1 00:38:37.311 00:38:37.311 ' 00:38:37.311 11:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:38:37.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:37.311 --rc genhtml_branch_coverage=1 00:38:37.311 --rc genhtml_function_coverage=1 00:38:37.311 --rc genhtml_legend=1 00:38:37.311 --rc geninfo_all_blocks=1 00:38:37.311 --rc geninfo_unexecuted_blocks=1 00:38:37.311 00:38:37.311 ' 00:38:37.311 11:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:38:37.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:37.311 --rc genhtml_branch_coverage=1 00:38:37.311 --rc genhtml_function_coverage=1 00:38:37.311 --rc genhtml_legend=1 00:38:37.311 --rc geninfo_all_blocks=1 00:38:37.311 --rc geninfo_unexecuted_blocks=1 00:38:37.311 00:38:37.311 ' 00:38:37.311 11:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:38:37.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:37.311 --rc genhtml_branch_coverage=1 00:38:37.311 --rc genhtml_function_coverage=1 00:38:37.311 --rc genhtml_legend=1 00:38:37.311 --rc geninfo_all_blocks=1 00:38:37.311 --rc geninfo_unexecuted_blocks=1 00:38:37.311 00:38:37.311 ' 00:38:37.311 11:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:37.311 11:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:38:37.311 11:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:37.311 11:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:37.311 11:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:37.311 11:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:37.311 11:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:37.311 11:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:37.311 11:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:37.311 11:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:37.311 11:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:37.311 11:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:37.311 11:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:38:37.311 11:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:38:37.311 11:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:37.311 11:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:37.311 11:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:37.311 11:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:37.311 11:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:37.311 11:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:38:37.311 11:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:37.311 11:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:37.311 11:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:37.312 11:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:37.312 11:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:37.312 11:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:37.312 11:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:38:37.312 11:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:37.312 11:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:38:37.312 11:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:37.312 11:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:37.312 11:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:37.312 11:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:37.312 11:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:37.312 11:18:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:37.312 11:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:37.312 11:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:37.312 11:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:37.312 11:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:37.312 11:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:37.312 11:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:38:37.312 11:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:38:37.312 11:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:38:37.312 11:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:37.312 11:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # prepare_net_devs 00:38:37.312 11:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@436 -- # local -g is_hw=no 00:38:37.312 11:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # remove_spdk_ns 00:38:37.312 11:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:37.312 11:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:37.312 11:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:37.312 11:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:38:37.312 11:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:38:37.312 11:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:38:37.312 11:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:45.448 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:45.448 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:38:45.448 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:45.448 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:45.448 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:45.448 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:45.448 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:45.448 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:38:45.448 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:45.448 11:19:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:38:45.448 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:38:45.448 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:38:45.449 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:38:45.449 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:38:45.449 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:38:45.449 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:45.449 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:45.449 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:45.449 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:45.449 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:45.449 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:45.449 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:45.449 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:45.449 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:45.449 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:45.449 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:45.449 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:45.449 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:45.449 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:45.449 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:45.449 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:45.449 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:45.449 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:45.449 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:45.449 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:38:45.449 Found 0000:31:00.0 (0x8086 - 0x159b) 00:38:45.449 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
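[editor's note] The trace above is nvmf/common.sh's NIC discovery: per-family arrays (e810, x722, mlx) are filled with PCI vendor:device IDs, matching devices are collected into pci_devs, and each device's kernel interface is read back from sysfs under /sys/bus/pci/devices/$pci/net/. A minimal sketch of that scan, assuming the same sysfs layout; this is an illustration, not the verbatim gather_supported_nvmf_pci_devs:

#!/usr/bin/env bash
# Illustrative only: classify Intel NICs by PCI device ID and list their
# bound net devices, mirroring the e810/x722 arrays in the trace above.
intel=0x8086
for pci in /sys/bus/pci/devices/*; do
  [[ $(<"$pci/vendor") == "$intel" ]] || continue
  dev=$(<"$pci/device")
  case $dev in
    0x1592|0x159b) family=e810 ;;   # ice-driven E810 parts, as in the trace
    0x37d2)        family=x722 ;;
    *)             continue ;;
  esac
  for net in "$pci"/net/*; do
    [[ -e $net ]] || continue       # skip NICs with no bound net device
    echo "Found ${pci##*/} ($intel - $dev): ${net##*/} [$family]"
  done
done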
00:38:45.449 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:45.449 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:45.449 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:45.449 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:45.449 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:45.449 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:38:45.449 Found 0000:31:00.1 (0x8086 - 0x159b) 00:38:45.449 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:45.449 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:45.449 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:45.449 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:45.449 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:45.449 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:45.449 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:45.449 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:45.449 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:38:45.449 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:45.449 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:38:45.449 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:45.449 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:38:45.449 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:38:45.449 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:45.449 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:38:45.449 Found net devices under 0000:31:00.0: cvl_0_0 00:38:45.449 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:38:45.449 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:38:45.449 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:45.449 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:38:45.449 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@415 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:38:45.449 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:38:45.449 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:38:45.449 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:45.449 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:38:45.449 Found net devices under 0000:31:00.1: cvl_0_1 00:38:45.449 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:38:45.449 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:38:45.449 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # is_hw=yes 00:38:45.449 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:38:45.449 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:38:45.449 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:38:45.449 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:45.449 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:45.449 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:45.449 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:45.449 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:45.449 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:45.449 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:45.449 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:45.449 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:45.449 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:45.449 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:45.449 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:45.449 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:45.449 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:45.449 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:45.449 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:45.449 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:45.449 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:45.449 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:45.449 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:45.449 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:45.449 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:45.449 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:45.449 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:45.449 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.629 ms 00:38:45.449 00:38:45.449 --- 10.0.0.2 ping statistics --- 00:38:45.449 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:45.449 rtt min/avg/max/mdev = 0.629/0.629/0.629/0.000 ms 00:38:45.449 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:45.449 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:45.449 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.304 ms 00:38:45.449 00:38:45.449 --- 10.0.0.1 ping statistics --- 00:38:45.449 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:45.449 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:38:45.449 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:45.449 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@448 -- # return 0 00:38:45.449 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:38:45.449 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:45.450 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:38:45.450 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:38:45.450 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:45.450 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:38:45.450 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:38:45.450 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:38:45.450 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:38:45.450 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:45.450 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:45.450 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # 
nvmfpid=2139250 00:38:45.450 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # waitforlisten 2139250 00:38:45.450 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:38:45.450 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 2139250 ']' 00:38:45.450 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:45.450 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:45.450 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:45.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:45.450 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:45.450 11:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:45.450 [2024-10-09 11:19:04.529112] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:45.450 [2024-10-09 11:19:04.530111] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:38:45.450 [2024-10-09 11:19:04.530149] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:45.450 [2024-10-09 11:19:04.666673] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:38:45.450 [2024-10-09 11:19:04.715928] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:38:45.450 [2024-10-09 11:19:04.733579] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:45.450 [2024-10-09 11:19:04.733611] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:45.450 [2024-10-09 11:19:04.733622] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:45.450 [2024-10-09 11:19:04.733629] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:45.450 [2024-10-09 11:19:04.733635] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:45.450 [2024-10-09 11:19:04.734941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:38:45.450 [2024-10-09 11:19:04.735095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:45.450 [2024-10-09 11:19:04.735096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:38:45.450 [2024-10-09 11:19:04.783114] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:45.450 [2024-10-09 11:19:04.783177] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
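[editor's note] At this point the target is up: nvmf_tgt runs inside the cvl_0_0_ns_spdk namespace with -m 0xE and --interrupt-mode, and waitforlisten blocks until the RPC socket answers before abort.sh provisions the transport, bdevs, and subsystem via rpc_cmd. A condensed sketch of this launch-and-provision sequence; the RPC calls are copied from the trace, while the socket-polling loop and the relative paths are assumptions standing in for the real helpers:

#!/usr/bin/env bash
set -e
# Launch the target in the test namespace (flags copied from the trace).
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
nvmfpid=$!

# Poll until the RPC server responds (a simple stand-in for waitforlisten).
until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done

# Provision what abort.sh sets up: a TCP transport, a 64 MiB malloc bdev with
# 4096-byte blocks wrapped in a delay bdev, and a subsystem exporting it on
# 10.0.0.2:4420 (the namespace-side target address from the trace).
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
./scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0
./scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420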
00:38:45.450 [2024-10-09 11:19:04.783673] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:38:45.450 [2024-10-09 11:19:04.784014] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:38:45.450 11:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:45.450 11:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:38:45.450 11:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:38:45.450 11:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:45.450 11:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:45.450 11:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:45.450 11:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:38:45.450 11:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:45.450 11:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:45.450 [2024-10-09 11:19:05.363929] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:45.450 11:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:45.450 11:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:38:45.450 11:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:45.450 11:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:45.450 Malloc0 00:38:45.450 11:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:45.450 11:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:38:45.450 11:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:45.450 11:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:45.450 Delay0 00:38:45.450 11:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:45.450 11:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:38:45.450 11:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:45.450 11:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:45.450 11:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:45.450 11:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:38:45.450 11:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:45.450 11:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:45.711 11:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:45.711 11:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:45.711 11:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:45.711 11:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:45.711 [2024-10-09 11:19:05.463830] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:45.711 11:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:45.711 11:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:45.711 11:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:45.711 11:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:45.711 11:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:45.711 11:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:38:45.711 [2024-10-09 11:19:05.675098] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:38:48.252 Initializing NVMe Controllers 00:38:48.252 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:38:48.252 controller IO queue size 128 less than required 00:38:48.252 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:38:48.253 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:38:48.253 Initialization complete. Launching workers. 
00:38:48.253 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 29067 00:38:48.253 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 29124, failed to submit 66 00:38:48.253 success 29067, unsuccessful 57, failed 0 00:38:48.253 11:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:48.253 11:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:48.253 11:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:48.253 11:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:48.253 11:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:38:48.253 11:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:38:48.253 11:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@514 -- # nvmfcleanup 00:38:48.253 11:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:38:48.253 11:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:48.253 11:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:38:48.253 11:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:48.253 11:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:48.253 rmmod nvme_tcp 00:38:48.253 rmmod nvme_fabrics 00:38:48.253 rmmod nvme_keyring 00:38:48.253 11:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:48.253 11:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:38:48.253 11:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:38:48.253 11:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@515 -- # '[' -n 2139250 ']' 00:38:48.253 11:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # killprocess 2139250 00:38:48.253 11:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 2139250 ']' 00:38:48.253 11:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 2139250 00:38:48.253 11:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:38:48.253 11:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:48.253 11:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2139250 00:38:48.253 11:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:38:48.253 11:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:38:48.253 11:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2139250' 00:38:48.253 killing process with pid 2139250 
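[editor's note] Teardown then runs in reverse: the subsystem is deleted, nvme-tcp and its dependents are unloaded (the rmmod lines above), and killprocess stops the target only after confirming the pid still belongs to an SPDK reactor (comm reactor_1 in the trace) rather than a sudo wrapper. A sketch of that guard, following the ps/kill/wait pattern in the trace; the helper name is kept for readability, and the real autotest_common.sh version differs in detail:

killprocess() {
  local pid=$1
  kill -0 "$pid" 2>/dev/null || return 0    # already gone, nothing to do
  local name
  name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_1 in the trace
  [[ $name != sudo ]] || return 1           # never kill the sudo wrapper itself
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid" 2>/dev/null || true           # reap only if it is our child
}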
00:38:48.253 11:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@969 -- # kill 2139250 00:38:48.253 11:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@974 -- # wait 2139250 00:38:48.253 11:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:38:48.253 11:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:38:48.253 11:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:38:48.253 11:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:38:48.253 11:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@789 -- # iptables-save 00:38:48.253 11:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:38:48.253 11:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@789 -- # iptables-restore 00:38:48.253 11:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:48.253 11:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:48.253 11:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:48.253 11:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:48.253 11:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:50.165 11:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:50.165 00:38:50.165 real 0m13.097s 00:38:50.165 user 0m10.730s 00:38:50.165 sys 0m6.750s 00:38:50.165 11:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:50.165 11:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:50.165 ************************************ 00:38:50.165 END TEST nvmf_abort 00:38:50.165 ************************************ 00:38:50.165 11:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:38:50.165 11:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:38:50.165 11:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:50.165 11:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:50.165 ************************************ 00:38:50.165 START TEST nvmf_ns_hotplug_stress 00:38:50.165 ************************************ 00:38:50.165 11:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:38:50.428 * Looking for test storage... 
00:38:50.428 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:50.428 11:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:38:50.428 11:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:38:50.428 11:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:38:50.428 11:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:38:50.428 11:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:50.428 11:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:50.428 11:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:50.428 11:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:38:50.428 11:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:38:50.428 11:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:38:50.428 11:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:38:50.428 11:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:38:50.428 11:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:38:50.428 11:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:38:50.428 11:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:50.428 11:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:38:50.428 11:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:38:50.428 11:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:50.428 11:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:50.428 11:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:38:50.428 11:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:38:50.428 11:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:50.428 11:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:38:50.428 11:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:38:50.428 11:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:38:50.428 11:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:38:50.428 11:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:50.428 11:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:38:50.428 11:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:38:50.428 11:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:50.428 11:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:50.428 11:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:38:50.428 11:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:50.428 11:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:38:50.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:50.428 --rc genhtml_branch_coverage=1 00:38:50.428 --rc genhtml_function_coverage=1 00:38:50.428 --rc genhtml_legend=1 00:38:50.428 --rc geninfo_all_blocks=1 00:38:50.428 --rc geninfo_unexecuted_blocks=1 00:38:50.428 00:38:50.428 ' 00:38:50.428 11:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:38:50.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:50.428 --rc genhtml_branch_coverage=1 00:38:50.428 --rc genhtml_function_coverage=1 00:38:50.428 --rc genhtml_legend=1 00:38:50.428 --rc geninfo_all_blocks=1 00:38:50.428 --rc geninfo_unexecuted_blocks=1 00:38:50.428 00:38:50.428 ' 00:38:50.428 11:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:38:50.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:50.428 --rc genhtml_branch_coverage=1 00:38:50.428 --rc genhtml_function_coverage=1 00:38:50.428 --rc genhtml_legend=1 00:38:50.428 --rc geninfo_all_blocks=1 00:38:50.428 --rc geninfo_unexecuted_blocks=1 00:38:50.428 00:38:50.428 ' 00:38:50.428 11:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:38:50.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:50.428 --rc genhtml_branch_coverage=1 00:38:50.428 --rc genhtml_function_coverage=1 
00:38:50.428 --rc genhtml_legend=1 00:38:50.428 --rc geninfo_all_blocks=1 00:38:50.428 --rc geninfo_unexecuted_blocks=1 00:38:50.428 00:38:50.428 ' 00:38:50.428 11:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:50.428 11:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:38:50.428 11:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:50.428 11:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:50.428 11:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:50.428 11:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:50.428 11:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:50.428 11:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:50.428 11:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:50.428 11:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:50.428 11:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:50.428 11:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:50.428 11:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:38:50.428 11:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:38:50.428 11:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:50.428 11:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:50.428 11:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:50.428 11:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:50.428 11:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:50.428 11:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:38:50.428 11:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:50.428 11:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:50.428 11:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
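[editor's note] The hotplug test now repeats the same preamble under the run_test driver, which prints the START banner, times the script, and prints END plus the real/user/sys summary seen after nvmf_abort above. A minimal sketch of that wrapper, inferred from the banners and timing lines in the trace; the actual autotest_common.sh run_test also handles xtrace toggling and argument validation:

run_test() {
  local name=$1; shift
  echo "************************************"
  echo "START TEST $name"
  echo "************************************"
  time "$@"                 # emits the real/user/sys lines seen in the log
  echo "************************************"
  echo "END TEST $name"
  echo "************************************"
}

run_test nvmf_ns_hotplug_stress ./test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode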
00:38:50.428 11:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:50.428 11:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:50.429 11:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:50.429 11:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:38:50.429 11:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:50.429 11:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:38:50.429 11:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:50.429 11:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:50.429 11:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:50.429 11:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:50.429 11:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:50.429 11:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:50.429 11:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:50.429 11:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:50.429 11:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:50.429 11:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:50.429 11:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:50.429 11:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:38:50.429 11:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:38:50.429 11:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:50.429 11:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:38:50.429 11:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:38:50.429 11:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:38:50.429 11:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:50.429 11:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:50.429 11:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:50.429 11:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:38:50.429 11:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:38:50.429 11:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:38:50.429 11:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:38:58.567 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:58.567 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:38:58.567 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:58.567 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:58.567 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:58.567 11:19:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:58.567 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:58.567 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:38:58.567 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:58.567 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:38:58.567 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:38:58.567 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:38:58.567 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:38:58.567 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:38:58.567 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:38:58.567 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:58.567 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:58.567 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:58.567 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:58.567 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:58.567 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:58.567 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:58.567 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:58.567 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:58.567 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:58.567 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:58.567 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:58.567 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:58.567 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:58.567 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:58.567 11:19:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:58.567 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:58.567 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:58.567 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:58.567 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:38:58.567 Found 0000:31:00.0 (0x8086 - 0x159b) 00:38:58.568 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:58.568 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:58.568 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:58.568 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:58.568 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:58.568 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:58.568 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:38:58.568 Found 0000:31:00.1 (0x8086 - 0x159b) 00:38:58.568 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:58.568 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:58.568 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:58.568 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:58.568 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:58.568 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:58.568 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:58.568 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:58.568 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:38:58.568 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:58.568 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:38:58.568 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:58.568 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:38:58.568 
11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:38:58.568 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:58.568 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:38:58.568 Found net devices under 0000:31:00.0: cvl_0_0 00:38:58.568 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:38:58.568 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:38:58.568 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:58.568 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:38:58.568 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:58.568 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:38:58.568 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:38:58.568 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:58.568 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:38:58.568 Found net devices under 0000:31:00.1: cvl_0_1 00:38:58.568 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:38:58.568 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:38:58.568 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:38:58.568 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:38:58.568 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:38:58.568 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:38:58.568 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:58.568 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:58.568 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:58.568 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:58.568 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:58.568 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:58.568 11:19:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:58.568 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:58.568 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:58.568 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:58.568 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:58.568 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:58.568 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:58.568 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:58.568 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:58.568 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:58.568 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:58.568 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:58.568 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:58.568 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:58.568 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:58.568 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:58.568 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:58.568 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:58.568 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.650 ms 00:38:58.568 00:38:58.568 --- 10.0.0.2 ping statistics --- 00:38:58.568 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:58.568 rtt min/avg/max/mdev = 0.650/0.650/0.650/0.000 ms 00:38:58.568 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:58.568 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:58.568 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:38:58.568 00:38:58.568 --- 10.0.0.1 ping statistics --- 00:38:58.568 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:58.568 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:38:58.568 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:58.568 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # return 0 00:38:58.568 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:38:58.568 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:58.568 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:38:58.568 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:38:58.568 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:58.568 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:38:58.568 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:38:58.568 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:38:58.568 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:38:58.568 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:58.568 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:38:58.568 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # nvmfpid=2144006 00:38:58.568 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # waitforlisten 2144006 00:38:58.568 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:38:58.568 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 2144006 ']' 00:38:58.568 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:58.568 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:58.568 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:58.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
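Editor's note: everything from gather_supported_nvmf_pci_devs down to the two pings above is nvmf_tcp_init wiring the detected e810 ports into a loopback pair. Condensed into a sketch, with interface and namespace names taken from this run and error handling omitted:

  # Move the target-side port into its own network namespace and give
  # each end a 10.0.0.0/24 address, as the trace above does.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Open the NVMe/TCP port in the firewall, then verify the path both ways.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1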
00:38:58.568 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:58.568 11:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:38:58.568 [2024-10-09 11:19:17.802131] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:58.568 [2024-10-09 11:19:17.803300] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:38:58.568 [2024-10-09 11:19:17.803351] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:58.568 [2024-10-09 11:19:17.944970] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:38:58.568 [2024-10-09 11:19:17.993674] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:38:58.568 [2024-10-09 11:19:18.021131] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:58.569 [2024-10-09 11:19:18.021176] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:58.569 [2024-10-09 11:19:18.021184] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:58.569 [2024-10-09 11:19:18.021191] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:58.569 [2024-10-09 11:19:18.021203] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:58.569 [2024-10-09 11:19:18.022901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:38:58.569 [2024-10-09 11:19:18.023064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:58.569 [2024-10-09 11:19:18.023065] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:38:58.569 [2024-10-09 11:19:18.085908] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:58.569 [2024-10-09 11:19:18.085977] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:58.569 [2024-10-09 11:19:18.086617] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:38:58.569 [2024-10-09 11:19:18.086915] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
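Editor's note: the NOTICE lines above are nvmf_tgt coming up in interrupt mode (reactors on cores 1-3, each spdk_thread flipped to intr mode). The launch-and-wait pattern behind nvmfappstart/waitforlisten, sketched with an illustrative polling loop; the real waitforlisten helper in the test harness is more defensive:

  # Launch the target inside the namespace with the flags shown above.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
  nvmfpid=$!
  # Block until the app answers on its RPC socket, failing fast if it dies.
  while ! ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
    kill -0 "$nvmfpid" || { echo "nvmf_tgt died" >&2; exit 1; }
    sleep 0.5
  done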
00:38:58.829 11:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:58.829 11:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:38:58.829 11:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:38:58.829 11:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:58.829 11:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:38:58.829 11:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:58.829 11:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:38:58.829 11:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:38:58.829 [2024-10-09 11:19:18.795921] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:58.829 11:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:38:59.089 11:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:59.349 [2024-10-09 11:19:19.140411] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:59.349 11:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:59.349 11:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:38:59.609 Malloc0 00:38:59.610 11:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:38:59.870 Delay0 00:38:59.870 11:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:00.130 11:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:39:00.130 NULL1 00:39:00.130 11:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
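Editor's note: with the TCP transport, cnode1 subsystem, listeners, Malloc0, Delay0, and NULL1 in place, the remainder of this trace is one stress loop. A condensed sketch of it, assuming $perf_pid holds the spdk_nvme_perf PID (2144577 in this run) and $rpc points at scripts/rpc.py:

  null_size=1000
  # While perf keeps I/O in flight, hot-remove and re-add namespace 1
  # and resize the null bdev one step larger each pass, exercising the
  # hotplug paths under load exactly as the iterations below record.
  while kill -0 "$perf_pid" 2>/dev/null; do
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    $rpc bdev_null_resize NULL1 $((++null_size))
  done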
00:39:00.391 11:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2144577 00:39:00.391 11:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:39:00.391 11:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2144577 00:39:00.391 11:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:00.652 11:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:00.652 11:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:39:00.652 11:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:39:00.912 true 00:39:00.912 11:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2144577 00:39:00.912 11:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:01.172 11:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:01.172 11:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:39:01.172 11:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:39:01.432 true 00:39:01.432 11:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2144577 00:39:01.432 11:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:01.693 11:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:01.953 11:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:39:01.953 11:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:39:01.953 true 00:39:01.953 11:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@44 -- # kill -0 2144577 00:39:01.953 11:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:02.214 11:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:02.474 11:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:39:02.474 11:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:39:02.474 true 00:39:02.474 11:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2144577 00:39:02.474 11:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:02.733 11:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:02.993 11:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:39:02.993 11:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:39:02.993 true 00:39:02.993 11:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2144577 00:39:02.993 11:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:03.253 11:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:03.513 11:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:39:03.513 11:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:39:03.773 true 00:39:03.773 11:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2144577 00:39:03.773 11:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:03.773 11:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:39:04.034 11:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:39:04.034 11:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:39:04.294 true 00:39:04.294 11:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2144577 00:39:04.294 11:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:04.294 11:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:04.554 11:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:39:04.554 11:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:39:04.814 true 00:39:04.814 11:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2144577 00:39:04.814 11:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:05.074 11:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:05.074 11:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:39:05.074 11:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:39:05.334 true 00:39:05.334 11:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2144577 00:39:05.334 11:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:05.594 11:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:05.594 11:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:39:05.594 11:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:39:05.854 true 00:39:05.854 11:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- 
# kill -0 2144577 00:39:05.854 11:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:06.114 11:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:06.114 11:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:39:06.114 11:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:39:06.374 true 00:39:06.374 11:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2144577 00:39:06.374 11:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:06.634 11:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:06.895 11:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:39:06.895 11:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:39:06.895 true 00:39:06.895 11:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2144577 00:39:06.895 11:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:07.159 11:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:07.420 11:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:39:07.420 11:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:39:07.420 true 00:39:07.420 11:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2144577 00:39:07.420 11:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:07.681 11:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:07.940 11:19:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:39:07.941 11:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:39:07.941 true 00:39:07.941 11:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2144577 00:39:07.941 11:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:08.201 11:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:08.461 11:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:39:08.461 11:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:39:08.461 true 00:39:08.461 11:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2144577 00:39:08.461 11:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:08.721 11:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:08.981 11:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:39:08.982 11:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:39:08.982 true 00:39:09.242 11:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2144577 00:39:09.242 11:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:09.242 11:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:09.503 11:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:39:09.503 11:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:39:09.764 true 00:39:09.764 11:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2144577 00:39:09.764 11:19:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:09.764 11:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:10.025 11:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:39:10.025 11:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:39:10.285 true 00:39:10.285 11:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2144577 00:39:10.285 11:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:10.285 11:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:10.595 11:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:39:10.595 11:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:39:10.904 true 00:39:10.904 11:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2144577 00:39:10.904 11:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:10.904 11:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:11.165 11:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:39:11.165 11:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:39:11.165 true 00:39:11.425 11:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2144577 00:39:11.425 11:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:11.425 11:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:11.685 11:19:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021
00:39:11.685 11:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021
00:39:11.945 true
00:39:11.945 11:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2144577
00:39:11.945 11:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:39:11.945 11:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
[... the same hotplug cycle (kill -0 2144577 liveness check at sh@44, nvmf_subsystem_remove_ns 1 at sh@45, nvmf_subsystem_add_ns Delay0 at sh@46, null_size increment and bdev_null_resize at sh@49-50, each resize answering "true") repeats about twice per second for null_size=1022 through 1052, elapsed 00:39:12.205 through 00:39:28.625 ...]
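For readability, here is a minimal bash sketch of the loop this trace is exercising, reconstructed only from the script line numbers visible above (sh@44 through sh@50); the variable names and starting value are illustrative, not copied from the actual target/ns_hotplug_stress.sh source:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    perf_pid=2144577    # PID of the background I/O generator in this run
    null_size=1020      # illustrative starting point; this slice of the log begins at 1021

    # Hot-unplug/replug namespace 1 and grow the NULL1 bdev for as long as perf runs.
    while kill -0 "$perf_pid"; do                          # sh@44: is perf still alive?
        "$rpc_py" nvmf_subsystem_remove_ns "$nqn" 1        # sh@45: hot-remove NSID 1
        "$rpc_py" nvmf_subsystem_add_ns "$nqn" Delay0      # sh@46: hot-add the Delay0 bdev back
        null_size=$((null_size + 1))                       # sh@49: next target size
        "$rpc_py" bdev_null_resize NULL1 "$null_size"      # sh@50: resize; the RPC prints "true"
    done

Once perf exits, the kill -0 test fails with the "No such process" diagnostic seen further down, which is what ends the loop; the script then waits on PID 2144577 and removes the remaining namespaces.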
[... the hotplug cycle continues for null_size=1053 through 1055 while the perf initiator winds down and reports its results ...]
00:39:30.452 Initializing NVMe Controllers
00:39:30.452 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:39:30.452 Controller IO queue size 128, less than required.
00:39:30.452 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:39:30.452 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:39:30.452 Initialization complete. Launching workers.
00:39:30.452 ========================================================
00:39:30.452                                                                            Latency(us)
00:39:30.452 Device Information                                                     :       IOPS      MiB/s    Average        min        max
00:39:30.452 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   30138.71      14.72    4246.85    1456.39   11248.20
00:39:30.452 ========================================================
00:39:30.452 Total                                                                  :   30138.71      14.72    4246.85    1456.39   11248.20
00:39:30.713 11:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1056
00:39:30.713 11:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1056
00:39:30.713 true
00:39:30.713 11:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2144577
00:39:30.713 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2144577) - No such process
00:39:30.713 11:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2144577
00:39:30.713 11:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:39:30.975 11:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:39:31.237 11:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:39:31.237 11:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
[... bash loop-counter trace records (sh@59/sh@62/sh@16: (( i = 0 )), (( i < ... )), (( ++i ))) are elided here and below ...]
00:39:31.237 11:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:39:31.237 null0
00:39:31.237 11:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:39:31.498 null1
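Two notes on the block above. First, the perf summary is self-consistent with roughly 512-byte I/Os: 30138.71 IOPS x 512 B is about 15.4 MB/s, matching the reported 14.72 MiB/s. Second, after tearing down namespaces 1 and 2 the script provisions eight null bdevs for the concurrent hotplug phase; a sketch of that setup, assuming bdev_null_create's positional arguments are name, size in MB and block size in bytes (consistent with the "null0 100 4096" call and "null0" output above):

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nthreads=8                                          # sh@58: one worker per namespace
    pids=()                                             # sh@58: filled in once the workers fork
    for ((i = 0; i < nthreads; i++)); do                # sh@59
        "$rpc_py" bdev_null_create "null$i" 100 4096    # sh@60: 100 MB null bdev, 4096-byte blocks
    done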
[... the remaining null bdevs (null2 through null7) are created the same way ...]
00:39:32.542 11:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0
00:39:32.542 11:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:39:32.542 11:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0
[... the other seven workers (add_remove 2 null1 through add_remove 8 null7) fork the same way, their trace records interleaving as each starts its first add/remove round ...]
00:39:32.542 11:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2150712 2150714 2150718 2150719 2150722 2150725 2150728 2150731
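All eight workers run the same small routine. A sketch of the worker and of the fan-out/fan-in around it, again reconstructed from the traced script lines (sh@14 through sh@18 for the worker, sh@62 through sh@66 for the spawn loop; rpc_py, nthreads and pids as in the sketch above); the ten-round bound comes from the "(( i < 10 ))" guards in the trace:

    add_remove() {                                        # sh@14: worker for one namespace/bdev pair
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do                    # sh@16: ten hotplug rounds
            "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"   # sh@17
            "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"           # sh@18
        done
    }

    for ((i = 0; i < nthreads; i++)); do    # sh@62
        add_remove $((i + 1)) "null$i" &    # sh@63: NSID i+1 backed by bdev null<i>
        pids+=($!)                          # sh@64: remember the worker's PID
    done
    wait "${pids[@]}"                       # sh@66: PIDs 2150712 through 2150731 in this run

Because the eight workers hammer the same subsystem concurrently, their add_ns and remove_ns trace records interleave arbitrarily in what follows.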
[... the eight workers then churn concurrently: interleaved nvmf_subsystem_add_ns -n <nsid> nqn.2016-06.io.spdk:cnode1 null<nsid-1> and nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 <nsid> records for namespaces 1 through 8, elapsed 00:39:32.804 onward, one block of each per round ...]
00:39:33.852 11:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:39:33.852 11:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:39:33.852 11:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:39:33.852 11:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:39:33.852 11:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:33.852 11:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:33.852 11:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:39:33.852 11:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:33.852 11:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:33.852 11:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:39:33.852 11:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:33.852 11:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:33.852 11:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:39:33.852 11:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:33.852 11:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:33.853 11:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:39:33.853 11:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:33.853 11:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:33.853 11:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:39:33.853 11:19:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:33.853 11:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:33.853 11:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:39:33.853 11:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:33.853 11:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:33.853 11:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:39:33.853 11:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:33.853 11:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:33.853 11:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:39:34.114 11:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:34.114 11:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:39:34.114 11:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:39:34.114 11:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:39:34.114 11:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:39:34.114 11:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:39:34.114 11:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:39:34.114 11:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:39:34.114 11:19:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:34.114 11:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:34.114 11:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:39:34.375 11:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:34.375 11:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:34.375 11:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:39:34.375 11:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:34.375 11:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:34.375 11:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:39:34.375 11:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:34.375 11:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:34.375 11:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:39:34.375 11:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:34.375 11:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:34.375 11:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:39:34.375 11:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:34.375 11:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:34.375 11:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:39:34.375 11:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:34.375 11:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:34.375 11:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:39:34.375 11:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:34.375 11:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:34.375 11:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:39:34.375 11:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:34.375 11:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:39:34.375 11:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:39:34.375 11:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:39:34.375 11:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:39:34.375 11:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:39:34.635 11:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:39:34.635 11:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:39:34.635 11:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:34.635 11:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:34.635 11:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:39:34.635 11:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:34.635 11:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:34.635 11:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:39:34.635 11:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:34.635 11:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:34.635 11:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:39:34.635 11:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:34.635 11:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:34.635 11:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:39:34.635 11:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:34.635 11:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:34.635 11:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:39:34.635 11:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:34.635 11:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:34.635 11:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:39:34.635 11:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:34.635 11:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:34.635 11:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:39:34.635 11:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:34.635 11:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:34.635 11:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:39:34.635 11:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:34.895 11:19:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:39:34.895 11:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:39:34.895 11:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:39:34.895 11:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:39:34.895 11:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:39:34.895 11:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:39:34.895 11:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:34.895 11:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:34.895 11:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:39:34.895 11:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:39:34.895 11:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:34.895 11:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:34.895 11:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:39:34.895 11:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:34.895 11:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:34.895 11:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:39:34.895 11:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:34.895 11:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:34.895 11:19:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:39:35.155 11:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:35.155 11:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:35.155 11:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:39:35.155 11:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:35.155 11:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:35.155 11:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:39:35.155 11:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:35.155 11:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:35.155 11:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:39:35.155 11:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:35.155 11:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:35.155 11:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:39:35.155 11:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:39:35.155 11:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:35.155 11:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:39:35.155 11:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:39:35.155 11:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:39:35.155 
11:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:39:35.155 11:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:39:35.414 11:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:39:35.414 11:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:35.414 11:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:35.414 11:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:39:35.414 11:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:35.414 11:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:35.414 11:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:39:35.414 11:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:35.414 11:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:35.415 11:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:39:35.415 11:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:35.415 11:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:35.415 11:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:39:35.415 11:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:35.415 11:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:35.415 11:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:39:35.415 11:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:35.415 11:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:35.415 11:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:39:35.415 11:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:35.415 11:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:35.415 11:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:39:35.415 11:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:35.415 11:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:35.415 11:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:39:35.415 11:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:39:35.415 11:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:35.415 11:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:39:35.415 11:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:39:35.415 11:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:39:35.675 11:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:39:35.675 11:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:39:35.675 11:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:35.675 11:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:35.675 11:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 
nqn.2016-06.io.spdk:cnode1 null7 00:39:35.675 11:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:35.675 11:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:35.675 11:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:39:35.675 11:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:35.675 11:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:35.675 11:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:39:35.675 11:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:35.675 11:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:35.675 11:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:39:35.675 11:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:39:35.675 11:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:35.675 11:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:35.675 11:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:39:35.675 11:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:35.675 11:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:35.675 11:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:39:35.675 11:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:35.675 11:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:35.675 11:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:39:35.935 11:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:39:35.935 11:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:35.935 11:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:39:35.935 11:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:39:35.935 11:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:39:35.935 11:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:39:35.935 11:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:39:35.935 11:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:35.935 11:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:35.935 11:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:39:35.935 11:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:35.935 11:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:35.935 11:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:35.935 11:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:36.194 11:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:36.194 11:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:36.194 11:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:36.194 11:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:36.194 11:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:36.194 11:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:36.195 11:19:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:36.195 11:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:36.195 11:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:39:36.195 11:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:36.195 11:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:36.195 11:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:36.195 11:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:36.195 11:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:39:36.195 11:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:39:36.195 11:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # nvmfcleanup 00:39:36.195 11:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:39:36.195 11:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:36.195 11:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:39:36.195 11:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:36.195 11:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:36.195 rmmod nvme_tcp 00:39:36.195 rmmod nvme_fabrics 00:39:36.195 rmmod nvme_keyring 00:39:36.195 11:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:36.195 11:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:39:36.195 11:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:39:36.195 11:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@515 -- # '[' -n 2144006 ']' 00:39:36.195 11:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # killprocess 2144006 00:39:36.195 11:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 2144006 ']' 00:39:36.195 11:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 2144006 00:39:36.195 11:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:39:36.195 11:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:36.454 11:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2144006 
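The iterations traced above all come from target/ns_hotplug_stress.sh; the @16/@17/@18 markers map each command back to a script line: line 16 drives the loop counter, line 17 attaches the null0..null7 bdevs as namespaces 1..8 of nqn.2016-06.io.spdk:cnode1, and line 18 detaches them again, exercising the namespace hotplug path while I/O runs. A minimal sketch of that loop, reconstructed from the trace, follows. The sequential ordering here is an assumption — the shuffled namespace order in the log suggests the real script issues the RPCs concurrently — and $rootdir simply stands in for the workspace path shown in the trace.

# Hedged reconstruction of the traced loop; not the script itself.
rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # path from the trace
rpc=$rootdir/scripts/rpc.py
subsys=nqn.2016-06.io.spdk:cnode1

for (( i = 0; i < 10; ++i )); do             # ns_hotplug_stress.sh@16
    for n in {1..8}; do                      # ns_hotplug_stress.sh@17
        # attach bdev null(n-1) as namespace n, e.g. null4 -> nsid 5
        "$rpc" nvmf_subsystem_add_ns -n "$n" "$subsys" "null$((n - 1))"
    done
    for n in {1..8}; do                      # ns_hotplug_stress.sh@18
        "$rpc" nvmf_subsystem_remove_ns "$subsys" "$n"
    done
done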
00:39:36.454 11:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:39:36.454 11:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:39:36.454 11:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2144006' 00:39:36.454 killing process with pid 2144006 00:39:36.454 11:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 2144006 00:39:36.454 11:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 2144006 00:39:36.454 11:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:39:36.454 11:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:39:36.454 11:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:39:36.454 11:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:39:36.454 11:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-save 00:39:36.454 11:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:39:36.454 11:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-restore 00:39:36.454 11:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:36.454 11:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:36.454 11:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:36.454 11:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:36.454 11:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:38.994 11:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:38.994 00:39:38.994 real 0m48.334s 00:39:38.994 user 2m59.417s 00:39:38.994 sys 0m23.280s 00:39:38.994 11:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:38.994 11:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:39:38.994 ************************************ 00:39:38.994 END TEST nvmf_ns_hotplug_stress 00:39:38.994 ************************************ 00:39:38.994 11:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:39:38.994 11:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:39:38.994 11:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:39:38.994 11:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:38.994 ************************************ 00:39:38.994 START TEST nvmf_delete_subsystem 00:39:38.994 ************************************ 00:39:38.994 11:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:39:38.994 * Looking for test storage... 00:39:38.994 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:38.994 11:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:39:38.994 11:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version 00:39:38.994 11:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:39:38.994 11:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:39:38.994 11:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:38.994 11:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:38.994 11:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:38.994 11:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:39:38.994 11:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:39:38.994 11:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:39:38.994 11:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:39:38.994 11:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:39:38.994 11:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:39:38.994 11:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:39:38.994 11:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:38.994 11:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:39:38.994 11:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:39:38.994 11:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:38.994 11:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:38.994 11:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:39:38.994 11:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:39:38.994 11:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:38.994 11:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:39:38.994 11:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:39:38.994 11:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:39:38.994 11:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:39:38.994 11:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:38.994 11:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:39:38.994 11:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:39:38.994 11:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:38.994 11:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:38.995 11:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:39:38.995 11:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:38.995 11:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:39:38.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:38.995 --rc genhtml_branch_coverage=1 00:39:38.995 --rc genhtml_function_coverage=1 00:39:38.995 --rc genhtml_legend=1 00:39:38.995 --rc geninfo_all_blocks=1 00:39:38.995 --rc geninfo_unexecuted_blocks=1 00:39:38.995 00:39:38.995 ' 00:39:38.995 11:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:39:38.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:38.995 --rc genhtml_branch_coverage=1 00:39:38.995 --rc genhtml_function_coverage=1 00:39:38.995 --rc genhtml_legend=1 00:39:38.995 --rc geninfo_all_blocks=1 00:39:38.995 --rc geninfo_unexecuted_blocks=1 00:39:38.995 00:39:38.995 ' 00:39:38.995 11:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:39:38.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:38.995 --rc genhtml_branch_coverage=1 00:39:38.995 --rc genhtml_function_coverage=1 00:39:38.995 --rc genhtml_legend=1 00:39:38.995 --rc geninfo_all_blocks=1 00:39:38.995 --rc geninfo_unexecuted_blocks=1 00:39:38.995 00:39:38.995 ' 00:39:38.995 11:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:39:38.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:38.995 --rc genhtml_branch_coverage=1 00:39:38.995 --rc genhtml_function_coverage=1 00:39:38.995 --rc 
genhtml_legend=1 00:39:38.995 --rc geninfo_all_blocks=1 00:39:38.995 --rc geninfo_unexecuted_blocks=1 00:39:38.995 00:39:38.995 ' 00:39:38.995 11:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:38.995 11:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:39:38.995 11:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:38.995 11:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:38.995 11:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:38.995 11:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:38.995 11:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:38.995 11:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:38.995 11:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:38.995 11:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:38.995 11:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:38.995 11:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:38.995 11:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:39:38.995 11:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:39:38.995 11:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:38.995 11:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:38.995 11:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:38.995 11:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:38.995 11:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:38.995 11:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:39:38.995 11:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:38.995 11:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:38.995 11:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:38.995 11:19:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:38.995 11:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:38.995 11:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:38.995 11:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:39:38.995 11:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:38.995 11:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:39:38.995 11:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:38.995 11:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:38.995 11:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:38.995 11:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:38.995 11:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:38.995 11:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:38.995 11:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:38.995 11:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:38.995 11:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:38.995 11:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:38.995 11:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:39:38.995 11:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:39:38.995 11:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:38.995 11:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:39:38.995 11:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:39:38.995 11:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:39:38.995 11:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:38.995 11:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:38.995 11:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:38.995 11:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:39:38.995 11:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:39:38.995 11:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:39:38.995 11:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:47.133 11:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:47.133 11:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:39:47.133 11:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:47.133 11:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:47.133 11:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:47.133 11:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:47.133 11:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:47.133 11:20:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:39:47.133 11:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:47.133 11:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:39:47.133 11:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:39:47.133 11:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:39:47.133 11:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:39:47.133 11:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:39:47.133 11:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:39:47.133 11:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:47.133 11:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:47.133 11:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:47.133 11:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:47.133 11:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:47.133 11:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:47.133 11:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:47.133 11:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:47.133 11:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:47.133 11:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:47.133 11:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:47.133 11:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:47.133 11:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:47.133 11:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:47.133 11:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:47.133 11:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:47.133 11:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:47.133 11:20:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:47.133 11:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:47.133 11:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:39:47.133 Found 0000:31:00.0 (0x8086 - 0x159b) 00:39:47.133 11:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:47.133 11:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:47.133 11:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:47.133 11:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:47.133 11:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:47.133 11:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:47.133 11:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:39:47.133 Found 0000:31:00.1 (0x8086 - 0x159b) 00:39:47.133 11:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:47.133 11:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:47.133 11:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:47.133 11:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:47.133 11:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:47.133 11:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:47.133 11:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:47.133 11:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:47.133 11:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:39:47.133 11:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:47.133 11:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:39:47.133 11:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:47.133 11:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:39:47.133 11:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:39:47.134 11:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:47.134 11:20:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:39:47.134 Found net devices under 0000:31:00.0: cvl_0_0 00:39:47.134 11:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:39:47.134 11:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:39:47.134 11:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:47.134 11:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:39:47.134 11:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:47.134 11:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:39:47.134 11:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:39:47.134 11:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:47.134 11:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:39:47.134 Found net devices under 0000:31:00.1: cvl_0_1 00:39:47.134 11:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:39:47.134 11:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:39:47.134 11:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # is_hw=yes 00:39:47.134 11:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:39:47.134 11:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:39:47.134 11:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:39:47.134 11:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:47.134 11:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:47.134 11:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:47.134 11:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:47.134 11:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:47.134 11:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:47.134 11:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:47.134 11:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:47.134 11:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:47.134 11:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:47.134 11:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:47.134 11:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:47.134 11:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:47.134 11:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:47.134 11:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:47.134 11:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:47.134 11:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:47.134 11:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:47.134 11:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:47.134 11:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:47.134 11:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:47.134 11:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:47.134 11:20:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:47.134 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:47.134 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.623 ms 00:39:47.134 00:39:47.134 --- 10.0.0.2 ping statistics --- 00:39:47.134 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:47.134 rtt min/avg/max/mdev = 0.623/0.623/0.623/0.000 ms 00:39:47.134 11:20:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:47.134 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:47.134 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:39:47.134 00:39:47.134 --- 10.0.0.1 ping statistics --- 00:39:47.134 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:47.134 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:39:47.134 11:20:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:47.134 11:20:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # return 0 00:39:47.134 11:20:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:39:47.134 11:20:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:47.134 11:20:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:39:47.134 11:20:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:39:47.134 11:20:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:47.134 11:20:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:39:47.134 11:20:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:39:47.134 11:20:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:39:47.134 11:20:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:39:47.134 11:20:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:47.134 11:20:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:47.134 11:20:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # nvmfpid=2155784 00:39:47.134 11:20:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # waitforlisten 2155784 00:39:47.134 11:20:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:39:47.134 11:20:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 2155784 ']' 00:39:47.134 11:20:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:47.134 11:20:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:47.134 11:20:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:47.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
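The block above shows nvmftestinit carving the two e810 ports into a target/initiator pair: cvl_0_0 moves into a fresh network namespace and takes the target address 10.0.0.2, cvl_0_1 stays in the root namespace with the initiator address 10.0.0.1, an iptables rule opens TCP 4420, and a one-packet ping in each direction proves the path before nvmf_tgt is launched inside the namespace. A minimal sketch of the same setup, assuming the interface names from the log and substituting a simple socket poll for the harness's waitforlisten:

    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                     # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side stays in the root ns
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # root ns -> target address
    ip netns exec "$NS" ping -c 1 10.0.0.1              # namespace -> initiator address
    ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done # stand-in for waitforlisten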
00:39:47.134 11:20:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:47.134 11:20:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:47.134 [2024-10-09 11:20:06.138584] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:47.134 [2024-10-09 11:20:06.139755] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:39:47.134 [2024-10-09 11:20:06.139806] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:47.134 [2024-10-09 11:20:06.279915] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:39:47.134 [2024-10-09 11:20:06.310708] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:39:47.134 [2024-10-09 11:20:06.327528] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:47.134 [2024-10-09 11:20:06.327558] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:47.134 [2024-10-09 11:20:06.327566] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:47.134 [2024-10-09 11:20:06.327572] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:47.134 [2024-10-09 11:20:06.327578] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:47.134 [2024-10-09 11:20:06.328812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:47.134 [2024-10-09 11:20:06.328898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:47.134 [2024-10-09 11:20:06.377013] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:47.134 [2024-10-09 11:20:06.377553] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:47.134 [2024-10-09 11:20:06.377884] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
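-m 0x3 is a CPU core mask, which is why the startup notices above report exactly two reactors (cores 0 and 1); --interrupt-mode then switches each reactor's poll loop to an event-driven wait, per the spdk_thread_set_interrupt_mode notices. A throwaway helper for decoding such a mask, purely illustrative:

    mask=0x3                   # value passed to nvmf_tgt via -m
    for cpu in $(seq 0 63); do
      if (( (mask >> cpu) & 1 )); then
        echo "reactor expected on core $cpu"
      fi
    done

With 0x3 this prints cores 0 and 1, matching the two reactor_run notices in the log.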
00:39:47.134 11:20:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:47.134 11:20:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:39:47.134 11:20:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:39:47.134 11:20:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:47.134 11:20:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:47.134 11:20:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:47.134 11:20:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:47.134 11:20:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:47.134 11:20:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:47.134 [2024-10-09 11:20:06.969419] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:47.134 11:20:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:47.134 11:20:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:39:47.135 11:20:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:47.135 11:20:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:47.135 11:20:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:47.135 11:20:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:47.135 11:20:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:47.135 11:20:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:47.135 [2024-10-09 11:20:06.998203] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:47.135 11:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:47.135 11:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:39:47.135 11:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:47.135 11:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:47.135 NULL1 00:39:47.135 11:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:47.135 11:20:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:39:47.135 11:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:47.135 11:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:47.135 Delay0 00:39:47.135 11:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:47.135 11:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:47.135 11:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:47.135 11:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:47.135 11:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:47.135 11:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2156124 00:39:47.135 11:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:39:47.135 11:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:39:47.395 [2024-10-09 11:20:07.188451] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
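The RPC sequence above is the heart of the test: a 1000 MiB null bdev is wrapped in a delay bdev that adds roughly one second (1000000 us) to every I/O, the delayed bdev is exposed as a namespace of cnode1, and spdk_nvme_perf is given two cores (-c 0xC, i.e. cores 2 and 3) and a 128-deep 70/30 random read/write workload at 512 B. Because each I/O takes about a second, plenty of commands are still queued when the subsystem is deleted two seconds in (sleep 2 at line 30, nvmf_delete_subsystem at line 32); queued commands complete with sct=0, sc=8 — in the NVMe generic status set that reads as command aborted due to SQ deletion — and fresh submissions fail with -6 (ENXIO), which is the flood of "completed with error" lines that follows. The same control-plane calls, written as an SPDK rpc.py sketch (client path illustrative, arguments copied from the log):

    RPC="./scripts/rpc.py"     # standard SPDK RPC client; path is an assumption
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC bdev_null_create NULL1 1000 512               # 1000 MiB backing bdev, 512 B blocks
    $RPC bdev_delay_create -b NULL1 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000    # ~1 s avg/p99 latency, reads and writes
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

Deleting the subsystem mid-run is then a single call, $RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1, issued while perf is still submitting I/O.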
00:39:49.310 11:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:49.310 11:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:49.310 11:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:49.571 Read completed with error (sct=0, sc=8) 00:39:49.571 starting I/O failed: -6 00:39:49.571 Read completed with error (sct=0, sc=8) 00:39:49.571 Write completed with error (sct=0, sc=8) 00:39:49.571 Read completed with error (sct=0, sc=8) 00:39:49.571 Read completed with error (sct=0, sc=8) 00:39:49.571 starting I/O failed: -6 00:39:49.571 Write completed with error (sct=0, sc=8) 00:39:49.571 Read completed with error (sct=0, sc=8) 00:39:49.571 Read completed with error (sct=0, sc=8) 00:39:49.571 Write completed with error (sct=0, sc=8) 00:39:49.571 starting I/O failed: -6 00:39:49.571 Read completed with error (sct=0, sc=8) 00:39:49.571 Read completed with error (sct=0, sc=8) 00:39:49.571 Write completed with error (sct=0, sc=8) 00:39:49.571 Read completed with error (sct=0, sc=8) 00:39:49.571 starting I/O failed: -6 00:39:49.571 Read completed with error (sct=0, sc=8) 00:39:49.571 Read completed with error (sct=0, sc=8) 00:39:49.571 Read completed with error (sct=0, sc=8) 00:39:49.571 Read completed with error (sct=0, sc=8) 00:39:49.571 starting I/O failed: -6 00:39:49.571 Read completed with error (sct=0, sc=8) 00:39:49.571 Write completed with error (sct=0, sc=8) 00:39:49.571 Read completed with error (sct=0, sc=8) 00:39:49.571 Read completed with error (sct=0, sc=8) 00:39:49.571 starting I/O failed: -6 00:39:49.571 Read completed with error (sct=0, sc=8) 00:39:49.571 Read completed with error (sct=0, sc=8) 00:39:49.571 Read completed with error (sct=0, sc=8) 00:39:49.571 Read completed with error (sct=0, sc=8) 00:39:49.571 starting I/O failed: -6 00:39:49.571 Read completed with error (sct=0, sc=8) 00:39:49.571 Read completed with error (sct=0, sc=8) 00:39:49.571 Read completed with error (sct=0, sc=8) 00:39:49.571 Read completed with error (sct=0, sc=8) 00:39:49.571 starting I/O failed: -6 00:39:49.571 Write completed with error (sct=0, sc=8) 00:39:49.571 Write completed with error (sct=0, sc=8) 00:39:49.571 Read completed with error (sct=0, sc=8) 00:39:49.571 Read completed with error (sct=0, sc=8) 00:39:49.571 starting I/O failed: -6 00:39:49.571 Read completed with error (sct=0, sc=8) 00:39:49.571 Read completed with error (sct=0, sc=8) 00:39:49.571 Read completed with error (sct=0, sc=8) 00:39:49.571 Read completed with error (sct=0, sc=8) 00:39:49.571 starting I/O failed: -6 00:39:49.571 Write completed with error (sct=0, sc=8) 00:39:49.571 Write completed with error (sct=0, sc=8) 00:39:49.571 Write completed with error (sct=0, sc=8) 00:39:49.571 Read completed with error (sct=0, sc=8) 00:39:49.571 starting I/O failed: -6 00:39:49.571 Write completed with error (sct=0, sc=8) 00:39:49.571 Write completed with error (sct=0, sc=8) 00:39:49.571 Write completed with error (sct=0, sc=8) 00:39:49.571 Write completed with error (sct=0, sc=8) 00:39:49.571 starting I/O failed: -6 00:39:49.571 Read completed with error (sct=0, sc=8) 00:39:49.571 Read completed with error (sct=0, sc=8) 00:39:49.571 Write completed with error (sct=0, sc=8) 00:39:49.571 [2024-10-09 11:20:09.387002] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x847d30 is same with the state(6) to be set 00:39:49.571 Read completed with error (sct=0, sc=8) 00:39:49.571 Write completed with error (sct=0, sc=8) 00:39:49.571 Write completed with error (sct=0, sc=8) 00:39:49.571 Write completed with error (sct=0, sc=8) 00:39:49.571 Read completed with error (sct=0, sc=8) 00:39:49.571 Read completed with error (sct=0, sc=8) 00:39:49.571 Read completed with error (sct=0, sc=8) 00:39:49.571 Read completed with error (sct=0, sc=8) 00:39:49.571 Write completed with error (sct=0, sc=8) 00:39:49.571 Read completed with error (sct=0, sc=8) 00:39:49.571 Read completed with error (sct=0, sc=8) 00:39:49.572 Write completed with error (sct=0, sc=8) 00:39:49.572 Read completed with error (sct=0, sc=8) 00:39:49.572 Read completed with error (sct=0, sc=8) 00:39:49.572 Read completed with error (sct=0, sc=8) 00:39:49.572 Read completed with error (sct=0, sc=8) 00:39:49.572 Write completed with error (sct=0, sc=8) 00:39:49.572 Write completed with error (sct=0, sc=8) 00:39:49.572 Write completed with error (sct=0, sc=8) 00:39:49.572 Write completed with error (sct=0, sc=8) 00:39:49.572 Read completed with error (sct=0, sc=8) 00:39:49.572 Read completed with error (sct=0, sc=8) 00:39:49.572 Read completed with error (sct=0, sc=8) 00:39:49.572 Read completed with error (sct=0, sc=8) 00:39:49.572 Write completed with error (sct=0, sc=8) 00:39:49.572 Write completed with error (sct=0, sc=8) 00:39:49.572 Write completed with error (sct=0, sc=8) 00:39:49.572 Read completed with error (sct=0, sc=8) 00:39:49.572 Write completed with error (sct=0, sc=8) 00:39:49.572 Read completed with error (sct=0, sc=8) 00:39:49.572 Write completed with error (sct=0, sc=8) 00:39:49.572 Read completed with error (sct=0, sc=8) 00:39:49.572 Write completed with error (sct=0, sc=8) 00:39:49.572 Read completed with error (sct=0, sc=8) 00:39:49.572 Read completed with error (sct=0, sc=8) 00:39:49.572 Write completed with error (sct=0, sc=8) 00:39:49.572 Read completed with error (sct=0, sc=8) 00:39:49.572 Read completed with error (sct=0, sc=8) 00:39:49.572 Read completed with error (sct=0, sc=8) 00:39:49.572 Read completed with error (sct=0, sc=8) 00:39:49.572 Read completed with error (sct=0, sc=8) 00:39:49.572 Read completed with error (sct=0, sc=8) 00:39:49.572 Write completed with error (sct=0, sc=8) 00:39:49.572 Read completed with error (sct=0, sc=8) 00:39:49.572 Read completed with error (sct=0, sc=8) 00:39:49.572 Read completed with error (sct=0, sc=8) 00:39:49.572 Read completed with error (sct=0, sc=8) 00:39:49.572 Read completed with error (sct=0, sc=8) 00:39:49.572 Read completed with error (sct=0, sc=8) 00:39:49.572 Write completed with error (sct=0, sc=8) 00:39:49.572 Write completed with error (sct=0, sc=8) 00:39:49.572 Read completed with error (sct=0, sc=8) 00:39:49.572 Write completed with error (sct=0, sc=8) 00:39:49.572 Write completed with error (sct=0, sc=8) 00:39:49.572 Read completed with error (sct=0, sc=8) 00:39:49.572 Read completed with error (sct=0, sc=8) 00:39:49.572 Read completed with error (sct=0, sc=8) 00:39:49.572 Write completed with error (sct=0, sc=8) 00:39:49.572 Read completed with error (sct=0, sc=8) 00:39:49.572 Read completed with error (sct=0, sc=8) 00:39:49.572 Write completed with error (sct=0, sc=8) 00:39:49.572 Read completed with error (sct=0, sc=8) 00:39:49.572 Read completed with error (sct=0, sc=8) 00:39:49.572 starting I/O failed: -6 00:39:49.572 Write completed with error (sct=0, sc=8) 00:39:49.572 Read completed with 
error (sct=0, sc=8) 00:39:49.572 Read completed with error (sct=0, sc=8) 00:39:49.572 Write completed with error (sct=0, sc=8) 00:39:49.572 starting I/O failed: -6 00:39:49.572 Read completed with error (sct=0, sc=8) 00:39:49.572 Read completed with error (sct=0, sc=8) 00:39:49.572 Write completed with error (sct=0, sc=8) 00:39:49.572 Read completed with error (sct=0, sc=8) 00:39:49.572 starting I/O failed: -6 00:39:49.572 Write completed with error (sct=0, sc=8) 00:39:49.572 Read completed with error (sct=0, sc=8) 00:39:49.572 Write completed with error (sct=0, sc=8) 00:39:49.572 Read completed with error (sct=0, sc=8) 00:39:49.572 starting I/O failed: -6 00:39:49.572 Write completed with error (sct=0, sc=8) 00:39:49.572 Read completed with error (sct=0, sc=8) 00:39:49.572 Write completed with error (sct=0, sc=8) 00:39:49.572 Write completed with error (sct=0, sc=8) 00:39:49.572 starting I/O failed: -6 00:39:49.572 Read completed with error (sct=0, sc=8) 00:39:49.572 Write completed with error (sct=0, sc=8) 00:39:49.572 Read completed with error (sct=0, sc=8) 00:39:49.572 Write completed with error (sct=0, sc=8) 00:39:49.572 starting I/O failed: -6 00:39:49.572 Read completed with error (sct=0, sc=8) 00:39:49.572 Read completed with error (sct=0, sc=8) 00:39:49.572 Read completed with error (sct=0, sc=8) 00:39:49.572 Read completed with error (sct=0, sc=8) 00:39:49.572 starting I/O failed: -6 00:39:49.572 Write completed with error (sct=0, sc=8) 00:39:49.572 Read completed with error (sct=0, sc=8) 00:39:49.572 Write completed with error (sct=0, sc=8) 00:39:49.572 Write completed with error (sct=0, sc=8) 00:39:49.572 starting I/O failed: -6 00:39:49.572 Read completed with error (sct=0, sc=8) 00:39:49.572 Read completed with error (sct=0, sc=8) 00:39:49.572 Write completed with error (sct=0, sc=8) 00:39:49.572 Read completed with error (sct=0, sc=8) 00:39:49.572 starting I/O failed: -6 00:39:49.572 Write completed with error (sct=0, sc=8) 00:39:49.572 Read completed with error (sct=0, sc=8) 00:39:49.572 Write completed with error (sct=0, sc=8) 00:39:49.572 Write completed with error (sct=0, sc=8) 00:39:49.572 starting I/O failed: -6 00:39:49.572 [2024-10-09 11:20:09.391403] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc97400d450 is same with the state(6) to be set 00:39:49.572 Read completed with error (sct=0, sc=8) 00:39:49.572 Write completed with error (sct=0, sc=8) 00:39:49.572 Write completed with error (sct=0, sc=8) 00:39:49.572 Write completed with error (sct=0, sc=8) 00:39:49.572 Read completed with error (sct=0, sc=8) 00:39:49.572 Read completed with error (sct=0, sc=8) 00:39:49.572 Read completed with error (sct=0, sc=8) 00:39:49.572 Write completed with error (sct=0, sc=8) 00:39:49.572 Read completed with error (sct=0, sc=8) 00:39:49.572 Read completed with error (sct=0, sc=8) 00:39:49.572 Write completed with error (sct=0, sc=8) 00:39:49.572 Read completed with error (sct=0, sc=8) 00:39:49.572 Write completed with error (sct=0, sc=8) 00:39:49.572 Read completed with error (sct=0, sc=8) 00:39:49.572 Read completed with error (sct=0, sc=8) 00:39:49.572 Read completed with error (sct=0, sc=8) 00:39:49.572 Read completed with error (sct=0, sc=8) 00:39:49.572 Read completed with error (sct=0, sc=8) 00:39:49.572 Write completed with error (sct=0, sc=8) 00:39:49.572 Write completed with error (sct=0, sc=8) 00:39:49.572 Write completed with error (sct=0, sc=8) 00:39:49.572 Write completed with error (sct=0, sc=8) 00:39:49.572 Read completed with 
error (sct=0, sc=8) 00:39:49.572 Read completed with error (sct=0, sc=8) 00:39:49.572 Write completed with error (sct=0, sc=8) 00:39:49.572 Read completed with error (sct=0, sc=8) 00:39:49.572 Write completed with error (sct=0, sc=8) 00:39:49.572 Read completed with error (sct=0, sc=8) 00:39:49.572 Read completed with error (sct=0, sc=8) 00:39:49.572 Write completed with error (sct=0, sc=8) 00:39:49.572 Read completed with error (sct=0, sc=8) 00:39:49.572 Read completed with error (sct=0, sc=8) 00:39:49.572 Read completed with error (sct=0, sc=8) 00:39:49.572 Write completed with error (sct=0, sc=8) 00:39:49.572 Read completed with error (sct=0, sc=8) 00:39:49.572 Read completed with error (sct=0, sc=8) 00:39:49.572 Read completed with error (sct=0, sc=8) 00:39:49.572 Read completed with error (sct=0, sc=8) 00:39:49.572 Read completed with error (sct=0, sc=8) 00:39:49.572 Write completed with error (sct=0, sc=8) 00:39:49.572 Write completed with error (sct=0, sc=8) 00:39:49.572 Read completed with error (sct=0, sc=8) 00:39:49.572 Read completed with error (sct=0, sc=8) 00:39:49.572 Write completed with error (sct=0, sc=8) 00:39:49.572 Read completed with error (sct=0, sc=8) 00:39:49.572 Read completed with error (sct=0, sc=8) 00:39:49.572 Read completed with error (sct=0, sc=8) 00:39:49.572 Read completed with error (sct=0, sc=8) 00:39:49.572 Read completed with error (sct=0, sc=8) 00:39:50.512 [2024-10-09 11:20:10.370527] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x84ce20 is same with the state(6) to be set 00:39:50.512 Read completed with error (sct=0, sc=8) 00:39:50.512 Read completed with error (sct=0, sc=8) 00:39:50.512 Read completed with error (sct=0, sc=8) 00:39:50.512 Read completed with error (sct=0, sc=8) 00:39:50.512 Read completed with error (sct=0, sc=8) 00:39:50.512 Write completed with error (sct=0, sc=8) 00:39:50.512 Read completed with error (sct=0, sc=8) 00:39:50.512 Read completed with error (sct=0, sc=8) 00:39:50.512 Write completed with error (sct=0, sc=8) 00:39:50.512 Write completed with error (sct=0, sc=8) 00:39:50.512 Read completed with error (sct=0, sc=8) 00:39:50.512 Read completed with error (sct=0, sc=8) 00:39:50.512 Read completed with error (sct=0, sc=8) 00:39:50.512 Write completed with error (sct=0, sc=8) 00:39:50.512 Write completed with error (sct=0, sc=8) 00:39:50.512 Read completed with error (sct=0, sc=8) 00:39:50.512 Write completed with error (sct=0, sc=8) 00:39:50.513 Read completed with error (sct=0, sc=8) 00:39:50.513 Read completed with error (sct=0, sc=8) 00:39:50.513 Read completed with error (sct=0, sc=8) 00:39:50.513 Read completed with error (sct=0, sc=8) 00:39:50.513 Read completed with error (sct=0, sc=8) 00:39:50.513 Write completed with error (sct=0, sc=8) 00:39:50.513 Read completed with error (sct=0, sc=8) 00:39:50.513 Read completed with error (sct=0, sc=8) 00:39:50.513 Write completed with error (sct=0, sc=8) 00:39:50.513 Write completed with error (sct=0, sc=8) 00:39:50.513 Write completed with error (sct=0, sc=8) 00:39:50.513 [2024-10-09 11:20:10.388162] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x847f10 is same with the state(6) to be set 00:39:50.513 Write completed with error (sct=0, sc=8) 00:39:50.513 Read completed with error (sct=0, sc=8) 00:39:50.513 Read completed with error (sct=0, sc=8) 00:39:50.513 Read completed with error (sct=0, sc=8) 00:39:50.513 Read completed with error (sct=0, sc=8) 00:39:50.513 Read completed with error (sct=0, sc=8) 
00:39:50.513 Read completed with error (sct=0, sc=8) 00:39:50.513 Write completed with error (sct=0, sc=8) 00:39:50.513 Write completed with error (sct=0, sc=8) 00:39:50.513 Write completed with error (sct=0, sc=8) 00:39:50.513 Read completed with error (sct=0, sc=8) 00:39:50.513 Read completed with error (sct=0, sc=8) 00:39:50.513 Read completed with error (sct=0, sc=8) 00:39:50.513 Read completed with error (sct=0, sc=8) 00:39:50.513 Write completed with error (sct=0, sc=8) 00:39:50.513 Write completed with error (sct=0, sc=8) 00:39:50.513 Read completed with error (sct=0, sc=8) 00:39:50.513 Read completed with error (sct=0, sc=8) 00:39:50.513 Read completed with error (sct=0, sc=8) 00:39:50.513 Read completed with error (sct=0, sc=8) 00:39:50.513 Read completed with error (sct=0, sc=8) 00:39:50.513 Write completed with error (sct=0, sc=8) 00:39:50.513 Read completed with error (sct=0, sc=8) 00:39:50.513 Read completed with error (sct=0, sc=8) 00:39:50.513 Read completed with error (sct=0, sc=8) 00:39:50.513 Read completed with error (sct=0, sc=8) 00:39:50.513 Read completed with error (sct=0, sc=8) 00:39:50.513 Read completed with error (sct=0, sc=8) 00:39:50.513 [2024-10-09 11:20:10.388315] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8482d0 is same with the state(6) to be set 00:39:50.513 Write completed with error (sct=0, sc=8) 00:39:50.513 Write completed with error (sct=0, sc=8) 00:39:50.513 Read completed with error (sct=0, sc=8) 00:39:50.513 Read completed with error (sct=0, sc=8) 00:39:50.513 Read completed with error (sct=0, sc=8) 00:39:50.513 Read completed with error (sct=0, sc=8) 00:39:50.513 Write completed with error (sct=0, sc=8) 00:39:50.513 Write completed with error (sct=0, sc=8) 00:39:50.513 Write completed with error (sct=0, sc=8) 00:39:50.513 Read completed with error (sct=0, sc=8) 00:39:50.513 Read completed with error (sct=0, sc=8) 00:39:50.513 Write completed with error (sct=0, sc=8) 00:39:50.513 Write completed with error (sct=0, sc=8) 00:39:50.513 Write completed with error (sct=0, sc=8) 00:39:50.513 Read completed with error (sct=0, sc=8) 00:39:50.513 Read completed with error (sct=0, sc=8) 00:39:50.513 [2024-10-09 11:20:10.390113] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc97400d780 is same with the state(6) to be set 00:39:50.513 Read completed with error (sct=0, sc=8) 00:39:50.513 Read completed with error (sct=0, sc=8) 00:39:50.513 Write completed with error (sct=0, sc=8) 00:39:50.513 Write completed with error (sct=0, sc=8) 00:39:50.513 Write completed with error (sct=0, sc=8) 00:39:50.513 Read completed with error (sct=0, sc=8) 00:39:50.513 Read completed with error (sct=0, sc=8) 00:39:50.513 Write completed with error (sct=0, sc=8) 00:39:50.513 Write completed with error (sct=0, sc=8) 00:39:50.513 Write completed with error (sct=0, sc=8) 00:39:50.513 Read completed with error (sct=0, sc=8) 00:39:50.513 Read completed with error (sct=0, sc=8) 00:39:50.513 Write completed with error (sct=0, sc=8) 00:39:50.513 Read completed with error (sct=0, sc=8) 00:39:50.513 Read completed with error (sct=0, sc=8) 00:39:50.513 Read completed with error (sct=0, sc=8) 00:39:50.513 Read completed with error (sct=0, sc=8) 00:39:50.513 [2024-10-09 11:20:10.390220] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc97400cfe0 is same with the state(6) to be set 00:39:50.513 Initializing NVMe Controllers 00:39:50.513 Attached to NVMe over Fabrics controller at 
10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:39:50.513 Controller IO queue size 128, less than required. 00:39:50.513 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:39:50.513 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:39:50.513 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:39:50.513 Initialization complete. Launching workers. 00:39:50.513 ======================================================== 00:39:50.513 Latency(us) 00:39:50.513 Device Information : IOPS MiB/s Average min max 00:39:50.513 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 177.27 0.09 879520.81 262.86 1007065.20 00:39:50.513 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 155.86 0.08 927997.14 275.82 1010108.41 00:39:50.513 ======================================================== 00:39:50.513 Total : 333.13 0.16 902201.07 262.86 1010108.41 00:39:50.513 00:39:50.513 [2024-10-09 11:20:10.390849] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x84ce20 (9): Bad file descriptor 00:39:50.513 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:39:50.513 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:50.513 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:39:50.513 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2156124 00:39:50.513 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:39:51.083 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:39:51.083 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2156124 00:39:51.083 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2156124) - No such process 00:39:51.083 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2156124 00:39:51.083 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:39:51.083 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 2156124 00:39:51.083 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:39:51.083 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:51.083 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:39:51.083 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:51.083 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 2156124 00:39:51.083 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:39:51.083 11:20:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:39:51.083 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:39:51.083 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:39:51.083 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:39:51.083 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:51.083 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:51.083 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:51.083 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:51.083 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:51.083 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:51.083 [2024-10-09 11:20:10.925925] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:51.083 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:51.083 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:51.083 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:51.083 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:51.083 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:51.083 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2156795 00:39:51.083 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:39:51.083 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2156795 00:39:51.083 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:39:51.083 11:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:51.343 [2024-10-09 11:20:11.090799] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
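For the second pass the subsystem is rebuilt against the intact target and perf runs for only 3 s; instead of deleting anything, the harness simply polls the perf process until it exits. The kill -0 pattern from lines 57-60 of delete_subsystem.sh, reduced to a sketch (pid taken from the log; the failure branch is illustrative):

    perf_pid=2156795           # pid reported by the harness above
    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do         # still running?
      (( delay++ > 20 )) && { echo "perf still running after ~10 s" >&2; exit 1; }
      sleep 0.5
    done

The iterations of that loop are what produce the repeated kill -0 2156795 / sleep 0.5 entries below, until perf prints its final latency table, whose averages sit just over 1000000 us — the Delay0 bdev's configured one-second delay showing through.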
00:39:51.604 11:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:51.604 11:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2156795 00:39:51.604 11:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:52.176 11:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:52.176 11:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2156795 00:39:52.176 11:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:52.746 11:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:52.746 11:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2156795 00:39:52.746 11:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:53.006 11:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:53.006 11:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2156795 00:39:53.006 11:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:53.578 11:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:53.578 11:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2156795 00:39:53.578 11:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:54.147 11:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:54.148 11:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2156795 00:39:54.148 11:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:54.408 Initializing NVMe Controllers 00:39:54.408 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:39:54.408 Controller IO queue size 128, less than required. 00:39:54.408 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:39:54.408 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:39:54.408 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:39:54.408 Initialization complete. Launching workers. 
00:39:54.408 ======================================================== 00:39:54.408 Latency(us) 00:39:54.408 Device Information : IOPS MiB/s Average min max 00:39:54.408 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002144.94 1000050.94 1006222.03 00:39:54.408 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004241.91 1000250.39 1041470.68 00:39:54.408 ======================================================== 00:39:54.408 Total : 256.00 0.12 1003193.43 1000050.94 1041470.68 00:39:54.408 00:39:54.408 [2024-10-09 11:20:14.214246] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbae8e0 is same with the state(6) to be set 00:39:54.668 11:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:54.668 11:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2156795 00:39:54.668 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2156795) - No such process 00:39:54.668 11:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2156795 00:39:54.668 11:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:39:54.668 11:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:39:54.668 11:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # nvmfcleanup 00:39:54.668 11:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:39:54.668 11:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:54.668 11:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:39:54.668 11:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:54.668 11:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:54.668 rmmod nvme_tcp 00:39:54.668 rmmod nvme_fabrics 00:39:54.668 rmmod nvme_keyring 00:39:54.668 11:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:54.668 11:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:39:54.668 11:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:39:54.668 11:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@515 -- # '[' -n 2155784 ']' 00:39:54.668 11:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # killprocess 2155784 00:39:54.668 11:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 2155784 ']' 00:39:54.668 11:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 2155784 00:39:54.668 11:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:39:54.668 11:20:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:54.668 11:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2155784 00:39:54.668 11:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:39:54.668 11:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:39:54.668 11:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2155784' 00:39:54.668 killing process with pid 2155784 00:39:54.668 11:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 2155784 00:39:54.668 11:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 2155784 00:39:54.929 11:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:39:54.929 11:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:39:54.929 11:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:39:54.929 11:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:39:54.929 11:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-save 00:39:54.929 11:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:39:54.929 11:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-restore 00:39:54.929 11:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:54.929 11:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:54.929 11:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:54.929 11:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:54.929 11:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:56.839 11:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:56.839 00:39:56.839 real 0m18.261s 00:39:56.839 user 0m26.496s 00:39:56.839 sys 0m7.459s 00:39:56.839 11:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:56.839 11:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:56.839 ************************************ 00:39:56.839 END TEST nvmf_delete_subsystem 00:39:56.839 ************************************ 00:39:56.839 11:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh 
--transport=tcp --interrupt-mode 00:39:56.839 11:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:39:56.839 11:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:56.839 11:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:57.099 ************************************ 00:39:57.099 START TEST nvmf_host_management 00:39:57.099 ************************************ 00:39:57.100 11:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:39:57.100 * Looking for test storage... 00:39:57.100 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:57.100 11:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:39:57.100 11:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:39:57.100 11:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:39:57.100 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:39:57.100 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:57.100 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:57.100 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:57.100 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:39:57.100 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:39:57.100 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:39:57.100 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:39:57.100 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:39:57.100 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:39:57.100 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:39:57.100 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:57.100 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:39:57.100 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:39:57.100 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:57.100 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:57.100 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:39:57.100 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:39:57.100 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:57.100 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:39:57.100 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:39:57.100 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:39:57.100 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:39:57.100 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:57.100 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:39:57.100 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:39:57.100 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:57.100 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:57.100 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:39:57.100 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:57.100 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:39:57.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:57.100 --rc genhtml_branch_coverage=1 00:39:57.100 --rc genhtml_function_coverage=1 00:39:57.100 --rc genhtml_legend=1 00:39:57.100 --rc geninfo_all_blocks=1 00:39:57.100 --rc geninfo_unexecuted_blocks=1 00:39:57.100 00:39:57.100 ' 00:39:57.100 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:39:57.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:57.100 --rc genhtml_branch_coverage=1 00:39:57.100 --rc genhtml_function_coverage=1 00:39:57.100 --rc genhtml_legend=1 00:39:57.100 --rc geninfo_all_blocks=1 00:39:57.100 --rc geninfo_unexecuted_blocks=1 00:39:57.100 00:39:57.100 ' 00:39:57.100 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:39:57.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:57.100 --rc genhtml_branch_coverage=1 00:39:57.100 --rc genhtml_function_coverage=1 00:39:57.100 --rc genhtml_legend=1 00:39:57.100 --rc geninfo_all_blocks=1 00:39:57.100 --rc geninfo_unexecuted_blocks=1 00:39:57.100 00:39:57.100 ' 00:39:57.100 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:39:57.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:57.100 --rc genhtml_branch_coverage=1 00:39:57.100 --rc genhtml_function_coverage=1 00:39:57.100 --rc genhtml_legend=1 
00:39:57.100 --rc geninfo_all_blocks=1 00:39:57.100 --rc geninfo_unexecuted_blocks=1 00:39:57.100 00:39:57.100 ' 00:39:57.100 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:57.100 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:39:57.100 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:57.100 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:57.100 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:57.100 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:57.100 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:57.100 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:57.100 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:57.100 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:57.100 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:57.100 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:57.100 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:39:57.100 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:39:57.100 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:57.100 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:57.100 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:57.100 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:57.100 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:57.100 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:39:57.100 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:57.100 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:57.100 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:57.100 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:57.100 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:57.100 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:57.100 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:39:57.100 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:57.100 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:39:57.100 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:57.100 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:57.100 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:57.100 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:57.100 11:20:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:57.100 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:57.100 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:57.101 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:57.101 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:57.101 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:57.391 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:57.391 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:57.391 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:39:57.391 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:39:57.391 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:57.391 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # prepare_net_devs 00:39:57.391 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@436 -- # local -g is_hw=no 00:39:57.391 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # remove_spdk_ns 00:39:57.391 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:57.391 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:57.391 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:57.391 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:39:57.391 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:39:57.391 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:39:57.391 11:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:05.589 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:05.589 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:40:05.589 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:05.589 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:05.589 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:05.589 11:20:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:05.589 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:05.589 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:40:05.589 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:05.589 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:40:05.589 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:40:05.589 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:40:05.589 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:40:05.589 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:40:05.589 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:40:05.589 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:05.589 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:05.589 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:05.589 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:05.589 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:05.589 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:05.589 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:05.589 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:05.589 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:05.589 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:05.589 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:05.589 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:05.589 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:05.589 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:05.589 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:05.589 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:05.589 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:05.589 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:05.589 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:05.589 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:40:05.589 Found 0000:31:00.0 (0x8086 - 0x159b) 00:40:05.589 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:05.589 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:05.589 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:05.589 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:05.589 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:05.589 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:05.589 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:40:05.589 Found 0000:31:00.1 (0x8086 - 0x159b) 00:40:05.589 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:05.589 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:05.589 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:05.589 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:05.589 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:05.589 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:05.589 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:05.589 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:05.589 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:40:05.589 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:05.589 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:40:05.589 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:05.589 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:40:05.589 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 
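The per-port checks around this point resolve each matched PCI function to its kernel network interface by globbing sysfs; a condensed sketch of that lookup (the array expressions are copied from the trace, the loop framing is assumed):

    for pci in 0000:31:00.0 0000:31:00.1; do
        # Every net interface registered by this PCI function appears here.
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the path, keep ifnames
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done

With the two E810 ports reported as cvl_0_0 and cvl_0_1, the script takes the first as the target interface and the second as the initiator, as the next lines show.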
00:40:05.589 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:05.590 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:40:05.590 Found net devices under 0000:31:00.0: cvl_0_0 00:40:05.590 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:40:05.590 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:40:05.590 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:05.590 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:40:05.590 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:05.590 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:40:05.590 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:40:05.590 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:05.590 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:40:05.590 Found net devices under 0000:31:00.1: cvl_0_1 00:40:05.590 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:40:05.590 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:40:05.590 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # is_hw=yes 00:40:05.590 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:40:05.590 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:40:05.590 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:40:05.590 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:05.590 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:05.590 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:05.590 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:05.590 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:05.590 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:05.590 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:05.590 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:05.590 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:05.590 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:05.590 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:05.590 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:05.590 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:05.590 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:05.590 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:05.590 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:05.590 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:05.590 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:05.590 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:05.590 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:05.590 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:05.590 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:05.590 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:05.590 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:05.590 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.610 ms 00:40:05.590 00:40:05.590 --- 10.0.0.2 ping statistics --- 00:40:05.590 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:05.590 rtt min/avg/max/mdev = 0.610/0.610/0.610/0.000 ms 00:40:05.590 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:05.590 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:05.590 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.303 ms 00:40:05.590 00:40:05.590 --- 10.0.0.1 ping statistics --- 00:40:05.590 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:05.590 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:40:05.590 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:05.590 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@448 -- # return 0 00:40:05.590 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:40:05.590 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:05.590 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:40:05.590 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:40:05.590 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:05.590 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:40:05.590 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:40:05.590 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:40:05.590 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:40:05.590 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:40:05.590 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:40:05.590 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:05.590 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:05.590 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # nvmfpid=2161723 00:40:05.590 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # waitforlisten 2161723 00:40:05.590 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:40:05.590 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 2161723 ']' 00:40:05.590 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:05.590 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:05.590 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:40:05.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:05.590 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:05.590 11:20:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:05.590 [2024-10-09 11:20:24.520836] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:05.590 [2024-10-09 11:20:24.521997] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:40:05.590 [2024-10-09 11:20:24.522048] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:05.590 [2024-10-09 11:20:24.664294] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:40:05.590 [2024-10-09 11:20:24.712767] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:05.590 [2024-10-09 11:20:24.741297] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:05.590 [2024-10-09 11:20:24.741343] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:05.590 [2024-10-09 11:20:24.741351] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:05.590 [2024-10-09 11:20:24.741358] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:05.590 [2024-10-09 11:20:24.741365] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:05.590 [2024-10-09 11:20:24.743264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:40:05.590 [2024-10-09 11:20:24.743428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:40:05.590 [2024-10-09 11:20:24.743591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:40:05.590 [2024-10-09 11:20:24.743688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:05.590 [2024-10-09 11:20:24.802400] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:40:05.590 [2024-10-09 11:20:24.803047] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:05.590 [2024-10-09 11:20:24.804108] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:40:05.590 [2024-10-09 11:20:24.804256] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:40:05.590 [2024-10-09 11:20:24.804408] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
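Everything from the namespace creation to the reactor notices above amounts to launching nvmf_tgt inside the freshly built namespace and waiting for its RPC socket. A sketch of those two steps with the exact flags from the trace (waitforlisten is the suite's own helper; the socket poll below is an assumed stand-in for it):

    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E &
    nvmfpid=$!
    # Assumed stand-in for waitforlisten: block until the RPC socket exists.
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done

The -m 0x1E mask pins reactors to cores 1-4, matching the four "Reactor started" notices, and --interrupt-mode accounts for the spdk_thread intr-mode messages that close the startup.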
00:40:05.590 11:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:05.590 11:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:40:05.590 11:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:40:05.590 11:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:05.590 11:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:05.590 11:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:05.590 11:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:05.590 11:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:05.591 11:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:05.591 [2024-10-09 11:20:25.372572] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:05.591 11:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:05.591 11:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:40:05.591 11:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:05.591 11:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:05.591 11:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:40:05.591 11:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:40:05.591 11:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:40:05.591 11:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:05.591 11:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:05.591 Malloc0 00:40:05.591 [2024-10-09 11:20:25.460808] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:05.591 11:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:05.591 11:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:40:05.591 11:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:05.591 11:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:05.591 11:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2161919 00:40:05.591 11:20:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2161919 /var/tmp/bdevperf.sock 00:40:05.591 11:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 2161919 ']' 00:40:05.591 11:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:40:05.591 11:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:05.591 11:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:40:05.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:40:05.591 11:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:40:05.591 11:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:40:05.591 11:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:05.591 11:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:05.591 11:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:40:05.591 11:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:40:05.591 11:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:40:05.591 11:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:40:05.591 { 00:40:05.591 "params": { 00:40:05.591 "name": "Nvme$subsystem", 00:40:05.591 "trtype": "$TEST_TRANSPORT", 00:40:05.591 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:05.591 "adrfam": "ipv4", 00:40:05.591 "trsvcid": "$NVMF_PORT", 00:40:05.591 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:05.591 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:05.591 "hdgst": ${hdgst:-false}, 00:40:05.591 "ddgst": ${ddgst:-false} 00:40:05.591 }, 00:40:05.591 "method": "bdev_nvme_attach_controller" 00:40:05.591 } 00:40:05.591 EOF 00:40:05.591 )") 00:40:05.591 11:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:40:05.591 11:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 
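The JSON printed next is what gen_nvmf_target_json assembles: the heredoc template above (with its $subsystem and $TEST_TRANSPORT placeholders) is expanded for subsystem 0, normalized with jq, and handed to bdevperf as --json /dev/fd/63 via process substitution. A minimal sketch of that plumbing, assuming only that bdevperf treats the --json argument as an ordinary readable path (the config here is a placeholder, not the real template):

    gen_nvmf_target_json() {
        # The real helper expands one heredoc per subsystem; the empty list
        # here stands in for the Nvme0 config shown in the trace below.
        printf '{ "subsystems": [] }' | jq .
    }
    # <(...) hands bdevperf a /dev/fd path, so no temp file is needed.
    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json) -q 64 -o 65536 -w verify -t 10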
00:40:05.591 11:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:40:05.591 11:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:40:05.591 "params": { 00:40:05.591 "name": "Nvme0", 00:40:05.591 "trtype": "tcp", 00:40:05.591 "traddr": "10.0.0.2", 00:40:05.591 "adrfam": "ipv4", 00:40:05.591 "trsvcid": "4420", 00:40:05.591 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:05.591 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:05.591 "hdgst": false, 00:40:05.591 "ddgst": false 00:40:05.591 }, 00:40:05.591 "method": "bdev_nvme_attach_controller" 00:40:05.591 }' 00:40:05.591 [2024-10-09 11:20:25.572067] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:40:05.591 [2024-10-09 11:20:25.572156] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2161919 ] 00:40:05.852 [2024-10-09 11:20:25.704659] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:40:05.852 [2024-10-09 11:20:25.736059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:05.852 [2024-10-09 11:20:25.754341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:06.120 Running I/O for 10 seconds... 00:40:06.380 11:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:06.380 11:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:40:06.380 11:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:40:06.380 11:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:06.380 11:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:06.380 11:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:06.380 11:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:40:06.380 11:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:40:06.380 11:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:40:06.380 11:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:40:06.380 11:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:40:06.380 11:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:40:06.380 11:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:40:06.380 11:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
target/host_management.sh@54 -- # (( i != 0 )) 00:40:06.643 11:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:40:06.643 11:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:40:06.643 11:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:06.643 11:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:06.643 11:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:06.643 11:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=643 00:40:06.643 11:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 643 -ge 100 ']' 00:40:06.643 11:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:40:06.643 11:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:40:06.643 11:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:40:06.643 11:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:40:06.643 11:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:06.643 11:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:06.643 [2024-10-09 11:20:26.428705] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178dca0 is same with the state(6) to be set 00:40:06.643 [2024-10-09 11:20:26.428742] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178dca0 is same with the state(6) to be set 00:40:06.643 [2024-10-09 11:20:26.428750] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178dca0 is same with the state(6) to be set 00:40:06.643 [2024-10-09 11:20:26.428757] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178dca0 is same with the state(6) to be set 00:40:06.643 [2024-10-09 11:20:26.428765] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178dca0 is same with the state(6) to be set 00:40:06.643 [2024-10-09 11:20:26.428772] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178dca0 is same with the state(6) to be set 00:40:06.643 [2024-10-09 11:20:26.428778] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178dca0 is same with the state(6) to be set 00:40:06.643 [2024-10-09 11:20:26.428785] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178dca0 is same with the state(6) to be set 00:40:06.643 [2024-10-09 11:20:26.428792] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178dca0 is same with the state(6) to be set 00:40:06.643 [2024-10-09 11:20:26.428798] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178dca0 is same with the state(6) to be 
set 00:40:06.643 [2024-10-09 11:20:26.428805] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178dca0 is same with the state(6) to be set 00:40:06.643 [2024-10-09 11:20:26.428812] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178dca0 is same with the state(6) to be set 00:40:06.643 [2024-10-09 11:20:26.428818] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178dca0 is same with the state(6) to be set 00:40:06.643 11:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:06.643 11:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:40:06.643 11:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:06.643 11:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:06.643 [2024-10-09 11:20:26.443739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:40:06.643 [2024-10-09 11:20:26.443777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:06.643 [2024-10-09 11:20:26.443788] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:40:06.643 [2024-10-09 11:20:26.443801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:06.643 [2024-10-09 11:20:26.443810] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:40:06.643 [2024-10-09 11:20:26.443817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:06.643 [2024-10-09 11:20:26.443827] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:40:06.643 [2024-10-09 11:20:26.443834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:06.643 [2024-10-09 11:20:26.443842] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d7c40 is same with the state(6) to be set 00:40:06.643 [2024-10-09 11:20:26.444430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:06.644 [2024-10-09 11:20:26.444447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:06.644 [2024-10-09 11:20:26.444461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:90240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:06.644 [2024-10-09 11:20:26.444475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:06.644 [2024-10-09 11:20:26.444485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:90368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:40:06.644 [2024-10-09 11:20:26.444493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:06.644 [2024-10-09 11:20:26.444503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:90496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:06.644 [2024-10-09 11:20:26.444510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:06.644 [2024-10-09 11:20:26.444520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:90624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:06.644 [2024-10-09 11:20:26.444527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:06.644 [2024-10-09 11:20:26.444537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:90752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:06.644 [2024-10-09 11:20:26.444545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:06.644 [2024-10-09 11:20:26.444554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:90880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:06.644 [2024-10-09 11:20:26.444562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:06.644 [2024-10-09 11:20:26.444572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:91008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:06.644 [2024-10-09 11:20:26.444579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:06.644 [2024-10-09 11:20:26.444588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:91136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:06.644 [2024-10-09 11:20:26.444596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:06.644 [2024-10-09 11:20:26.444606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:91264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:06.644 [2024-10-09 11:20:26.444617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:06.644 [2024-10-09 11:20:26.444627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:91392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:06.644 [2024-10-09 11:20:26.444635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:06.644 [2024-10-09 11:20:26.444644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:91520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:06.644 [2024-10-09 11:20:26.444652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:06.644 [2024-10-09 11:20:26.444661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:91648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:06.644 
[2024-10-09 11:20:26.444669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:06.644 [2024-10-09 11:20:26.444679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:91776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:06.644 [2024-10-09 11:20:26.444687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:06.644 [2024-10-09 11:20:26.444697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:91904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:06.644 [2024-10-09 11:20:26.444704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:06.644 [2024-10-09 11:20:26.444713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:92032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:06.644 [2024-10-09 11:20:26.444721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:06.644 [2024-10-09 11:20:26.444731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:92160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:06.644 [2024-10-09 11:20:26.444739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:06.644 [2024-10-09 11:20:26.444749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:92288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:06.644 [2024-10-09 11:20:26.444756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:06.644 [2024-10-09 11:20:26.444766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:92416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:06.644 [2024-10-09 11:20:26.444773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:06.644 [2024-10-09 11:20:26.444783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:92544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:06.644 [2024-10-09 11:20:26.444791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:06.644 [2024-10-09 11:20:26.444800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:92672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:06.644 [2024-10-09 11:20:26.444808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:06.644 [2024-10-09 11:20:26.444817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:92800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:06.644 [2024-10-09 11:20:26.444825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:06.644 [2024-10-09 11:20:26.444837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:92928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:06.644 [2024-10-09 
11:20:26.444844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:06.644 [2024-10-09 11:20:26.444854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:93056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:06.644 [2024-10-09 11:20:26.444862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:06.644 [2024-10-09 11:20:26.444871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:93184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:06.644 [2024-10-09 11:20:26.444878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:06.644 [2024-10-09 11:20:26.444889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:93312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:06.644 [2024-10-09 11:20:26.444897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:06.644 [2024-10-09 11:20:26.444907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:93440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:06.644 [2024-10-09 11:20:26.444914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:06.644 [2024-10-09 11:20:26.444923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:93568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:06.644 [2024-10-09 11:20:26.444930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:06.644 [2024-10-09 11:20:26.444941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:93696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:06.644 [2024-10-09 11:20:26.444949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:06.644 [2024-10-09 11:20:26.444958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:93824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:06.644 [2024-10-09 11:20:26.444966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:06.644 [2024-10-09 11:20:26.444975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:93952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:06.644 [2024-10-09 11:20:26.444982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:06.644 [2024-10-09 11:20:26.444992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:94080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:06.644 [2024-10-09 11:20:26.445000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:06.644 [2024-10-09 11:20:26.445009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:94208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:06.644 [2024-10-09 
11:20:26.445016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:06.644 [2024-10-09 11:20:26.445026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:94336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:06.644 [2024-10-09 11:20:26.445033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:06.644 [2024-10-09 11:20:26.445042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:94464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:06.644 [2024-10-09 11:20:26.445052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:06.644 [2024-10-09 11:20:26.445061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:94592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:06.644 [2024-10-09 11:20:26.445069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:06.644 [2024-10-09 11:20:26.445079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:94720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:06.644 [2024-10-09 11:20:26.445086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:06.644 [2024-10-09 11:20:26.445097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:94848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:06.644 [2024-10-09 11:20:26.445105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:06.644 [2024-10-09 11:20:26.445114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:94976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:06.644 [2024-10-09 11:20:26.445122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:06.644 [2024-10-09 11:20:26.445131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:95104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:06.644 [2024-10-09 11:20:26.445139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:06.644 [2024-10-09 11:20:26.445149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:95232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:06.644 [2024-10-09 11:20:26.445158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:06.644 [2024-10-09 11:20:26.445167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:95360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:06.644 [2024-10-09 11:20:26.445175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:06.645 [2024-10-09 11:20:26.445184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:95488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:06.645 [2024-10-09 
11:20:26.445191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:06.645 [2024-10-09 11:20:26.445201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:95616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:06.645 [2024-10-09 11:20:26.445208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:06.645 [2024-10-09 11:20:26.445218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:95744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:06.645 [2024-10-09 11:20:26.445225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:06.645 [2024-10-09 11:20:26.445235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:95872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:06.645 11:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:06.645 [2024-10-09 11:20:26.445251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:06.645 [2024-10-09 11:20:26.445261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:96000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:06.645 [2024-10-09 11:20:26.445270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:06.645 [2024-10-09 11:20:26.445280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:96128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:06.645 [2024-10-09 11:20:26.445287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:06.645 [2024-10-09 11:20:26.445296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:96256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:06.645 [2024-10-09 11:20:26.445304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:06.645 [2024-10-09 11:20:26.445313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:96384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:06.645 [2024-10-09 11:20:26.445321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:06.645 [2024-10-09 11:20:26.445330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:96512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:06.645 [2024-10-09 11:20:26.445337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:06.645 [2024-10-09 11:20:26.445347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:96640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:06.645 [2024-10-09 11:20:26.445354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:06.645 [2024-10-09 11:20:26.445364] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:96768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:06.645 [2024-10-09 11:20:26.445372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:06.645 [2024-10-09 11:20:26.445381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:96896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:06.645 [2024-10-09 11:20:26.445388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:06.645 [2024-10-09 11:20:26.445397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:97024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:06.645 [2024-10-09 11:20:26.445404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:06.645 [2024-10-09 11:20:26.445414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:97152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:06.645 [2024-10-09 11:20:26.445422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:06.645 [2024-10-09 11:20:26.445432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:97280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:06.645 [2024-10-09 11:20:26.445440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:06.645 [2024-10-09 11:20:26.445449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:97408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:06.645 [2024-10-09 11:20:26.445457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:06.645 [2024-10-09 11:20:26.445470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:97536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:06.645 [2024-10-09 11:20:26.445478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:06.645 [2024-10-09 11:20:26.445489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:97664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:06.645 [2024-10-09 11:20:26.445497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:06.645 [2024-10-09 11:20:26.445506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:97792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:06.645 [2024-10-09 11:20:26.445514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:06.645 [2024-10-09 11:20:26.445524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:97920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:06.645 [2024-10-09 11:20:26.445532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:06.645 [2024-10-09 11:20:26.445542] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:98048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:06.645 [2024-10-09 11:20:26.445550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:06.645 [2024-10-09 11:20:26.445559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:98176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:06.645 [2024-10-09 11:20:26.445566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:06.645 11:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:40:06.645 [2024-10-09 11:20:26.445617] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1af09a0 was disconnected and freed. reset controller. 00:40:06.645 [2024-10-09 11:20:26.446792] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:40:06.645 task offset: 90112 on job bdev=Nvme0n1 fails 00:40:06.645 00:40:06.645 Latency(us) 00:40:06.645 [2024-10-09T09:20:26.647Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:06.645 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:40:06.645 Job: Nvme0n1 ended in about 0.52 seconds with error 00:40:06.645 Verification LBA range: start 0x0 length 0x400 00:40:06.645 Nvme0n1 : 0.52 1361.39 85.09 123.76 0.00 41990.73 1608.02 36129.10 00:40:06.645 [2024-10-09T09:20:26.647Z] =================================================================================================================== 00:40:06.645 [2024-10-09T09:20:26.647Z] Total : 1361.39 85.09 123.76 0.00 41990.73 1608.02 36129.10 00:40:06.645 [2024-10-09 11:20:26.448777] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:40:06.645 [2024-10-09 11:20:26.448798] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18d7c40 (9): Bad file descriptor 00:40:06.645 [2024-10-09 11:20:26.541691] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
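The episode above is the core of the host-management test: the harness polls bdevperf until real I/O is flowing, then revokes the host NQN from the subsystem while the job is still running. That revocation is what produces the wall of ABORTED - SQ DELETION (00/08) completions, the failed-job latency table, and the subsequent controller reset. Reduced to its moving parts (a sketch built from the traced commands; waitforio's exact retry pacing lives in host_management.sh):

    # Poll iostat over bdevperf's RPC socket until the bdev has completed a
    # minimum number of reads; the trace above shows read_io_count=643 >= 100.
    waitforio() {
        local rpc_addr=$1 bdev=$2 ret=1 i
        for ((i = 10; i != 0; i--)); do
            local count
            count=$(rpc_cmd -s "$rpc_addr" bdev_get_iostat -b "$bdev" |
                    jq -r '.bdevs[0].num_read_ops')
            [[ $count -ge 100 ]] && { ret=0; break; }
        done
        return $ret
    }

    waitforio /var/tmp/bdevperf.sock Nvme0n1
    # Sever the live connection: the target deletes the qpair and every
    # in-flight WRITE completes as ABORTED - SQ DELETION (00/08).
    rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    # Re-admit the host so bdev_nvme's automatic reset can reconnect
    # ("Resetting controller successful." above).
    rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0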
00:40:07.587 11:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2161919 00:40:07.587 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2161919) - No such process 00:40:07.587 11:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:40:07.587 11:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:40:07.587 11:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:40:07.587 11:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:40:07.587 11:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:40:07.587 11:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:40:07.587 11:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:40:07.587 11:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:40:07.587 { 00:40:07.587 "params": { 00:40:07.587 "name": "Nvme$subsystem", 00:40:07.587 "trtype": "$TEST_TRANSPORT", 00:40:07.587 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:07.587 "adrfam": "ipv4", 00:40:07.587 "trsvcid": "$NVMF_PORT", 00:40:07.587 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:07.587 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:07.587 "hdgst": ${hdgst:-false}, 00:40:07.587 "ddgst": ${ddgst:-false} 00:40:07.587 }, 00:40:07.587 "method": "bdev_nvme_attach_controller" 00:40:07.587 } 00:40:07.587 EOF 00:40:07.587 )") 00:40:07.587 11:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:40:07.587 11:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:40:07.587 11:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:40:07.587 11:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:40:07.587 "params": { 00:40:07.587 "name": "Nvme0", 00:40:07.587 "trtype": "tcp", 00:40:07.587 "traddr": "10.0.0.2", 00:40:07.587 "adrfam": "ipv4", 00:40:07.587 "trsvcid": "4420", 00:40:07.587 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:07.587 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:07.587 "hdgst": false, 00:40:07.587 "ddgst": false 00:40:07.587 }, 00:40:07.587 "method": "bdev_nvme_attach_controller" 00:40:07.587 }' 00:40:07.587 [2024-10-09 11:20:27.506025] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 
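Worth unpacking from the invocation above: bdevperf never reads a config file from disk. gen_nvmf_target_json prints the attach-controller JSON on stdout and the harness passes it as --json /dev/fd/62 through process substitution. A standalone equivalent of this second run (endpoint values are taken from the printf output above; the outer "subsystems"/"config" wrapper is an assumption about gen_nvmf_target_json's full output, since the trace only shows the params fragment):

    ./build/examples/bdevperf -q 64 -o 65536 -w verify -t 1 --json <(cat <<'EOF'
    {
      "subsystems": [{
        "subsystem": "bdev",
        "config": [{
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
            "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false, "ddgst": false
          }
        }]
      }]
    }
    EOF
    )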
00:40:07.587 [2024-10-09 11:20:27.506081] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2162262 ] 00:40:07.848 [2024-10-09 11:20:27.636537] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:40:07.848 [2024-10-09 11:20:27.668140] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:07.848 [2024-10-09 11:20:27.685442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:08.109 Running I/O for 1 seconds... 00:40:09.051 1606.00 IOPS, 100.38 MiB/s 00:40:09.051 Latency(us) 00:40:09.051 [2024-10-09T09:20:29.053Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:09.051 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:40:09.051 Verification LBA range: start 0x0 length 0x400 00:40:09.051 Nvme0n1 : 1.03 1626.11 101.63 0.00 0.00 38566.81 3859.24 36567.03 00:40:09.051 [2024-10-09T09:20:29.053Z] =================================================================================================================== 00:40:09.051 [2024-10-09T09:20:29.053Z] Total : 1626.11 101.63 0.00 0.00 38566.81 3859.24 36567.03 00:40:09.311 11:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:40:09.311 11:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:40:09.311 11:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:40:09.311 11:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:40:09.311 11:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:40:09.311 11:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@514 -- # nvmfcleanup 00:40:09.311 11:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:40:09.311 11:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:09.311 11:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:40:09.311 11:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:09.311 11:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:09.311 rmmod nvme_tcp 00:40:09.311 rmmod nvme_fabrics 00:40:09.311 rmmod nvme_keyring 00:40:09.311 11:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:09.311 11:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:40:09.311 11:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:40:09.311 11:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@515 -- # '[' -n 2161723 ']' 00:40:09.311 11:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # killprocess 2161723 00:40:09.311 11:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 2161723 ']' 00:40:09.311 11:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 2161723 00:40:09.311 11:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:40:09.311 11:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:40:09.311 11:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2161723 00:40:09.311 11:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:40:09.311 11:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:40:09.311 11:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2161723' 00:40:09.311 killing process with pid 2161723 00:40:09.311 11:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 2161723 00:40:09.311 11:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 2161723 00:40:09.572 [2024-10-09 11:20:29.314861] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:40:09.572 11:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:40:09.572 11:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:40:09.572 11:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:40:09.572 11:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:40:09.572 11:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-save 00:40:09.572 11:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-restore 00:40:09.572 11:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:40:09.572 11:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:09.572 11:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:09.572 11:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:09.572 11:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:09.572 11:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:11.484 11:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:11.484 
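The teardown traced above is deliberately tolerant: kernel NVMe-oF modules can still be busy right after the job exits, so nvmftestfini disables errexit and retries the unloads, then scrubs only the SPDK-tagged firewall rules and the test interface. In outline (a sketch of the pattern, not nvmf/common.sh verbatim):

    set +e
    for i in {1..20}; do
        # -v -r prints the underlying rmmod calls captured in the log
        modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
        sleep 1
    done
    set -e
    # keep every iptables rule except the SPDK_NVMF-tagged ones
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip -4 addr flush cvl_0_1   # interface name taken from the trace above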
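As a quick sanity check on the two result tables: with -o 65536 every I/O is 64 KiB, so the MiB/s column should equal IOPS divided by 16, and both runs agree:

    # 1-second run:  1626.11 IOPS / 16 = 101.63 MiB/s  (matches the table)
    # failed run:    1361.39 IOPS / 16 =  85.09 MiB/s  (also matches)
    echo '1626.11 * 65536 / 1048576' | bc -l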
11:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:40:11.484 00:40:11.484 real 0m14.546s 00:40:11.484 user 0m19.247s 00:40:11.484 sys 0m7.391s 00:40:11.484 11:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:11.484 11:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:11.484 ************************************ 00:40:11.484 END TEST nvmf_host_management 00:40:11.484 ************************************ 00:40:11.484 11:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:40:11.484 11:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:40:11.484 11:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:11.484 11:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:11.745 ************************************ 00:40:11.745 START TEST nvmf_lvol 00:40:11.745 ************************************ 00:40:11.745 11:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:40:11.745 * Looking for test storage... 00:40:11.745 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:11.745 11:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:40:11.745 11:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:40:11.745 11:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:40:11.745 11:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:40:11.745 11:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:11.745 11:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:11.745 11:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:11.745 11:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:40:11.745 11:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:40:11.745 11:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:40:11.745 11:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:40:11.745 11:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:40:11.745 11:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:40:11.745 11:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:40:11.745 11:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:11.745 11:20:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:40:11.745 11:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:40:11.745 11:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:11.745 11:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:11.746 11:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:40:11.746 11:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:40:11.746 11:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:11.746 11:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:40:11.746 11:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:40:11.746 11:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:40:11.746 11:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:40:11.746 11:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:11.746 11:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:40:11.746 11:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:40:11.746 11:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:11.746 11:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:11.746 11:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:40:11.746 11:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:11.746 11:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:40:11.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:11.746 --rc genhtml_branch_coverage=1 00:40:11.746 --rc genhtml_function_coverage=1 00:40:11.746 --rc genhtml_legend=1 00:40:11.746 --rc geninfo_all_blocks=1 00:40:11.746 --rc geninfo_unexecuted_blocks=1 00:40:11.746 00:40:11.746 ' 00:40:11.746 11:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:40:11.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:11.746 --rc genhtml_branch_coverage=1 00:40:11.746 --rc genhtml_function_coverage=1 00:40:11.746 --rc genhtml_legend=1 00:40:11.746 --rc geninfo_all_blocks=1 00:40:11.746 --rc geninfo_unexecuted_blocks=1 00:40:11.746 00:40:11.746 ' 00:40:11.746 11:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:40:11.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:11.746 --rc genhtml_branch_coverage=1 00:40:11.746 --rc genhtml_function_coverage=1 00:40:11.746 --rc genhtml_legend=1 00:40:11.746 --rc geninfo_all_blocks=1 00:40:11.746 --rc geninfo_unexecuted_blocks=1 00:40:11.746 00:40:11.746 ' 00:40:11.746 11:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@1705 -- # LCOV='lcov 00:40:11.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:11.746 --rc genhtml_branch_coverage=1 00:40:11.746 --rc genhtml_function_coverage=1 00:40:11.746 --rc genhtml_legend=1 00:40:11.746 --rc geninfo_all_blocks=1 00:40:11.746 --rc geninfo_unexecuted_blocks=1 00:40:11.746 00:40:11.746 ' 00:40:11.746 11:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:11.746 11:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:40:11.746 11:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:11.746 11:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:11.746 11:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:11.746 11:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:11.746 11:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:11.746 11:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:11.746 11:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:11.746 11:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:11.746 11:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:11.746 11:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:11.746 11:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:40:11.746 11:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:40:11.746 11:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:11.746 11:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:11.746 11:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:11.746 11:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:11.746 11:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:11.746 11:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:40:11.746 11:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:11.746 11:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:11.746 11:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:11.746 11:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:11.746 11:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:11.746 11:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:11.746 11:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:40:11.746 11:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:11.746 11:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:40:11.746 11:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:11.746 11:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:11.746 11:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:11.746 11:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:11.746 11:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:11.746 11:20:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:11.746 11:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:11.746 11:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:11.746 11:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:11.746 11:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:11.746 11:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:11.746 11:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:11.746 11:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:40:11.746 11:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:40:11.746 11:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:11.746 11:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:40:11.746 11:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:40:11.746 11:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:11.746 11:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # prepare_net_devs 00:40:11.746 11:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@436 -- # local -g is_hw=no 00:40:11.746 11:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # remove_spdk_ns 00:40:11.746 11:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:11.746 11:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:11.746 11:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:11.746 11:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:40:11.746 11:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:40:11.746 11:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:40:11.746 11:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:40:19.884 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:19.884 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:40:19.884 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:19.884 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:19.884 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:19.884 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:40:19.884 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:19.884 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:40:19.884 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:19.884 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:40:19.884 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:40:19.884 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:40:19.884 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:40:19.884 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:40:19.884 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:40:19.884 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:19.884 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:19.884 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:19.884 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:19.884 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:19.884 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:19.884 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:19.884 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:19.884 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:19.884 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:19.884 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:19.884 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:19.884 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:19.884 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:19.884 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:19.884 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:19.884 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:19.884 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:19.884 11:20:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:19.884 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:40:19.884 Found 0000:31:00.0 (0x8086 - 0x159b) 00:40:19.884 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:19.884 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:19.884 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:19.884 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:19.884 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:19.884 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:19.884 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:40:19.884 Found 0000:31:00.1 (0x8086 - 0x159b) 00:40:19.884 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:19.884 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:19.884 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:19.884 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:19.884 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:19.884 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:19.884 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:19.884 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:19.884 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:40:19.884 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:19.884 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:40:19.884 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:19.884 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:40:19.884 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:40:19.884 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:19.884 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:40:19.884 Found net devices under 0000:31:00.0: cvl_0_0 00:40:19.884 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:40:19.884 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@408 -- # for 
pci in "${pci_devs[@]}" 00:40:19.884 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:19.884 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:40:19.884 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:19.884 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:40:19.884 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:40:19.884 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:19.884 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:40:19.884 Found net devices under 0000:31:00.1: cvl_0_1 00:40:19.884 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:40:19.884 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:40:19.884 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # is_hw=yes 00:40:19.884 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:40:19.884 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:40:19.884 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:40:19.884 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:19.884 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:19.884 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:19.884 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:19.884 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:19.884 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:19.885 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:19.885 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:19.885 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:19.885 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:19.885 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:19.885 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:19.885 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:19.885 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:19.885 
11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:19.885 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:19.885 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:19.885 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:19.885 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:19.885 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:19.885 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:19.885 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:19.885 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:19.885 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:19.885 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.595 ms 00:40:19.885 00:40:19.885 --- 10.0.0.2 ping statistics --- 00:40:19.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:19.885 rtt min/avg/max/mdev = 0.595/0.595/0.595/0.000 ms 00:40:19.885 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:19.885 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:19.885 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.258 ms 00:40:19.885 00:40:19.885 --- 10.0.0.1 ping statistics --- 00:40:19.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:19.885 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:40:19.885 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:19.885 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@448 -- # return 0 00:40:19.885 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:40:19.885 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:19.885 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:40:19.885 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:40:19.885 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:19.885 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:40:19.885 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:40:19.885 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:40:19.885 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:40:19.885 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:19.885 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:40:19.885 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:40:19.885 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # nvmfpid=2166839 00:40:19.885 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # waitforlisten 2166839 00:40:19.885 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 2166839 ']' 00:40:19.885 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:19.885 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:19.885 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:19.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:19.885 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:19.885 11:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:40:19.885 [2024-10-09 11:20:38.893843] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
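The nvmf_tcp_init sequence traced above builds a small two-namespace topology on one dual-port NIC: the first port (cvl_0_0 on this rig) is moved into a fresh network namespace and addressed as the NVMe/TCP target, while its peer port (cvl_0_1) stays in the host namespace as the initiator. A condensed, standalone sketch of the same setup, assuming those interface names and the harness's 10.0.0.0/24 addressing:

    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator stays in the host
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # the ipts wrapper tags each rule so teardown can strip only SPDK's rules later,
    # via: iptables-save | grep -v SPDK_NVMF | iptables-restore
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                   # host -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> host

Every command is lifted from the trace; only the grouping and comments are added.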
00:40:19.885 [2024-10-09 11:20:38.894846] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:40:19.885 [2024-10-09 11:20:38.894883] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:19.885 [2024-10-09 11:20:39.030716] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:40:19.885 [2024-10-09 11:20:39.061710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:40:19.885 [2024-10-09 11:20:39.079168] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:19.885 [2024-10-09 11:20:39.079198] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:19.885 [2024-10-09 11:20:39.079206] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:19.885 [2024-10-09 11:20:39.079212] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:19.885 [2024-10-09 11:20:39.079218] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:19.885 [2024-10-09 11:20:39.080503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:19.885 [2024-10-09 11:20:39.080567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:40:19.885 [2024-10-09 11:20:39.080570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:19.885 [2024-10-09 11:20:39.128644] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:19.885 [2024-10-09 11:20:39.129001] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:40:19.885 [2024-10-09 11:20:39.129438] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:40:19.885 [2024-10-09 11:20:39.129705] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:40:19.885 11:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:19.885 11:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:40:19.885 11:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:40:19.885 11:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:19.885 11:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:40:19.885 11:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:19.885 11:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:40:19.885 [2024-10-09 11:20:39.869234] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:20.145 11:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:20.145 11:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:40:20.145 11:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:20.405 11:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:40:20.405 11:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:40:20.666 11:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:40:20.666 11:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=8d052b1f-7e69-4f7d-9bec-d61fdef1828e 00:40:20.666 11:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8d052b1f-7e69-4f7d-9bec-d61fdef1828e lvol 20 00:40:20.926 11:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=3ca4d15a-02e9-489c-8da6-5a49fe58e9a0 00:40:20.926 11:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:40:21.186 11:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3ca4d15a-02e9-489c-8da6-5a49fe58e9a0 00:40:21.186 11:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:21.447 [2024-10-09 11:20:41.281346] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:40:21.447 11:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:40:21.707 11:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2167366 00:40:21.707 11:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:40:21.707 11:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:40:22.648 11:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 3ca4d15a-02e9-489c-8da6-5a49fe58e9a0 MY_SNAPSHOT 00:40:22.908 11:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=49c3ad88-ebca-4957-a45a-f51bebdc2fe0 00:40:22.908 11:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 3ca4d15a-02e9-489c-8da6-5a49fe58e9a0 30 00:40:23.168 11:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 49c3ad88-ebca-4957-a45a-f51bebdc2fe0 MY_CLONE 00:40:23.429 11:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=8092087f-5914-47cd-85a2-4c90e99f08d1 00:40:23.429 11:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 8092087f-5914-47cd-85a2-4c90e99f08d1 00:40:23.689 11:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2167366 00:40:33.688 Initializing NVMe Controllers 00:40:33.688 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:40:33.688 Controller IO queue size 128, less than required. 00:40:33.688 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:40:33.688 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:40:33.688 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:40:33.688 Initialization complete. Launching workers. 
00:40:33.688 ======================================================== 00:40:33.688 Latency(us) 00:40:33.688 Device Information : IOPS MiB/s Average min max 00:40:33.688 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12258.69 47.89 10449.17 2145.99 73497.56 00:40:33.688 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 15907.78 62.14 8048.24 713.28 54785.64 00:40:33.688 ======================================================== 00:40:33.688 Total : 28166.46 110.03 9093.18 713.28 73497.56 00:40:33.688 00:40:33.688 11:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:33.688 11:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 3ca4d15a-02e9-489c-8da6-5a49fe58e9a0 00:40:33.688 11:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8d052b1f-7e69-4f7d-9bec-d61fdef1828e 00:40:33.688 11:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:40:33.688 11:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:40:33.689 11:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:40:33.689 11:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@514 -- # nvmfcleanup 00:40:33.689 11:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:40:33.689 11:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:33.689 11:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:40:33.689 11:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:33.689 11:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:33.689 rmmod nvme_tcp 00:40:33.689 rmmod nvme_fabrics 00:40:33.689 rmmod nvme_keyring 00:40:33.689 11:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:33.689 11:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:40:33.689 11:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:40:33.689 11:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@515 -- # '[' -n 2166839 ']' 00:40:33.689 11:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # killprocess 2166839 00:40:33.689 11:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 2166839 ']' 00:40:33.689 11:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 2166839 00:40:33.689 11:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:40:33.689 11:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:40:33.689 11:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2166839 00:40:33.689 11:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:40:33.689 11:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:40:33.689 11:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2166839' 00:40:33.689 killing process with pid 2166839 00:40:33.689 11:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 2166839 00:40:33.689 11:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 2166839 00:40:33.689 11:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:40:33.689 11:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:40:33.689 11:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:40:33.689 11:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:40:33.689 11:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-save 00:40:33.689 11:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:40:33.689 11:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-restore 00:40:33.689 11:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:33.689 11:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:33.689 11:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:33.689 11:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:33.689 11:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:35.074 11:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:35.074 00:40:35.074 real 0m23.320s 00:40:35.074 user 0m55.366s 00:40:35.074 sys 0m10.389s 00:40:35.074 11:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:35.074 11:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:40:35.074 ************************************ 00:40:35.074 END TEST nvmf_lvol 00:40:35.074 ************************************ 00:40:35.074 11:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:40:35.074 11:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:40:35.074 11:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:35.074 11:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:35.074 ************************************ 00:40:35.074 START TEST nvmf_lvs_grow 00:40:35.074 
************************************ 00:40:35.074 11:20:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:40:35.074 * Looking for test storage... 00:40:35.074 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:35.074 11:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:40:35.074 11:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:40:35.074 11:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:40:35.335 11:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:40:35.335 11:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:35.335 11:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:35.335 11:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:35.335 11:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:40:35.335 11:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:40:35.335 11:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:40:35.335 11:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:40:35.335 11:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:40:35.335 11:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:40:35.335 11:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:40:35.335 11:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:35.335 11:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:40:35.335 11:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:40:35.335 11:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:35.335 11:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:35.335 11:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:40:35.336 11:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:40:35.336 11:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:35.336 11:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:40:35.336 11:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:40:35.336 11:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:40:35.336 11:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:40:35.336 11:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:35.336 11:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:40:35.336 11:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:40:35.336 11:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:35.336 11:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:35.336 11:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:40:35.336 11:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:35.336 11:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:40:35.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:35.336 --rc genhtml_branch_coverage=1 00:40:35.336 --rc genhtml_function_coverage=1 00:40:35.336 --rc genhtml_legend=1 00:40:35.336 --rc geninfo_all_blocks=1 00:40:35.336 --rc geninfo_unexecuted_blocks=1 00:40:35.336 00:40:35.336 ' 00:40:35.336 11:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:40:35.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:35.336 --rc genhtml_branch_coverage=1 00:40:35.336 --rc genhtml_function_coverage=1 00:40:35.336 --rc genhtml_legend=1 00:40:35.336 --rc geninfo_all_blocks=1 00:40:35.336 --rc geninfo_unexecuted_blocks=1 00:40:35.336 00:40:35.336 ' 00:40:35.336 11:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:40:35.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:35.336 --rc genhtml_branch_coverage=1 00:40:35.336 --rc genhtml_function_coverage=1 00:40:35.336 --rc genhtml_legend=1 00:40:35.336 --rc geninfo_all_blocks=1 00:40:35.336 --rc geninfo_unexecuted_blocks=1 00:40:35.336 00:40:35.336 ' 00:40:35.336 11:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:40:35.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:35.336 --rc genhtml_branch_coverage=1 00:40:35.336 --rc genhtml_function_coverage=1 00:40:35.336 --rc genhtml_legend=1 00:40:35.336 --rc geninfo_all_blocks=1 00:40:35.336 --rc geninfo_unexecuted_blocks=1 00:40:35.336 00:40:35.336 ' 00:40:35.336 11:20:55 
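The scripts/common.sh trace above is the harness gating on the installed lcov version: lt 1.15 2 splits both version strings on ".-:" and compares them field by field as integers, and because 1.15 predates lcov 2 the legacy --rc lcov_branch_coverage/lcov_function_coverage spellings are kept. A rough bash equivalent of that comparison (the function name ver_lt is a stand-in, not the script's):

    ver_lt() {                          # returns 0 (true) when $1 < $2
        local IFS=.-: i
        local -a a=($1) b=($2)
        for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # first lower field decides
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                        # equal is not less-than
    }
    ver_lt 1.15 2 && LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"

Missing fields count as 0, and here the very first fields (1 vs 2) already decide the result.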
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:35.336 11:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:40:35.336 11:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:35.336 11:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:35.336 11:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:35.336 11:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:35.336 11:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:35.336 11:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:35.336 11:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:35.336 11:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:35.336 11:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:35.336 11:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:35.336 11:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:40:35.336 11:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:40:35.336 11:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:35.336 11:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:35.336 11:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:35.336 11:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:35.336 11:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:35.336 11:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:40:35.336 11:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:35.336 11:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:35.336 11:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:35.336 11:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:35.336 11:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:35.336 11:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:35.336 11:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:40:35.336 11:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:35.336 11:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:40:35.336 11:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:35.336 11:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:35.336 11:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:35.336 11:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:35.336 11:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:40:35.336 11:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:35.336 11:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:35.336 11:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:35.336 11:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:35.336 11:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:35.336 11:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:35.336 11:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:40:35.336 11:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:40:35.336 11:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:40:35.336 11:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:35.336 11:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # prepare_net_devs 00:40:35.336 11:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@436 -- # local -g is_hw=no 00:40:35.336 11:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # remove_spdk_ns 00:40:35.336 11:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:35.336 11:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:35.336 11:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:35.336 11:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:40:35.336 11:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:40:35.336 11:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:40:35.336 11:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:40:43.472 11:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:43.472 11:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:40:43.472 11:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:43.472 11:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:43.472 11:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:43.472 11:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:43.472 11:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:43.472 11:21:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:40:43.472 11:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:43.472 11:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:40:43.472 11:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:40:43.472 11:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:40:43.472 11:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:40:43.472 11:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:40:43.472 11:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:40:43.472 11:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:43.472 11:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:43.472 11:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:43.472 11:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:43.472 11:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:43.472 11:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:43.472 11:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:43.472 11:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:43.472 11:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:43.472 11:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:43.472 11:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:43.472 11:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:43.472 11:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:43.472 11:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:43.472 11:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:43.472 11:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:43.472 11:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:43.472 11:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:43.472 11:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:40:43.472 11:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:40:43.472 Found 0000:31:00.0 (0x8086 - 0x159b) 00:40:43.472 11:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:43.472 11:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:43.472 11:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:43.472 11:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:43.472 11:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:43.472 11:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:43.472 11:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:40:43.472 Found 0000:31:00.1 (0x8086 - 0x159b) 00:40:43.472 11:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:43.472 11:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:43.472 11:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:43.472 11:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:43.472 11:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:43.472 11:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:43.472 11:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:43.472 11:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:43.472 11:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:40:43.472 11:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:43.472 11:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:40:43.472 11:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:43.472 11:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:40:43.472 11:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:40:43.472 11:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:43.472 11:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:40:43.472 Found net devices under 0000:31:00.0: cvl_0_0 00:40:43.472 11:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:40:43.472 11:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for 
pci in "${pci_devs[@]}" 00:40:43.472 11:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:43.472 11:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:40:43.472 11:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:43.472 11:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:40:43.472 11:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:40:43.472 11:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:43.472 11:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:40:43.472 Found net devices under 0000:31:00.1: cvl_0_1 00:40:43.472 11:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:40:43.472 11:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:40:43.472 11:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # is_hw=yes 00:40:43.472 11:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:40:43.472 11:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:40:43.472 11:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:40:43.472 11:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:43.472 11:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:43.472 11:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:43.472 11:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:43.472 11:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:43.472 11:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:43.472 11:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:43.472 11:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:43.472 11:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:43.472 11:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:43.472 11:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:43.472 11:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:43.472 11:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:43.472 11:21:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:43.472 11:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:43.472 11:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:43.472 11:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:43.472 11:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:43.472 11:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:43.472 11:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:43.472 11:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:43.472 11:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:43.472 11:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:43.472 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:43.472 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.614 ms 00:40:43.472 00:40:43.472 --- 10.0.0.2 ping statistics --- 00:40:43.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:43.472 rtt min/avg/max/mdev = 0.614/0.614/0.614/0.000 ms 00:40:43.473 11:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:43.473 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:43.473 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.267 ms 00:40:43.473 00:40:43.473 --- 10.0.0.1 ping statistics --- 00:40:43.473 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:43.473 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:40:43.473 11:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:43.473 11:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@448 -- # return 0 00:40:43.473 11:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:40:43.473 11:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:43.473 11:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:40:43.473 11:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:40:43.473 11:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:43.473 11:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:40:43.473 11:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:40:43.473 11:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:40:43.473 11:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:40:43.473 11:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:43.473 11:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:40:43.473 11:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # nvmfpid=2173726 00:40:43.473 11:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # waitforlisten 2173726 00:40:43.473 11:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:40:43.473 11:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 2173726 ']' 00:40:43.473 11:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:43.473 11:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:43.473 11:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:43.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:43.473 11:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:43.473 11:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:40:43.473 [2024-10-09 11:21:02.400898] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
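The nvmf/common.sh trace above builds the TCP test bed: the first E810 port (cvl_0_0) is moved into a private network namespace to play the target, the second (cvl_0_1) stays in the root namespace as the initiator, a first-position iptables rule opens TCP/4420, and both directions are verified with ping before nvme-tcp is loaded. A condensed replay of the commands visible in the trace, using the interface and namespace names from this run:

  ip netns add cvl_0_0_ns_spdk                      # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move target port inside
  ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator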
00:40:43.473 [2024-10-09 11:21:02.402093] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:40:43.473 [2024-10-09 11:21:02.402146] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:43.473 [2024-10-09 11:21:02.543191] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:40:43.473 [2024-10-09 11:21:02.576408] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:43.473 [2024-10-09 11:21:02.598416] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:43.473 [2024-10-09 11:21:02.598459] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:43.473 [2024-10-09 11:21:02.598475] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:43.473 [2024-10-09 11:21:02.598482] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:43.473 [2024-10-09 11:21:02.598489] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:43.473 [2024-10-09 11:21:02.599116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:43.473 [2024-10-09 11:21:02.650727] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:43.473 [2024-10-09 11:21:02.650979] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
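With the namespace wired up, nvmfappstart launches the target inside it on a single core with interrupt mode enabled, and the test then creates the TCP transport over the default RPC socket. A minimal sketch of those two steps as this trace runs them (workspace prefix shortened; -e 0xFFFF is the tracepoint group mask echoed in the NOTICE lines above):

  # nvmf_tgt runs inside the target namespace, core mask 0x1, interrupt mode
  ip netns exec cvl_0_0_ns_spdk \
      ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &
  nvmfpid=$!
  # once /var/tmp/spdk.sock is listening, add the transport;
  # '-t tcp -o -u 8192' are the NVMF_TRANSPORT_OPTS this run passes
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192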
00:40:43.473 11:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:43.473 11:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:40:43.473 11:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:40:43.473 11:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:43.473 11:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:40:43.473 11:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:43.473 11:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:40:43.473 [2024-10-09 11:21:03.419618] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:43.473 11:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:40:43.473 11:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:40:43.473 11:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:43.473 11:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:40:43.473 ************************************ 00:40:43.473 START TEST lvs_grow_clean 00:40:43.473 ************************************ 00:40:43.733 11:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:40:43.733 11:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:40:43.733 11:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:40:43.733 11:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:40:43.733 11:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:40:43.733 11:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:40:43.733 11:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:40:43.733 11:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:40:43.733 11:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:40:43.733 11:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:40:43.733 11:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:40:43.733 11:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:40:43.993 11:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=928c42a9-43a8-4626-99ee-af1c54b84dbf 00:40:43.994 11:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 928c42a9-43a8-4626-99ee-af1c54b84dbf 00:40:43.994 11:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:40:44.254 11:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:40:44.254 11:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:40:44.254 11:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 928c42a9-43a8-4626-99ee-af1c54b84dbf lvol 150 00:40:44.254 11:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=5f2b9151-8b2a-41f2-aeea-9c12eca1ef44 00:40:44.254 11:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:40:44.254 11:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:40:44.514 [2024-10-09 11:21:04.347507] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:40:44.514 [2024-10-09 11:21:04.347581] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:40:44.514 true 00:40:44.514 11:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 928c42a9-43a8-4626-99ee-af1c54b84dbf 00:40:44.514 11:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:40:44.774 11:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:40:44.774 11:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:40:44.774 11:21:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 5f2b9151-8b2a-41f2-aeea-9c12eca1ef44 00:40:45.034 11:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:45.034 [2024-10-09 11:21:05.007807] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:45.034 11:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:40:45.294 11:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2174253 00:40:45.294 11:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:40:45.294 11:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:40:45.294 11:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2174253 /var/tmp/bdevperf.sock 00:40:45.294 11:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 2174253 ']' 00:40:45.294 11:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:40:45.294 11:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:45.294 11:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:40:45.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:40:45.294 11:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:45.294 11:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:40:45.294 [2024-10-09 11:21:05.248969] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:40:45.294 [2024-10-09 11:21:05.249037] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2174253 ] 00:40:45.555 [2024-10-09 11:21:05.382848] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
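lvs_grow then provisions the device under test: a 200 MiB file-backed AIO bdev, a logical-volume store on top of it with a 4 MiB cluster size, and a 150 MiB lvol exported through the subsystem and listener. Condensed from the rpc.py calls in the trace; $lvs and $lvol stand in for the UUIDs this run returned (928c42a9-... and 5f2b9151-...):

  truncate -s 200M test/nvmf/target/aio_bdev            # backing file
  rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096
  rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
         --md-pages-per-cluster-ratio 300 aio_bdev lvs  # prints lvs UUID
  rpc.py bdev_lvol_create -u "$lvs" lvol 150            # 150 MiB lvol
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
         -t tcp -a 10.0.0.2 -s 4420

The bdevperf attach that follows (bdev_nvme_attach_controller -b Nvme0 ... -n nqn.2016-06.io.spdk:cnode0) is why the namespace shows up as Nvme0n1 with 38912 blocks of 4096 bytes: 150 MiB rounds up to 38 whole 4 MiB clusters, i.e. 152 MiB = 38912 * 4 KiB, matching the "num_allocated_clusters": 38 reported for the lvol later in this log.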
00:40:45.555 [2024-10-09 11:21:05.433310] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:45.555 [2024-10-09 11:21:05.461126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:46.135 11:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:46.135 11:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:40:46.135 11:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:40:46.397 Nvme0n1 00:40:46.397 11:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:40:46.657 [ 00:40:46.657 { 00:40:46.657 "name": "Nvme0n1", 00:40:46.657 "aliases": [ 00:40:46.657 "5f2b9151-8b2a-41f2-aeea-9c12eca1ef44" 00:40:46.657 ], 00:40:46.657 "product_name": "NVMe disk", 00:40:46.657 "block_size": 4096, 00:40:46.657 "num_blocks": 38912, 00:40:46.657 "uuid": "5f2b9151-8b2a-41f2-aeea-9c12eca1ef44", 00:40:46.657 "numa_id": 0, 00:40:46.657 "assigned_rate_limits": { 00:40:46.657 "rw_ios_per_sec": 0, 00:40:46.657 "rw_mbytes_per_sec": 0, 00:40:46.657 "r_mbytes_per_sec": 0, 00:40:46.657 "w_mbytes_per_sec": 0 00:40:46.657 }, 00:40:46.657 "claimed": false, 00:40:46.657 "zoned": false, 00:40:46.657 "supported_io_types": { 00:40:46.657 "read": true, 00:40:46.657 "write": true, 00:40:46.657 "unmap": true, 00:40:46.657 "flush": true, 00:40:46.657 "reset": true, 00:40:46.657 "nvme_admin": true, 00:40:46.657 "nvme_io": true, 00:40:46.657 "nvme_io_md": false, 00:40:46.657 "write_zeroes": true, 00:40:46.657 "zcopy": false, 00:40:46.657 "get_zone_info": false, 00:40:46.657 "zone_management": false, 00:40:46.657 "zone_append": false, 00:40:46.657 "compare": true, 00:40:46.657 "compare_and_write": true, 00:40:46.657 "abort": true, 00:40:46.657 "seek_hole": false, 00:40:46.657 "seek_data": false, 00:40:46.657 "copy": true, 00:40:46.657 "nvme_iov_md": false 00:40:46.657 }, 00:40:46.657 "memory_domains": [ 00:40:46.657 { 00:40:46.657 "dma_device_id": "system", 00:40:46.657 "dma_device_type": 1 00:40:46.657 } 00:40:46.657 ], 00:40:46.657 "driver_specific": { 00:40:46.657 "nvme": [ 00:40:46.657 { 00:40:46.657 "trid": { 00:40:46.657 "trtype": "TCP", 00:40:46.657 "adrfam": "IPv4", 00:40:46.657 "traddr": "10.0.0.2", 00:40:46.657 "trsvcid": "4420", 00:40:46.657 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:40:46.657 }, 00:40:46.657 "ctrlr_data": { 00:40:46.657 "cntlid": 1, 00:40:46.657 "vendor_id": "0x8086", 00:40:46.657 "model_number": "SPDK bdev Controller", 00:40:46.658 "serial_number": "SPDK0", 00:40:46.658 "firmware_revision": "25.01", 00:40:46.658 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:46.658 "oacs": { 00:40:46.658 "security": 0, 00:40:46.658 "format": 0, 00:40:46.658 "firmware": 0, 00:40:46.658 "ns_manage": 0 00:40:46.658 }, 00:40:46.658 "multi_ctrlr": true, 00:40:46.658 "ana_reporting": false 00:40:46.658 }, 00:40:46.658 "vs": { 00:40:46.658 "nvme_version": "1.3" 00:40:46.658 }, 00:40:46.658 "ns_data": { 00:40:46.658 "id": 1, 00:40:46.658 "can_share": true 00:40:46.658 } 00:40:46.658 } 00:40:46.658 ], 00:40:46.658 
"mp_policy": "active_passive" 00:40:46.658 } 00:40:46.658 } 00:40:46.658 ] 00:40:46.658 11:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2174588 00:40:46.658 11:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:40:46.658 11:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:40:46.658 Running I/O for 10 seconds... 00:40:48.037 Latency(us) 00:40:48.037 [2024-10-09T09:21:08.039Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:48.037 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:48.037 Nvme0n1 : 1.00 17811.00 69.57 0.00 0.00 0.00 0.00 0.00 00:40:48.037 [2024-10-09T09:21:08.039Z] =================================================================================================================== 00:40:48.037 [2024-10-09T09:21:08.039Z] Total : 17811.00 69.57 0.00 0.00 0.00 0.00 0.00 00:40:48.037 00:40:48.606 11:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 928c42a9-43a8-4626-99ee-af1c54b84dbf 00:40:48.866 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:48.866 Nvme0n1 : 2.00 17834.00 69.66 0.00 0.00 0.00 0.00 0.00 00:40:48.866 [2024-10-09T09:21:08.868Z] =================================================================================================================== 00:40:48.866 [2024-10-09T09:21:08.868Z] Total : 17834.00 69.66 0.00 0.00 0.00 0.00 0.00 00:40:48.866 00:40:48.866 true 00:40:48.866 11:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 928c42a9-43a8-4626-99ee-af1c54b84dbf 00:40:48.866 11:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:40:49.127 11:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:40:49.127 11:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:40:49.127 11:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2174588 00:40:49.698 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:49.698 Nvme0n1 : 3.00 17852.33 69.74 0.00 0.00 0.00 0.00 0.00 00:40:49.698 [2024-10-09T09:21:09.700Z] =================================================================================================================== 00:40:49.698 [2024-10-09T09:21:09.700Z] Total : 17852.33 69.74 0.00 0.00 0.00 0.00 0.00 00:40:49.698 00:40:50.639 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:50.639 Nvme0n1 : 4.00 17860.75 69.77 0.00 0.00 0.00 0.00 0.00 00:40:50.639 [2024-10-09T09:21:10.641Z] =================================================================================================================== 00:40:50.639 [2024-10-09T09:21:10.641Z] Total : 17860.75 69.77 0.00 0.00 0.00 
0.00 0.00 00:40:50.639 00:40:52.023 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:52.023 Nvme0n1 : 5.00 17885.60 69.87 0.00 0.00 0.00 0.00 0.00 00:40:52.023 [2024-10-09T09:21:12.025Z] =================================================================================================================== 00:40:52.023 [2024-10-09T09:21:12.026Z] Total : 17885.60 69.87 0.00 0.00 0.00 0.00 0.00 00:40:52.024 00:40:52.964 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:52.964 Nvme0n1 : 6.00 17901.83 69.93 0.00 0.00 0.00 0.00 0.00 00:40:52.964 [2024-10-09T09:21:12.966Z] =================================================================================================================== 00:40:52.964 [2024-10-09T09:21:12.966Z] Total : 17901.83 69.93 0.00 0.00 0.00 0.00 0.00 00:40:52.964 00:40:53.905 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:53.905 Nvme0n1 : 7.00 17918.57 69.99 0.00 0.00 0.00 0.00 0.00 00:40:53.905 [2024-10-09T09:21:13.907Z] =================================================================================================================== 00:40:53.905 [2024-10-09T09:21:13.907Z] Total : 17918.57 69.99 0.00 0.00 0.00 0.00 0.00 00:40:53.905 00:40:54.845 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:54.845 Nvme0n1 : 8.00 17930.50 70.04 0.00 0.00 0.00 0.00 0.00 00:40:54.845 [2024-10-09T09:21:14.847Z] =================================================================================================================== 00:40:54.845 [2024-10-09T09:21:14.847Z] Total : 17930.50 70.04 0.00 0.00 0.00 0.00 0.00 00:40:54.845 00:40:55.786 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:55.786 Nvme0n1 : 9.00 17943.56 70.09 0.00 0.00 0.00 0.00 0.00 00:40:55.786 [2024-10-09T09:21:15.788Z] =================================================================================================================== 00:40:55.786 [2024-10-09T09:21:15.788Z] Total : 17943.56 70.09 0.00 0.00 0.00 0.00 0.00 00:40:55.786 00:40:56.726 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:56.726 Nvme0n1 : 10.00 17954.00 70.13 0.00 0.00 0.00 0.00 0.00 00:40:56.726 [2024-10-09T09:21:16.728Z] =================================================================================================================== 00:40:56.726 [2024-10-09T09:21:16.728Z] Total : 17954.00 70.13 0.00 0.00 0.00 0.00 0.00 00:40:56.726 00:40:56.726 00:40:56.726 Latency(us) 00:40:56.726 [2024-10-09T09:21:16.728Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:56.726 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:56.726 Nvme0n1 : 10.00 17952.82 70.13 0.00 0.00 7126.00 2107.53 12754.67 00:40:56.726 [2024-10-09T09:21:16.728Z] =================================================================================================================== 00:40:56.726 [2024-10-09T09:21:16.728Z] Total : 17952.82 70.13 0.00 0.00 7126.00 2107.53 12754.67 00:40:56.726 { 00:40:56.726 "results": [ 00:40:56.726 { 00:40:56.726 "job": "Nvme0n1", 00:40:56.726 "core_mask": "0x2", 00:40:56.726 "workload": "randwrite", 00:40:56.726 "status": "finished", 00:40:56.726 "queue_depth": 128, 00:40:56.726 "io_size": 4096, 00:40:56.726 "runtime": 10.004168, 00:40:56.726 "iops": 17952.81726576363, 00:40:56.726 "mibps": 70.12819244438919, 00:40:56.726 "io_failed": 0, 00:40:56.726 "io_timeout": 0, 00:40:56.726 "avg_latency_us": 
7126.004475040534, 00:40:56.726 "min_latency_us": 2107.530905446041, 00:40:56.726 "max_latency_us": 12754.66755763448 00:40:56.726 } 00:40:56.726 ], 00:40:56.726 "core_count": 1 00:40:56.726 } 00:40:56.726 11:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2174253 00:40:56.726 11:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 2174253 ']' 00:40:56.726 11:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 2174253 00:40:56.726 11:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:40:56.726 11:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:40:56.726 11:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2174253 00:40:56.726 11:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:40:56.726 11:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:40:56.726 11:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2174253' 00:40:56.727 killing process with pid 2174253 00:40:56.727 11:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 2174253 00:40:56.727 Received shutdown signal, test time was about 10.000000 seconds 00:40:56.727 00:40:56.727 Latency(us) 00:40:56.727 [2024-10-09T09:21:16.729Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:56.727 [2024-10-09T09:21:16.729Z] =================================================================================================================== 00:40:56.727 [2024-10-09T09:21:16.729Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:40:56.727 11:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 2174253 00:40:56.987 11:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:40:56.987 11:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:57.247 11:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 928c42a9-43a8-4626-99ee-af1c54b84dbf 00:40:57.247 11:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:40:57.507 11:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:40:57.507 11:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == 
\d\i\r\t\y ]] 00:40:57.507 11:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:40:57.507 [2024-10-09 11:21:17.499707] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:40:57.767 11:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 928c42a9-43a8-4626-99ee-af1c54b84dbf 00:40:57.767 11:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:40:57.767 11:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 928c42a9-43a8-4626-99ee-af1c54b84dbf 00:40:57.767 11:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:57.767 11:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:40:57.767 11:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:57.767 11:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:40:57.767 11:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:57.767 11:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:40:57.767 11:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:57.767 11:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:40:57.767 11:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 928c42a9-43a8-4626-99ee-af1c54b84dbf 00:40:57.767 request: 00:40:57.767 { 00:40:57.767 "uuid": "928c42a9-43a8-4626-99ee-af1c54b84dbf", 00:40:57.767 "method": "bdev_lvol_get_lvstores", 00:40:57.767 "req_id": 1 00:40:57.767 } 00:40:57.767 Got JSON-RPC error response 00:40:57.767 response: 00:40:57.767 { 00:40:57.767 "code": -19, 00:40:57.767 "message": "No such device" 00:40:57.767 } 00:40:57.767 11:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:40:57.767 11:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:40:57.767 11:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ 
-n '' ]] 00:40:57.767 11:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:40:57.767 11:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:40:58.027 aio_bdev 00:40:58.028 11:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 5f2b9151-8b2a-41f2-aeea-9c12eca1ef44 00:40:58.028 11:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=5f2b9151-8b2a-41f2-aeea-9c12eca1ef44 00:40:58.028 11:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:40:58.028 11:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:40:58.028 11:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:40:58.028 11:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:40:58.028 11:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:40:58.288 11:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 5f2b9151-8b2a-41f2-aeea-9c12eca1ef44 -t 2000 00:40:58.288 [ 00:40:58.288 { 00:40:58.288 "name": "5f2b9151-8b2a-41f2-aeea-9c12eca1ef44", 00:40:58.288 "aliases": [ 00:40:58.288 "lvs/lvol" 00:40:58.288 ], 00:40:58.288 "product_name": "Logical Volume", 00:40:58.288 "block_size": 4096, 00:40:58.288 "num_blocks": 38912, 00:40:58.288 "uuid": "5f2b9151-8b2a-41f2-aeea-9c12eca1ef44", 00:40:58.288 "assigned_rate_limits": { 00:40:58.288 "rw_ios_per_sec": 0, 00:40:58.288 "rw_mbytes_per_sec": 0, 00:40:58.288 "r_mbytes_per_sec": 0, 00:40:58.288 "w_mbytes_per_sec": 0 00:40:58.288 }, 00:40:58.288 "claimed": false, 00:40:58.288 "zoned": false, 00:40:58.288 "supported_io_types": { 00:40:58.288 "read": true, 00:40:58.288 "write": true, 00:40:58.288 "unmap": true, 00:40:58.288 "flush": false, 00:40:58.288 "reset": true, 00:40:58.288 "nvme_admin": false, 00:40:58.288 "nvme_io": false, 00:40:58.288 "nvme_io_md": false, 00:40:58.288 "write_zeroes": true, 00:40:58.288 "zcopy": false, 00:40:58.288 "get_zone_info": false, 00:40:58.288 "zone_management": false, 00:40:58.288 "zone_append": false, 00:40:58.288 "compare": false, 00:40:58.288 "compare_and_write": false, 00:40:58.288 "abort": false, 00:40:58.288 "seek_hole": true, 00:40:58.288 "seek_data": true, 00:40:58.288 "copy": false, 00:40:58.288 "nvme_iov_md": false 00:40:58.288 }, 00:40:58.288 "driver_specific": { 00:40:58.288 "lvol": { 00:40:58.288 "lvol_store_uuid": "928c42a9-43a8-4626-99ee-af1c54b84dbf", 00:40:58.288 "base_bdev": "aio_bdev", 00:40:58.288 "thin_provision": false, 00:40:58.288 "num_allocated_clusters": 38, 00:40:58.288 "snapshot": false, 00:40:58.288 "clone": false, 00:40:58.288 "esnap_clone": false 00:40:58.288 } 00:40:58.288 } 00:40:58.288 
} 00:40:58.288 ] 00:40:58.288 11:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:40:58.288 11:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 928c42a9-43a8-4626-99ee-af1c54b84dbf 00:40:58.288 11:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:40:58.548 11:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:40:58.548 11:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 928c42a9-43a8-4626-99ee-af1c54b84dbf 00:40:58.548 11:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:40:58.809 11:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:40:58.809 11:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 5f2b9151-8b2a-41f2-aeea-9c12eca1ef44 00:40:58.809 11:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 928c42a9-43a8-4626-99ee-af1c54b84dbf 00:40:59.091 11:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:40:59.398 11:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:40:59.398 00:40:59.398 real 0m15.726s 00:40:59.398 user 0m15.231s 00:40:59.398 sys 0m1.467s 00:40:59.398 11:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:59.398 11:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:40:59.398 ************************************ 00:40:59.398 END TEST lvs_grow_clean 00:40:59.398 ************************************ 00:40:59.398 11:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:40:59.398 11:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:40:59.398 11:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:59.398 11:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:40:59.398 ************************************ 00:40:59.398 START TEST lvs_grow_dirty 00:40:59.398 ************************************ 00:40:59.398 11:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 
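lvs_grow_clean has just exercised the grow path itself, and the cluster counts in the trace follow from the 4 MiB cluster size: the 200 MiB backing file is 50 clusters, of which 49 are reported as total_data_clusters (the remainder holds lvstore metadata in this configuration); growing the file to 400 MiB doubles that to 99, and free_clusters lands at 61 because the lvol pins 38 (99 - 38 = 61). The grow sequence, condensed from the trace:

  truncate -s 400M test/nvmf/target/aio_bdev   # 200 MiB -> 400 MiB
  rpc.py bdev_aio_rescan aio_bdev              # block count 51200 -> 102400
  rpc.py bdev_lvol_grow_lvstore -u "$lvs"      # lvstore picks up the new size
  rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'
  # 49 before the grow, 99 after; free_clusters = 99 - 38 = 61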
00:40:59.398 11:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:40:59.398 11:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:40:59.398 11:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:40:59.398 11:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:40:59.398 11:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:40:59.398 11:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:40:59.398 11:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:40:59.398 11:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:40:59.398 11:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:40:59.659 11:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:40:59.659 11:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:40:59.659 11:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=287dca51-da0c-4608-9694-2780794fd710 00:40:59.659 11:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 287dca51-da0c-4608-9694-2780794fd710 00:40:59.659 11:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:40:59.919 11:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:40:59.919 11:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:40:59.919 11:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 287dca51-da0c-4608-9694-2780794fd710 lvol 150 00:41:00.179 11:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=b3ac19da-2b9b-47ea-8f26-6e761af3fd7e 00:41:00.179 11:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:41:00.180 11:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:41:00.180 [2024-10-09 11:21:20.147519] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:41:00.180 [2024-10-09 11:21:20.147588] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:41:00.180 true 00:41:00.180 11:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 287dca51-da0c-4608-9694-2780794fd710 00:41:00.180 11:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:41:00.440 11:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:41:00.440 11:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:41:00.700 11:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b3ac19da-2b9b-47ea-8f26-6e761af3fd7e 00:41:00.700 11:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:00.961 [2024-10-09 11:21:20.832238] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:00.961 11:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:41:01.221 11:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2177789 00:41:01.221 11:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:41:01.221 11:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:41:01.221 11:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2177789 /var/tmp/bdevperf.sock 00:41:01.221 11:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 2177789 ']' 00:41:01.221 11:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:41:01.221 11:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:41:01.221 11:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:41:01.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:41:01.221 11:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:41:01.221 11:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:41:01.221 [2024-10-09 11:21:21.059144] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:41:01.221 [2024-10-09 11:21:21.059215] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2177789 ] 00:41:01.221 [2024-10-09 11:21:21.193217] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:41:01.481 [2024-10-09 11:21:21.240161] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:01.481 [2024-10-09 11:21:21.257358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:02.051 11:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:41:02.051 11:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:41:02.051 11:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:41:02.312 Nvme0n1 00:41:02.312 11:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:41:02.572 [ 00:41:02.572 { 00:41:02.572 "name": "Nvme0n1", 00:41:02.572 "aliases": [ 00:41:02.572 "b3ac19da-2b9b-47ea-8f26-6e761af3fd7e" 00:41:02.572 ], 00:41:02.572 "product_name": "NVMe disk", 00:41:02.572 "block_size": 4096, 00:41:02.572 "num_blocks": 38912, 00:41:02.572 "uuid": "b3ac19da-2b9b-47ea-8f26-6e761af3fd7e", 00:41:02.572 "numa_id": 0, 00:41:02.572 "assigned_rate_limits": { 00:41:02.572 "rw_ios_per_sec": 0, 00:41:02.572 "rw_mbytes_per_sec": 0, 00:41:02.572 "r_mbytes_per_sec": 0, 00:41:02.572 "w_mbytes_per_sec": 0 00:41:02.572 }, 00:41:02.572 "claimed": false, 00:41:02.572 "zoned": false, 00:41:02.572 "supported_io_types": { 00:41:02.572 "read": true, 00:41:02.572 "write": true, 00:41:02.572 "unmap": true, 00:41:02.572 "flush": true, 00:41:02.572 "reset": true, 00:41:02.572 "nvme_admin": true, 00:41:02.572 "nvme_io": true, 00:41:02.572 "nvme_io_md": false, 00:41:02.572 "write_zeroes": true, 00:41:02.572 "zcopy": false, 00:41:02.572 "get_zone_info": false, 00:41:02.572 "zone_management": false, 00:41:02.572 "zone_append": false, 
00:41:02.572 "compare": true, 00:41:02.572 "compare_and_write": true, 00:41:02.572 "abort": true, 00:41:02.572 "seek_hole": false, 00:41:02.572 "seek_data": false, 00:41:02.572 "copy": true, 00:41:02.572 "nvme_iov_md": false 00:41:02.572 }, 00:41:02.572 "memory_domains": [ 00:41:02.572 { 00:41:02.572 "dma_device_id": "system", 00:41:02.572 "dma_device_type": 1 00:41:02.572 } 00:41:02.572 ], 00:41:02.572 "driver_specific": { 00:41:02.572 "nvme": [ 00:41:02.572 { 00:41:02.572 "trid": { 00:41:02.572 "trtype": "TCP", 00:41:02.572 "adrfam": "IPv4", 00:41:02.572 "traddr": "10.0.0.2", 00:41:02.572 "trsvcid": "4420", 00:41:02.572 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:41:02.572 }, 00:41:02.572 "ctrlr_data": { 00:41:02.572 "cntlid": 1, 00:41:02.572 "vendor_id": "0x8086", 00:41:02.572 "model_number": "SPDK bdev Controller", 00:41:02.572 "serial_number": "SPDK0", 00:41:02.572 "firmware_revision": "25.01", 00:41:02.572 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:02.572 "oacs": { 00:41:02.572 "security": 0, 00:41:02.572 "format": 0, 00:41:02.572 "firmware": 0, 00:41:02.572 "ns_manage": 0 00:41:02.572 }, 00:41:02.572 "multi_ctrlr": true, 00:41:02.572 "ana_reporting": false 00:41:02.572 }, 00:41:02.572 "vs": { 00:41:02.572 "nvme_version": "1.3" 00:41:02.572 }, 00:41:02.572 "ns_data": { 00:41:02.572 "id": 1, 00:41:02.572 "can_share": true 00:41:02.572 } 00:41:02.572 } 00:41:02.572 ], 00:41:02.572 "mp_policy": "active_passive" 00:41:02.572 } 00:41:02.572 } 00:41:02.572 ] 00:41:02.572 11:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:41:02.572 11:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2178039 00:41:02.573 11:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:41:02.573 Running I/O for 10 seconds... 
00:41:03.514 Latency(us)
00:41:03.514 [2024-10-09T09:21:23.516Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:41:03.514 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:41:03.514 Nvme0n1 : 1.00 17746.00 69.32 0.00 0.00 0.00 0.00 0.00
00:41:03.514 [2024-10-09T09:21:23.516Z] ===================================================================================================================
00:41:03.514 [2024-10-09T09:21:23.516Z] Total : 17746.00 69.32 0.00 0.00 0.00 0.00 0.00
00:41:03.514
00:41:04.455 11:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 287dca51-da0c-4608-9694-2780794fd710
00:41:04.716 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:41:04.716 Nvme0n1 : 2.00 17824.50 69.63 0.00 0.00 0.00 0.00 0.00
00:41:04.716 [2024-10-09T09:21:24.718Z] ===================================================================================================================
00:41:04.716 [2024-10-09T09:21:24.718Z] Total : 17824.50 69.63 0.00 0.00 0.00 0.00 0.00
00:41:04.716
00:41:04.716 true
00:41:04.716 11:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 287dca51-da0c-4608-9694-2780794fd710
00:41:04.716 11:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters'
00:41:04.976 11:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99
00:41:04.976 11:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 ))
00:41:04.976 11:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2178039
00:41:05.546 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:41:05.546 Nvme0n1 : 3.00 17856.33 69.75 0.00 0.00 0.00 0.00 0.00
00:41:05.546 [2024-10-09T09:21:25.548Z] ===================================================================================================================
00:41:05.546 [2024-10-09T09:21:25.548Z] Total : 17856.33 69.75 0.00 0.00 0.00 0.00 0.00
00:41:05.546
00:41:06.933 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:41:06.933 Nvme0n1 : 4.00 17888.50 69.88 0.00 0.00 0.00 0.00 0.00
00:41:06.933 [2024-10-09T09:21:26.935Z] ===================================================================================================================
00:41:06.933 [2024-10-09T09:21:26.935Z] Total : 17888.50 69.88 0.00 0.00 0.00 0.00 0.00
00:41:06.933
00:41:07.503 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:41:07.503 Nvme0n1 : 5.00 17907.40 69.95 0.00 0.00 0.00 0.00 0.00
00:41:07.503 [2024-10-09T09:21:27.505Z] ===================================================================================================================
00:41:07.503 [2024-10-09T09:21:27.505Z] Total : 17907.40 69.95 0.00 0.00 0.00 0.00 0.00
00:41:07.503
00:41:08.885 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:41:08.885 Nvme0n1 : 6.00 17930.83 70.04 0.00 0.00 0.00 0.00 0.00
00:41:08.885 [2024-10-09T09:21:28.887Z] ===================================================================================================================
00:41:08.885 [2024-10-09T09:21:28.887Z] Total : 17930.83 70.04 0.00 0.00 0.00 0.00 0.00
00:41:08.885
00:41:09.827 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:41:09.827 Nvme0n1 : 7.00 17938.57 70.07 0.00 0.00 0.00 0.00 0.00
00:41:09.827 [2024-10-09T09:21:29.829Z] ===================================================================================================================
00:41:09.827 [2024-10-09T09:21:29.829Z] Total : 17938.57 70.07 0.00 0.00 0.00 0.00 0.00
00:41:09.827
00:41:10.769 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:41:10.769 Nvme0n1 : 8.00 17952.25 70.13 0.00 0.00 0.00 0.00 0.00
00:41:10.769 [2024-10-09T09:21:30.771Z] ===================================================================================================================
00:41:10.769 [2024-10-09T09:21:30.771Z] Total : 17952.25 70.13 0.00 0.00 0.00 0.00 0.00
00:41:10.769
00:41:11.710 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:41:11.710 Nvme0n1 : 9.00 17962.89 70.17 0.00 0.00 0.00 0.00 0.00
00:41:11.710 [2024-10-09T09:21:31.712Z] ===================================================================================================================
00:41:11.710 [2024-10-09T09:21:31.712Z] Total : 17962.89 70.17 0.00 0.00 0.00 0.00 0.00
00:41:11.710
00:41:12.652 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:41:12.652 Nvme0n1 : 10.00 17971.40 70.20 0.00 0.00 0.00 0.00 0.00
00:41:12.652 [2024-10-09T09:21:32.654Z] ===================================================================================================================
00:41:12.652 [2024-10-09T09:21:32.654Z] Total : 17971.40 70.20 0.00 0.00 0.00 0.00 0.00
00:41:12.652
00:41:12.652
00:41:12.652 Latency(us)
00:41:12.652 [2024-10-09T09:21:32.654Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:41:12.652 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:41:12.652 Nvme0n1 : 10.01 17972.52 70.21 0.00 0.00 7118.17 2039.10 13192.60
00:41:12.652 [2024-10-09T09:21:32.654Z] ===================================================================================================================
00:41:12.652 [2024-10-09T09:21:32.654Z] Total : 17972.52 70.21 0.00 0.00 7118.17 2039.10 13192.60
00:41:12.652 {
00:41:12.652 "results": [
00:41:12.652 {
00:41:12.652 "job": "Nvme0n1",
00:41:12.652 "core_mask": "0x2",
00:41:12.652 "workload": "randwrite",
00:41:12.652 "status": "finished",
00:41:12.652 "queue_depth": 128,
00:41:12.652 "io_size": 4096,
00:41:12.652 "runtime": 10.006501,
00:41:12.652 "iops": 17972.516067304645,
00:41:12.652 "mibps": 70.20514088790877,
00:41:12.652 "io_failed": 0,
00:41:12.652 "io_timeout": 0,
00:41:12.652 "avg_latency_us": 7118.168612740753,
00:41:12.652 "min_latency_us": 2039.1045773471433,
00:41:12.652 "max_latency_us": 13192.596057467425
00:41:12.652 }
00:41:12.652 ],
00:41:12.653 "core_count": 1
00:41:12.653 }
00:41:12.653 11:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2177789
00:41:12.653 11:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 2177789 ']'
00:41:12.653 11:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 2177789
00:41:12.653 11:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname
00:41:12.653 11:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:41:12.653 11:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2177789
00:41:12.653 11:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:41:12.653 11:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:41:12.653 11:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2177789'
killing process with pid 2177789
11:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 2177789
00:41:12.653 Received shutdown signal, test time was about 10.000000 seconds
00:41:12.653
00:41:12.653 Latency(us)
00:41:12.653 [2024-10-09T09:21:32.655Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:41:12.653 [2024-10-09T09:21:32.655Z] ===================================================================================================================
00:41:12.653 [2024-10-09T09:21:32.655Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:41:12.653 11:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 2177789
00:41:12.914 11:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:41:12.914 11:21:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:41:13.175 11:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 287dca51-da0c-4608-9694-2780794fd710
00:41:13.175 11:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters'
00:41:13.436 11:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61
00:41:13.436 11:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]]
00:41:13.436 11:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2173726
00:41:13.436 11:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2173726
00:41:13.436 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2173726 Killed "${NVMF_APP[@]}" "$@"
00:41:13.436 11:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true
00:41:13.436 11:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1
00:41:13.436 11:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:41:13.436 11:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable
00:41:13.436 11:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x
00:41:13.436 11:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # nvmfpid=2180045
00:41:13.436 11:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # waitforlisten 2180045
00:41:13.436 11:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1
00:41:13.436 11:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 2180045 ']'
00:41:13.436 11:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:41:13.436 11:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100
00:41:13.436 11:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:41:13.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:41:13.436 11:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable
00:41:13.436 11:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x
00:41:13.696 [2024-10-09 11:21:33.348601] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:41:13.696 [2024-10-09 11:21:33.349628] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization...
00:41:13.696 [2024-10-09 11:21:33.349674] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:41:13.696 [2024-10-09 11:21:33.487239] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation.
00:41:13.696 [2024-10-09 11:21:33.518451] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:41:13.696 [2024-10-09 11:21:33.535363] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:41:13.696 [2024-10-09 11:21:33.535393] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:41:13.696 [2024-10-09 11:21:33.535401] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:41:13.696 [2024-10-09 11:21:33.535408] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
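The waitforlisten step above simply blocks until the freshly started target answers on its RPC socket. A rough bash equivalent (socket path and target arguments taken from the log; the polling loop is a sketch, not the harness's own code):

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &
  nvmfpid=$!
  # poll a cheap RPC until the UNIX domain socket accepts requests
  until ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
      sleep 0.2
  done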
00:41:13.696 [2024-10-09 11:21:33.535414] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:13.696 [2024-10-09 11:21:33.535928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:13.696 [2024-10-09 11:21:33.583526] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:41:13.696 [2024-10-09 11:21:33.583775] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:41:14.268 11:21:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:41:14.268 11:21:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:41:14.268 11:21:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:41:14.268 11:21:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:41:14.268 11:21:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:41:14.268 11:21:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:14.268 11:21:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:41:14.529 [2024-10-09 11:21:34.326974] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:41:14.529 [2024-10-09 11:21:34.327077] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:41:14.529 [2024-10-09 11:21:34.327109] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:41:14.529 11:21:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:41:14.529 11:21:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev b3ac19da-2b9b-47ea-8f26-6e761af3fd7e 00:41:14.529 11:21:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=b3ac19da-2b9b-47ea-8f26-6e761af3fd7e 00:41:14.529 11:21:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:41:14.529 11:21:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:41:14.529 11:21:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:41:14.529 11:21:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:41:14.529 11:21:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:41:14.529 11:21:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b b3ac19da-2b9b-47ea-8f26-6e761af3fd7e -t 2000
00:41:14.790 [
00:41:14.790 {
00:41:14.790 "name": "b3ac19da-2b9b-47ea-8f26-6e761af3fd7e",
00:41:14.790 "aliases": [
00:41:14.790 "lvs/lvol"
00:41:14.790 ],
00:41:14.790 "product_name": "Logical Volume",
00:41:14.790 "block_size": 4096,
00:41:14.790 "num_blocks": 38912,
00:41:14.790 "uuid": "b3ac19da-2b9b-47ea-8f26-6e761af3fd7e",
00:41:14.790 "assigned_rate_limits": {
00:41:14.790 "rw_ios_per_sec": 0,
00:41:14.790 "rw_mbytes_per_sec": 0,
00:41:14.790 "r_mbytes_per_sec": 0,
00:41:14.790 "w_mbytes_per_sec": 0
00:41:14.790 },
00:41:14.790 "claimed": false,
00:41:14.790 "zoned": false,
00:41:14.790 "supported_io_types": {
00:41:14.790 "read": true,
00:41:14.790 "write": true,
00:41:14.790 "unmap": true,
00:41:14.790 "flush": false,
00:41:14.790 "reset": true,
00:41:14.790 "nvme_admin": false,
00:41:14.790 "nvme_io": false,
00:41:14.790 "nvme_io_md": false,
00:41:14.790 "write_zeroes": true,
00:41:14.790 "zcopy": false,
00:41:14.790 "get_zone_info": false,
00:41:14.790 "zone_management": false,
00:41:14.790 "zone_append": false,
00:41:14.790 "compare": false,
00:41:14.790 "compare_and_write": false,
00:41:14.790 "abort": false,
00:41:14.790 "seek_hole": true,
00:41:14.790 "seek_data": true,
00:41:14.790 "copy": false,
00:41:14.790 "nvme_iov_md": false
00:41:14.790 },
00:41:14.790 "driver_specific": {
00:41:14.790 "lvol": {
00:41:14.790 "lvol_store_uuid": "287dca51-da0c-4608-9694-2780794fd710",
00:41:14.790 "base_bdev": "aio_bdev",
00:41:14.790 "thin_provision": false,
00:41:14.790 "num_allocated_clusters": 38,
00:41:14.790 "snapshot": false,
00:41:14.790 "clone": false,
00:41:14.790 "esnap_clone": false
00:41:14.790 }
00:41:14.790 }
00:41:14.790 }
00:41:14.790 ]
00:41:14.790 11:21:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0
00:41:14.790 11:21:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 287dca51-da0c-4608-9694-2780794fd710
00:41:14.790 11:21:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters'
00:41:15.051 11:21:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 ))
00:41:15.051 11:21:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 287dca51-da0c-4608-9694-2780794fd710
00:41:15.051 11:21:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters'
00:41:15.312 11:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 ))
00:41:15.312 11:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev
00:41:15.313 [2024-10-09 11:21:35.216356] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs
00:41:15.313 11:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 287dca51-da0c-4608-9694-2780794fd710
00:41:15.313 11:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0
00:41:15.313 11:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 287dca51-da0c-4608-9694-2780794fd710
00:41:15.313 11:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:41:15.313 11:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:41:15.313 11:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:41:15.313 11:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:41:15.313 11:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:41:15.313 11:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:41:15.313 11:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:41:15.313 11:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]]
00:41:15.313 11:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 287dca51-da0c-4608-9694-2780794fd710
00:41:15.574 request:
00:41:15.574 {
00:41:15.574 "uuid": "287dca51-da0c-4608-9694-2780794fd710",
00:41:15.574 "method": "bdev_lvol_get_lvstores",
00:41:15.574 "req_id": 1
00:41:15.574 }
00:41:15.574 Got JSON-RPC error response
00:41:15.574 response:
00:41:15.574 {
00:41:15.574 "code": -19,
00:41:15.574 "message": "No such device"
00:41:15.574 }
00:41:15.574 11:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1
00:41:15.574 11:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:41:15.574 11:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:41:15.574 11:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:41:15.574 11:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:41:15.835 aio_bdev
00:41:15.835 11:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev b3ac19da-2b9b-47ea-8f26-6e761af3fd7e
00:41:15.835 11:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=b3ac19da-2b9b-47ea-8f26-6e761af3fd7e
00:41:15.835 11:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:41:15.835 11:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i
00:41:15.835 11:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:41:15.835 11:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:41:15.835 11:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine
00:41:15.835 11:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b b3ac19da-2b9b-47ea-8f26-6e761af3fd7e -t 2000
00:41:16.104 [
00:41:16.104 {
00:41:16.104 "name": "b3ac19da-2b9b-47ea-8f26-6e761af3fd7e",
00:41:16.104 "aliases": [
00:41:16.104 "lvs/lvol"
00:41:16.104 ],
00:41:16.104 "product_name": "Logical Volume",
00:41:16.104 "block_size": 4096,
00:41:16.104 "num_blocks": 38912,
00:41:16.104 "uuid": "b3ac19da-2b9b-47ea-8f26-6e761af3fd7e",
00:41:16.104 "assigned_rate_limits": {
00:41:16.104 "rw_ios_per_sec": 0,
00:41:16.104 "rw_mbytes_per_sec": 0,
00:41:16.104 "r_mbytes_per_sec": 0,
00:41:16.104 "w_mbytes_per_sec": 0
00:41:16.104 },
00:41:16.104 "claimed": false,
00:41:16.105 "zoned": false,
00:41:16.105 "supported_io_types": {
00:41:16.105 "read": true,
00:41:16.105 "write": true,
00:41:16.105 "unmap": true,
00:41:16.105 "flush": false,
00:41:16.105 "reset": true,
00:41:16.105 "nvme_admin": false,
00:41:16.105 "nvme_io": false,
00:41:16.105 "nvme_io_md": false,
00:41:16.105 "write_zeroes": true,
00:41:16.105 "zcopy": false,
00:41:16.105 "get_zone_info": false,
00:41:16.105 "zone_management": false,
00:41:16.105 "zone_append": false,
00:41:16.105 "compare": false,
00:41:16.105 "compare_and_write": false,
00:41:16.105 "abort": false,
00:41:16.105 "seek_hole": true,
00:41:16.105 "seek_data": true,
00:41:16.105 "copy": false,
00:41:16.105 "nvme_iov_md": false
00:41:16.105 },
00:41:16.105 "driver_specific": {
00:41:16.105 "lvol": {
00:41:16.105 "lvol_store_uuid": "287dca51-da0c-4608-9694-2780794fd710",
00:41:16.105 "base_bdev": "aio_bdev",
00:41:16.105 "thin_provision": false,
00:41:16.105 "num_allocated_clusters": 38,
00:41:16.105 "snapshot": false,
00:41:16.105 "clone": false,
00:41:16.105 "esnap_clone": false
00:41:16.105 }
00:41:16.105 }
00:41:16.105 }
00:41:16.105 ]
00:41:16.105 11:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0
00:41:16.105 11:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 287dca51-da0c-4608-9694-2780794fd710
00:41:16.105 11:21:35
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:41:16.370 11:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:41:16.370 11:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:41:16.370 11:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 287dca51-da0c-4608-9694-2780794fd710 00:41:16.370 11:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:41:16.370 11:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b3ac19da-2b9b-47ea-8f26-6e761af3fd7e 00:41:16.630 11:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 287dca51-da0c-4608-9694-2780794fd710 00:41:16.890 11:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:41:16.890 11:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:41:17.150 00:41:17.150 real 0m17.619s 00:41:17.150 user 0m35.106s 00:41:17.150 sys 0m3.109s 00:41:17.150 11:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:17.150 11:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:41:17.150 ************************************ 00:41:17.150 END TEST lvs_grow_dirty 00:41:17.150 ************************************ 00:41:17.150 11:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:41:17.150 11:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:41:17.150 11:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:41:17.150 11:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:41:17.150 11:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:41:17.150 11:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:41:17.150 11:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:41:17.150 11:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:41:17.150 11:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:41:17.150 nvmf_trace.0 00:41:17.150 11:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:41:17.150 11:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:41:17.150 11:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@514 -- # nvmfcleanup 00:41:17.150 11:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:41:17.150 11:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:17.150 11:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:41:17.150 11:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:17.150 11:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:17.150 rmmod nvme_tcp 00:41:17.150 rmmod nvme_fabrics 00:41:17.150 rmmod nvme_keyring 00:41:17.150 11:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:17.150 11:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:41:17.150 11:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:41:17.150 11:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@515 -- # '[' -n 2180045 ']' 00:41:17.150 11:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # killprocess 2180045 00:41:17.150 11:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 2180045 ']' 00:41:17.150 11:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 2180045 00:41:17.150 11:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:41:17.150 11:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:41:17.150 11:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2180045 00:41:17.150 11:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:41:17.150 11:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:41:17.150 11:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2180045' 00:41:17.150 killing process with pid 2180045 00:41:17.409 11:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 2180045 00:41:17.409 11:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 2180045 00:41:17.409 11:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:41:17.409 11:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:41:17.409 11:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@522 -- # nvmf_tcp_fini 
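The cleanup just performed (trace capture, target shutdown, transport module unload) can be approximated by hand as follows; $nvmfpid is a placeholder for the target's PID, and the module and trace names are the ones appearing in the log:

  tar -C /dev/shm -czf nvmf_trace.0_shm.tar.gz nvmf_trace.0   # keep the shared-memory trace for offline analysis
  kill "$nvmfpid" && wait "$nvmfpid" 2>/dev/null
  modprobe -v -r nvme-tcp nvme-fabrics nvme-keyring            # mirrors the rmmod output above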
00:41:17.409 11:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:41:17.409 11:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-save 00:41:17.409 11:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:41:17.410 11:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-restore 00:41:17.410 11:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:17.410 11:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:17.410 11:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:17.410 11:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:17.410 11:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:19.954 11:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:19.954 00:41:19.954 real 0m44.428s 00:41:19.954 user 0m53.198s 00:41:19.954 sys 0m10.419s 00:41:19.954 11:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:19.954 11:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:41:19.954 ************************************ 00:41:19.954 END TEST nvmf_lvs_grow 00:41:19.954 ************************************ 00:41:19.954 11:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:41:19.954 11:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:41:19.954 11:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:19.954 11:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:41:19.954 ************************************ 00:41:19.954 START TEST nvmf_bdev_io_wait 00:41:19.954 ************************************ 00:41:19.954 11:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:41:19.954 * Looking for test storage... 
00:41:19.954 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:19.954 11:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:41:19.954 11:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:41:19.954 11:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:41:19.954 11:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:41:19.954 11:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:19.954 11:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:19.954 11:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:19.954 11:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:41:19.954 11:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:41:19.954 11:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:41:19.954 11:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:41:19.954 11:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:41:19.954 11:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:41:19.954 11:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:41:19.954 11:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:19.954 11:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:41:19.954 11:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:41:19.954 11:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:19.954 11:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:19.954 11:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:41:19.954 11:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:41:19.954 11:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:19.954 11:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:41:19.954 11:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:41:19.954 11:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:41:19.954 11:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:41:19.954 11:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:19.954 11:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:41:19.954 11:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:41:19.954 11:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:19.954 11:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:19.954 11:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:41:19.954 11:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:19.954 11:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:41:19.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:19.954 --rc genhtml_branch_coverage=1 00:41:19.954 --rc genhtml_function_coverage=1 00:41:19.954 --rc genhtml_legend=1 00:41:19.954 --rc geninfo_all_blocks=1 00:41:19.954 --rc geninfo_unexecuted_blocks=1 00:41:19.954 00:41:19.954 ' 00:41:19.954 11:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:41:19.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:19.954 --rc genhtml_branch_coverage=1 00:41:19.954 --rc genhtml_function_coverage=1 00:41:19.954 --rc genhtml_legend=1 00:41:19.954 --rc geninfo_all_blocks=1 00:41:19.954 --rc geninfo_unexecuted_blocks=1 00:41:19.954 00:41:19.954 ' 00:41:19.954 11:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:41:19.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:19.954 --rc genhtml_branch_coverage=1 00:41:19.954 --rc genhtml_function_coverage=1 00:41:19.954 --rc genhtml_legend=1 00:41:19.954 --rc geninfo_all_blocks=1 00:41:19.954 --rc geninfo_unexecuted_blocks=1 00:41:19.954 00:41:19.954 ' 00:41:19.954 11:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:41:19.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:19.954 --rc genhtml_branch_coverage=1 00:41:19.954 --rc genhtml_function_coverage=1 00:41:19.954 --rc genhtml_legend=1 00:41:19.954 --rc geninfo_all_blocks=1 00:41:19.954 --rc 
geninfo_unexecuted_blocks=1 00:41:19.954 00:41:19.954 ' 00:41:19.954 11:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:19.954 11:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:41:19.954 11:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:19.954 11:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:19.954 11:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:19.955 11:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:19.955 11:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:19.955 11:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:19.955 11:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:19.955 11:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:19.955 11:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:19.955 11:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:19.955 11:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:41:19.955 11:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:41:19.955 11:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:19.955 11:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:19.955 11:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:19.955 11:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:19.955 11:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:19.955 11:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:41:19.955 11:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:19.955 11:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:19.955 11:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:19.955 11:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:19.955 11:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:19.955 11:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:19.955 11:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:41:19.955 11:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:19.955 11:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:41:19.955 11:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:19.955 11:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:19.955 11:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:19.955 11:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:19.955 11:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:41:19.955 11:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:41:19.955 11:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:41:19.955 11:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:19.955 11:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:19.955 11:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:19.955 11:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:41:19.955 11:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:41:19.955 11:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:41:19.955 11:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:41:19.955 11:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:19.955 11:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # prepare_net_devs 00:41:19.955 11:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # local -g is_hw=no 00:41:19.955 11:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # remove_spdk_ns 00:41:19.955 11:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:19.955 11:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:19.955 11:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:19.955 11:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:41:19.955 11:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:41:19.955 11:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:41:19.955 11:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:28.102 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:28.102 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:41:28.102 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:28.102 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:28.102 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:28.102 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:28.102 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:41:28.102 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:41:28.102 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:28.102 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:41:28.102 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:41:28.102 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:41:28.102 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:41:28.102 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:41:28.102 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:41:28.102 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:28.102 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:28.102 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:28.102 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:28.102 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:28.102 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:28.102 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:28.102 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:28.102 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:28.102 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:28.102 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:28.102 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:28.102 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:28.102 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:28.102 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:28.102 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:28.102 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:28.102 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
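The discovery loop above pairs each supported PCI function with its kernel net device by listing the device's net/ directory in sysfs; the same lookup can be done directly (BDF taken from the log, root not required):

  ls /sys/bus/pci/devices/0000:31:00.0/net
  # prints the backing interface name, cvl_0_0 in the output below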
00:41:28.102 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:28.102 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:41:28.102 Found 0000:31:00.0 (0x8086 - 0x159b) 00:41:28.102 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:28.102 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:28.102 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:28.102 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:28.102 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:28.102 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:28.102 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:41:28.102 Found 0000:31:00.1 (0x8086 - 0x159b) 00:41:28.102 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:28.102 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:28.102 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:28.102 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:28.102 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:28.102 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:28.102 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:28.102 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:28.102 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:41:28.102 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:28.102 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:41:28.102 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:28.102 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:41:28.102 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:41:28.102 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:28.102 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:41:28.102 Found net devices under 0000:31:00.0: cvl_0_0 00:41:28.102 
11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:41:28.102 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:41:28.102 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:28.102 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:41:28.102 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:28.102 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:41:28.102 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:41:28.102 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:28.102 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:41:28.102 Found net devices under 0000:31:00.1: cvl_0_1 00:41:28.102 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:41:28.102 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:41:28.102 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # is_hw=yes 00:41:28.102 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:41:28.102 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:41:28.102 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:41:28.102 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:28.102 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:28.102 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:28.102 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:28.102 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:28.102 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:28.102 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:28.102 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:28.102 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:28.102 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:28.103 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:28.103 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:28.103 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:28.103 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:28.103 11:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:28.103 11:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:28.103 11:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:28.103 11:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:28.103 11:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:28.103 11:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:28.103 11:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:28.103 11:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:28.103 11:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:28.103 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:28.103 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.614 ms 00:41:28.103 00:41:28.103 --- 10.0.0.2 ping statistics --- 00:41:28.103 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:28.103 rtt min/avg/max/mdev = 0.614/0.614/0.614/0.000 ms 00:41:28.103 11:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:28.103 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:28.103 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:41:28.103 00:41:28.103 --- 10.0.0.1 ping statistics --- 00:41:28.103 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:28.103 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:41:28.103 11:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:28.103 11:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # return 0 00:41:28.103 11:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:41:28.103 11:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:28.103 11:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:41:28.103 11:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:41:28.103 11:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:28.103 11:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:41:28.103 11:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:41:28.103 11:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:41:28.103 11:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:41:28.103 11:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:41:28.103 11:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:28.103 11:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # nvmfpid=2184945 00:41:28.103 11:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # waitforlisten 2184945 00:41:28.103 11:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:41:28.103 11:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 2184945 ']' 00:41:28.103 11:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:28.103 11:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:41:28.103 11:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:28.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
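The namespace plumbing traced above (nvmf/common.sh@265-291) gives the target-side port its own network namespace so initiator and target traffic traverses the real NICs on a single machine. Condensed from the commands in the trace (the iptables comment body is elided here); the SPDK_NVMF tag on the firewall rule is what the teardown later greps out of iptables-save before restoring:

NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                     # target port moves into the ns
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side stays in the root ns
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:...'              # tagged so teardown can strip it
ping -c 1 10.0.0.2                                  # root ns -> namespaced target
ip netns exec "$NS" ping -c 1 10.0.0.1              # namespaced target -> root ns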
00:41:28.103 11:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:41:28.103 11:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:28.103 [2024-10-09 11:21:47.322684] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:41:28.103 [2024-10-09 11:21:47.323805] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:41:28.103 [2024-10-09 11:21:47.323852] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:28.103 [2024-10-09 11:21:47.464051] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:41:28.103 [2024-10-09 11:21:47.496626] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:28.103 [2024-10-09 11:21:47.521490] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:28.103 [2024-10-09 11:21:47.521526] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:28.103 [2024-10-09 11:21:47.521534] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:28.103 [2024-10-09 11:21:47.521541] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:28.103 [2024-10-09 11:21:47.521547] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:28.103 [2024-10-09 11:21:47.523257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:28.103 [2024-10-09 11:21:47.523399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:41:28.103 [2024-10-09 11:21:47.523778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:41:28.103 [2024-10-09 11:21:47.523779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:28.103 [2024-10-09 11:21:47.524173] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
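waitforlisten, whose trace brackets the startup notices above, blocks until the freshly launched nvmf_tgt answers on /var/tmp/spdk.sock. A hedged sketch of the idea; the polling loop and the rpc_get_methods probe are assumptions about the helper's internals, not read from this trace:

# Poll until the target's RPC socket responds or the process dies.
waitforlisten_sketch() {
  local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
  for ((i = 0; i < 100; i++)); do
    kill -0 "$pid" 2>/dev/null || return 1             # target exited early
    scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
    sleep 0.1
  done
  return 1                                             # timed out
}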
00:41:28.364 11:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:41:28.364 11:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:41:28.364 11:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:41:28.364 11:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:41:28.364 11:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:28.364 11:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:28.364 11:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:41:28.364 11:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:28.364 11:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:28.364 11:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:28.364 11:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:41:28.364 11:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:28.364 11:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:28.364 [2024-10-09 11:21:48.240862] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:41:28.364 [2024-10-09 11:21:48.241131] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:41:28.364 [2024-10-09 11:21:48.241846] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:41:28.364 [2024-10-09 11:21:48.241917] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
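Because the target was started with --wait-for-rpc, it idles until configured over RPC: bdev options must land before framework_start_init, and the transport, bdev, subsystem, namespace, and listener follow, as the rpc_cmd calls here and just below show. The same sequence as plain rpc.py invocations, a sketch assuming the default /var/tmp/spdk.sock socket:

RPC=scripts/rpc.py
$RPC bdev_set_options -p 5 -c 1          # must precede framework_start_init
$RPC framework_start_init
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420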
00:41:28.364 11:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:28.364 11:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:41:28.364 11:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:28.364 11:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:28.364 [2024-10-09 11:21:48.252403] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:28.364 11:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:28.364 11:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:41:28.364 11:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:28.364 11:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:28.364 Malloc0 00:41:28.364 11:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:28.364 11:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:41:28.364 11:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:28.364 11:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:28.364 11:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:28.364 11:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:41:28.364 11:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:28.364 11:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:28.364 11:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:28.364 11:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:28.364 11:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:28.364 11:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:28.364 [2024-10-09 11:21:48.316526] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:28.364 11:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:28.364 11:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2185286 00:41:28.364 11:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2185289 00:41:28.364 11:21:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:41:28.364 11:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:41:28.364 11:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:41:28.364 11:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:41:28.364 11:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:41:28.364 11:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:41:28.364 { 00:41:28.364 "params": { 00:41:28.364 "name": "Nvme$subsystem", 00:41:28.365 "trtype": "$TEST_TRANSPORT", 00:41:28.365 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:28.365 "adrfam": "ipv4", 00:41:28.365 "trsvcid": "$NVMF_PORT", 00:41:28.365 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:28.365 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:28.365 "hdgst": ${hdgst:-false}, 00:41:28.365 "ddgst": ${ddgst:-false} 00:41:28.365 }, 00:41:28.365 "method": "bdev_nvme_attach_controller" 00:41:28.365 } 00:41:28.365 EOF 00:41:28.365 )") 00:41:28.365 11:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2185292 00:41:28.365 11:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:41:28.365 11:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:41:28.365 11:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:41:28.365 11:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:41:28.365 11:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:41:28.365 11:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2185296 00:41:28.365 11:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:41:28.365 { 00:41:28.365 "params": { 00:41:28.365 "name": "Nvme$subsystem", 00:41:28.365 "trtype": "$TEST_TRANSPORT", 00:41:28.365 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:28.365 "adrfam": "ipv4", 00:41:28.365 "trsvcid": "$NVMF_PORT", 00:41:28.365 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:28.365 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:28.365 "hdgst": ${hdgst:-false}, 00:41:28.365 "ddgst": ${ddgst:-false} 00:41:28.365 }, 00:41:28.365 "method": "bdev_nvme_attach_controller" 00:41:28.365 } 00:41:28.365 EOF 00:41:28.365 )") 00:41:28.365 11:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:41:28.365 11:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # 
gen_nvmf_target_json 00:41:28.365 11:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:41:28.365 11:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:41:28.365 11:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:41:28.365 11:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:41:28.365 11:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:41:28.365 11:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:41:28.365 { 00:41:28.365 "params": { 00:41:28.365 "name": "Nvme$subsystem", 00:41:28.365 "trtype": "$TEST_TRANSPORT", 00:41:28.365 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:28.365 "adrfam": "ipv4", 00:41:28.365 "trsvcid": "$NVMF_PORT", 00:41:28.365 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:28.365 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:28.365 "hdgst": ${hdgst:-false}, 00:41:28.365 "ddgst": ${ddgst:-false} 00:41:28.365 }, 00:41:28.365 "method": "bdev_nvme_attach_controller" 00:41:28.365 } 00:41:28.365 EOF 00:41:28.365 )") 00:41:28.365 11:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:41:28.365 11:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:41:28.365 11:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:41:28.365 11:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:41:28.365 11:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:41:28.365 11:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:41:28.365 11:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:41:28.365 { 00:41:28.365 "params": { 00:41:28.365 "name": "Nvme$subsystem", 00:41:28.365 "trtype": "$TEST_TRANSPORT", 00:41:28.365 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:28.365 "adrfam": "ipv4", 00:41:28.365 "trsvcid": "$NVMF_PORT", 00:41:28.365 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:28.365 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:28.365 "hdgst": ${hdgst:-false}, 00:41:28.365 "ddgst": ${ddgst:-false} 00:41:28.365 }, 00:41:28.365 "method": "bdev_nvme_attach_controller" 00:41:28.365 } 00:41:28.365 EOF 00:41:28.365 )") 00:41:28.365 11:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:41:28.365 11:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2185286 00:41:28.365 11:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:41:28.365 11:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:41:28.365 11:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 
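gen_nvmf_target_json, expanded four times above (once per bdevperf instance), appends one bdev_nvme_attach_controller entry per requested subsystem to a bash array via a heredoc, then comma-joins the entries with IFS and prints them, as seen at nvmf/common.sh@584 below. A stripped-down sketch of the pattern; the outer document that bdevperf's --json actually expects is not visible in this trace and is left out, so the jq validation here only holds as-is for a single subsystem:

config=()
for subsystem in "${@:-1}"; do
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem"
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
  )")
done
IFS=,
printf '%s\n' "${config[*]}" | jq .    # comma-join the entries, then pretty-print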
00:41:28.365 11:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:41:28.365 11:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:41:28.365 11:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:41:28.365 "params": { 00:41:28.365 "name": "Nvme1", 00:41:28.365 "trtype": "tcp", 00:41:28.365 "traddr": "10.0.0.2", 00:41:28.365 "adrfam": "ipv4", 00:41:28.365 "trsvcid": "4420", 00:41:28.365 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:28.365 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:28.365 "hdgst": false, 00:41:28.365 "ddgst": false 00:41:28.365 }, 00:41:28.365 "method": "bdev_nvme_attach_controller" 00:41:28.365 }' 00:41:28.365 11:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:41:28.365 11:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:41:28.365 11:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:41:28.365 "params": { 00:41:28.365 "name": "Nvme1", 00:41:28.365 "trtype": "tcp", 00:41:28.365 "traddr": "10.0.0.2", 00:41:28.365 "adrfam": "ipv4", 00:41:28.365 "trsvcid": "4420", 00:41:28.365 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:28.365 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:28.365 "hdgst": false, 00:41:28.365 "ddgst": false 00:41:28.365 }, 00:41:28.365 "method": "bdev_nvme_attach_controller" 00:41:28.365 }' 00:41:28.365 11:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:41:28.365 11:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:41:28.365 "params": { 00:41:28.365 "name": "Nvme1", 00:41:28.365 "trtype": "tcp", 00:41:28.365 "traddr": "10.0.0.2", 00:41:28.365 "adrfam": "ipv4", 00:41:28.365 "trsvcid": "4420", 00:41:28.365 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:28.365 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:28.365 "hdgst": false, 00:41:28.365 "ddgst": false 00:41:28.365 }, 00:41:28.365 "method": "bdev_nvme_attach_controller" 00:41:28.365 }' 00:41:28.365 11:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:41:28.365 11:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:41:28.365 "params": { 00:41:28.365 "name": "Nvme1", 00:41:28.365 "trtype": "tcp", 00:41:28.365 "traddr": "10.0.0.2", 00:41:28.365 "adrfam": "ipv4", 00:41:28.365 "trsvcid": "4420", 00:41:28.365 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:28.365 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:28.365 "hdgst": false, 00:41:28.365 "ddgst": false 00:41:28.365 }, 00:41:28.365 "method": "bdev_nvme_attach_controller" 00:41:28.365 }' 00:41:28.625 [2024-10-09 11:21:48.370298] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:41:28.626 [2024-10-09 11:21:48.370347] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:41:28.626 [2024-10-09 11:21:48.373336] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 
00:41:28.626 [2024-10-09 11:21:48.373392] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:41:28.626 [2024-10-09 11:21:48.376054] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:41:28.626 [2024-10-09 11:21:48.376100] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:41:28.626 [2024-10-09 11:21:48.384481] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:41:28.626 [2024-10-09 11:21:48.384529] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:41:28.626 [2024-10-09 11:21:48.559847] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:41:28.626 [2024-10-09 11:21:48.609881] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:28.626 [2024-10-09 11:21:48.612846] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:41:28.626 [2024-10-09 11:21:48.621452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:41:28.885 [2024-10-09 11:21:48.658923] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:41:28.885 [2024-10-09 11:21:48.662703] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:28.885 [2024-10-09 11:21:48.673185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:41:28.885 [2024-10-09 11:21:48.706939] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:28.885 [2024-10-09 11:21:48.717566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:41:28.885 [2024-10-09 11:21:48.718271] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:41:28.885 [2024-10-09 11:21:48.767362] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:28.885 Running I/O for 1 seconds... 00:41:28.885 [2024-10-09 11:21:48.779107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:41:28.885 Running I/O for 1 seconds... 00:41:29.146 Running I/O for 1 seconds... 00:41:29.146 Running I/O for 1 seconds... 
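Each bdevperf instance above gets a distinct core mask (-m), shm id (-i), and workload (-w write/read/flush/unmap), and the generated config is fed in through bash process substitution, which is why --json shows up in the command line as /dev/fd/63. One invocation, condensed from the trace:

BDEVPERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
# write workload on core 4 (mask 0x10), 128 QD, 4 KiB IOs, 1 s run, 256 MiB mem:
"$BDEVPERF" -m 0x10 -i 1 --json <(gen_nvmf_target_json) \
  -q 128 -o 4096 -w write -t 1 -s 256 &
WRITE_PID=$!          # collected later with: wait "$WRITE_PID"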
00:41:30.086 12570.00 IOPS, 49.10 MiB/s
00:41:30.086 Latency(us)
00:41:30.086 [2024-10-09T09:21:50.088Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:41:30.086 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:41:30.086 Nvme1n1 : 1.01 12576.08 49.13 0.00 0.00 10118.57 1984.36 15218.02
00:41:30.086 [2024-10-09T09:21:50.088Z] ===================================================================================================================
00:41:30.086 [2024-10-09T09:21:50.088Z] Total : 12576.08 49.13 0.00 0.00 10118.57 1984.36 15218.02
00:41:30.086 12638.00 IOPS, 49.37 MiB/s
00:41:30.086 Latency(us)
00:41:30.086 [2024-10-09T09:21:50.088Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:41:30.086 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:41:30.086 Nvme1n1 : 1.01 12673.85 49.51 0.00 0.00 10061.72 5282.51 15218.02
00:41:30.086 [2024-10-09T09:21:50.088Z] ===================================================================================================================
00:41:30.086 [2024-10-09T09:21:50.088Z] Total : 12673.85 49.51 0.00 0.00 10061.72 5282.51 15218.02
00:41:30.086 11:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2185289
00:41:30.086 11:21:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2185292
00:41:30.086 13388.00 IOPS, 52.30 MiB/s
00:41:30.086 Latency(us)
00:41:30.086 [2024-10-09T09:21:50.088Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:41:30.086 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:41:30.086 Nvme1n1 : 1.01 13514.90 52.79 0.00 0.00 9450.93 2285.44 21677.46
00:41:30.086 [2024-10-09T09:21:50.088Z] ===================================================================================================================
00:41:30.086 [2024-10-09T09:21:50.088Z] Total : 13514.90 52.79 0.00 0.00 9450.93 2285.44 21677.46
00:41:30.086 188616.00 IOPS, 736.78 MiB/s
00:41:30.086 Latency(us)
00:41:30.086 [2024-10-09T09:21:50.088Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:41:30.086 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:41:30.086 Nvme1n1 : 1.00 188243.18 735.32 0.00 0.00 675.92 306.21 1970.68
00:41:30.086 [2024-10-09T09:21:50.088Z] ===================================================================================================================
00:41:30.086 [2024-10-09T09:21:50.088Z] Total : 188243.18 735.32 0.00 0.00 675.92 306.21 1970.68
00:41:30.346 11:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2185296
00:41:30.346 11:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:41:30.346 11:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable
00:41:30.346 11:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:41:30.346 11:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:41:30.346 11:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
00:41:30.346 11:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait
-- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:41:30.346 11:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # nvmfcleanup 00:41:30.346 11:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:41:30.346 11:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:30.346 11:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:41:30.346 11:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:30.346 11:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:30.346 rmmod nvme_tcp 00:41:30.346 rmmod nvme_fabrics 00:41:30.346 rmmod nvme_keyring 00:41:30.346 11:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:30.346 11:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:41:30.346 11:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:41:30.346 11:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@515 -- # '[' -n 2184945 ']' 00:41:30.346 11:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # killprocess 2184945 00:41:30.346 11:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 2184945 ']' 00:41:30.346 11:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 2184945 00:41:30.346 11:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:41:30.346 11:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:41:30.346 11:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2184945 00:41:30.346 11:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:41:30.346 11:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:41:30.346 11:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2184945' 00:41:30.346 killing process with pid 2184945 00:41:30.346 11:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 2184945 00:41:30.346 11:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 2184945 00:41:30.606 11:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:41:30.606 11:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:41:30.606 11:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:41:30.606 11:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:41:30.606 11:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-save 
00:41:30.606 11:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:41:30.606 11:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-restore 00:41:30.606 11:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:30.606 11:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:30.606 11:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:30.607 11:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:30.607 11:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:32.586 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:32.586 00:41:32.586 real 0m13.057s 00:41:32.586 user 0m15.100s 00:41:32.586 sys 0m7.484s 00:41:32.586 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:32.586 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:32.586 ************************************ 00:41:32.586 END TEST nvmf_bdev_io_wait 00:41:32.586 ************************************ 00:41:32.586 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:41:32.586 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:41:32.586 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:32.586 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:41:32.586 ************************************ 00:41:32.586 START TEST nvmf_queue_depth 00:41:32.586 ************************************ 00:41:32.586 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:41:32.847 * Looking for test storage... 
00:41:32.847 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:32.847 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:41:32.847 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:41:32.847 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:41:32.847 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:41:32.847 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:32.847 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:32.847 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:32.847 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:41:32.847 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:41:32.847 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:41:32.847 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:41:32.847 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:41:32.847 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:41:32.847 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:41:32.847 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:32.847 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:41:32.847 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:41:32.847 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:32.847 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:32.847 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:41:32.847 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:41:32.847 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:32.847 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:41:32.847 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:41:32.847 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:41:32.847 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:41:32.847 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:32.847 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:41:32.847 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:41:32.847 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:32.847 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:32.847 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:41:32.847 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:32.847 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:41:32.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:32.847 --rc genhtml_branch_coverage=1 00:41:32.847 --rc genhtml_function_coverage=1 00:41:32.847 --rc genhtml_legend=1 00:41:32.847 --rc geninfo_all_blocks=1 00:41:32.847 --rc geninfo_unexecuted_blocks=1 00:41:32.847 00:41:32.847 ' 00:41:32.847 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:41:32.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:32.847 --rc genhtml_branch_coverage=1 00:41:32.847 --rc genhtml_function_coverage=1 00:41:32.847 --rc genhtml_legend=1 00:41:32.847 --rc geninfo_all_blocks=1 00:41:32.847 --rc geninfo_unexecuted_blocks=1 00:41:32.847 00:41:32.847 ' 00:41:32.847 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:41:32.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:32.847 --rc genhtml_branch_coverage=1 00:41:32.847 --rc genhtml_function_coverage=1 00:41:32.847 --rc genhtml_legend=1 00:41:32.847 --rc geninfo_all_blocks=1 00:41:32.847 --rc geninfo_unexecuted_blocks=1 00:41:32.847 00:41:32.847 ' 00:41:32.847 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:41:32.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:32.847 --rc genhtml_branch_coverage=1 00:41:32.847 --rc genhtml_function_coverage=1 00:41:32.847 --rc genhtml_legend=1 00:41:32.847 --rc geninfo_all_blocks=1 00:41:32.847 --rc 
geninfo_unexecuted_blocks=1 00:41:32.847 00:41:32.847 ' 00:41:32.847 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:32.847 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:41:32.847 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:32.847 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:32.847 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:32.847 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:32.847 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:32.847 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:32.847 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:32.847 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:32.847 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:32.847 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:32.847 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:41:32.847 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:41:32.847 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:32.847 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:32.847 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:32.847 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:32.847 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:32.847 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:41:32.847 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:32.847 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:32.847 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:32.847 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:32.847 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:32.848 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:32.848 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:41:32.848 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:32.848 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:41:32.848 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:32.848 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:32.848 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:32.848 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:32.848 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:41:32.848 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:41:32.848 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:41:32.848 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:32.848 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:32.848 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:32.848 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:41:32.848 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:41:32.848 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:41:32.848 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:41:32.848 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:41:32.848 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:32.848 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # prepare_net_devs 00:41:32.848 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@436 -- # local -g is_hw=no 00:41:32.848 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # remove_spdk_ns 00:41:32.848 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:32.848 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:32.848 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:32.848 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:41:32.848 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:41:32.848 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:41:32.848 11:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:40.984 11:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:40.984 11:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:41:40.984 11:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:40.984 11:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:40.984 11:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:40.984 11:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
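build_nvmf_app_args, traced here, grows the target's argv incrementally: shm id and tracepoint mask always, --interrupt-mode because this suite was launched with that flag (the guard at nvmf/common.sh@33 tests a 1). A sketch of the accumulation pattern; the base command is simplified and the guard variable's name is an assumption, not taken from this trace:

NVMF_APP=(nvmf_tgt)                             # base command; real path omitted here
NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)     # shm id + tracepoint group mask
NVMF_APP+=("${NO_HUGE[@]}")                     # empty for hugepage-backed runs
(( interrupt_mode )) && NVMF_APP+=(--interrupt-mode)   # hypothetical guard name
# nvmfappstart later folds in the namespace wrapper, as seen at nvmf/common.sh@293 above:
# NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")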
00:41:40.984 11:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:40.984 11:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:41:40.984 11:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:40.984 11:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:41:40.984 11:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:41:40.984 11:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:41:40.984 11:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:41:40.984 11:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:41:40.984 11:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:41:40.984 11:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:40.984 11:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:40.984 11:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:40.984 11:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:40.984 11:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:40.984 11:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:40.984 11:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:40.984 11:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:40.984 11:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:40.984 11:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:40.984 11:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:40.984 11:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:40.984 11:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:40.984 11:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:40.984 11:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:40.984 11:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:40.984 11:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:40.984 11:21:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:40.984 11:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:40.984 11:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:41:40.984 Found 0000:31:00.0 (0x8086 - 0x159b) 00:41:40.984 11:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:40.984 11:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:40.984 11:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:40.984 11:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:40.984 11:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:40.984 11:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:40.984 11:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:41:40.984 Found 0000:31:00.1 (0x8086 - 0x159b) 00:41:40.984 11:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:40.984 11:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:40.984 11:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:40.984 11:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:40.984 11:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:40.984 11:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:40.984 11:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:40.984 11:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:40.984 11:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:41:40.984 11:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:40.984 11:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:41:40.984 11:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:40.985 11:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:41:40.985 11:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:41:40.985 11:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:40.985 11:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 
00:41:40.985 Found net devices under 0000:31:00.0: cvl_0_0 00:41:40.985 11:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:41:40.985 11:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:41:40.985 11:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:40.985 11:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:41:40.985 11:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:40.985 11:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:41:40.985 11:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:41:40.985 11:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:40.985 11:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:41:40.985 Found net devices under 0000:31:00.1: cvl_0_1 00:41:40.985 11:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:41:40.985 11:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:41:40.985 11:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # is_hw=yes 00:41:40.985 11:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:41:40.985 11:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:41:40.985 11:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:41:40.985 11:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:40.985 11:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:40.985 11:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:40.985 11:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:40.985 11:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:40.985 11:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:40.985 11:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:40.985 11:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:40.985 11:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:40.985 11:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:40.985 11:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:40.985 11:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:40.985 11:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:40.985 11:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:40.985 11:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:40.985 11:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:40.985 11:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:40.985 11:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:40.985 11:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:40.985 11:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:40.985 11:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:40.985 11:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:40.985 11:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:40.985 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:40.985 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.578 ms 00:41:40.985 00:41:40.985 --- 10.0.0.2 ping statistics --- 00:41:40.985 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:40.985 rtt min/avg/max/mdev = 0.578/0.578/0.578/0.000 ms 00:41:40.985 11:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:40.985 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:40.985 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms 00:41:40.985 00:41:40.985 --- 10.0.0.1 ping statistics --- 00:41:40.985 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:40.985 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:41:40.985 11:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:40.985 11:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@448 -- # return 0 00:41:40.985 11:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:41:40.985 11:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:40.985 11:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:41:40.985 11:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:41:40.985 11:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:40.985 11:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:41:40.985 11:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:41:40.985 11:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:41:40.985 11:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:41:40.985 11:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:41:40.985 11:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:40.985 11:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # nvmfpid=2189727 00:41:40.985 11:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # waitforlisten 2189727 00:41:40.985 11:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:41:40.985 11:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 2189727 ']' 00:41:40.985 11:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:40.985 11:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:41:40.985 11:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:40.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:41:40.985 11:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:41:40.985 11:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:40.985 [2024-10-09 11:22:00.206878] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:41:40.985 [2024-10-09 11:22:00.207875] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:41:40.985 [2024-10-09 11:22:00.207915] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:40.985 [2024-10-09 11:22:00.348204] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:41:40.985 [2024-10-09 11:22:00.385858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:40.985 [2024-10-09 11:22:00.402812] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:40.985 [2024-10-09 11:22:00.402843] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:40.985 [2024-10-09 11:22:00.402851] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:40.985 [2024-10-09 11:22:00.402858] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:40.985 [2024-10-09 11:22:00.402863] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:40.985 [2024-10-09 11:22:00.403420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:40.985 [2024-10-09 11:22:00.451143] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:41:40.985 [2024-10-09 11:22:00.451394] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
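The records above complete the standing topology for this run: the first E810 port (cvl_0_0) was moved into the cvl_0_0_ns_spdk network namespace and addressed as 10.0.0.2/24 (target side), the second port (cvl_0_1) stayed in the root namespace as the initiator at 10.0.0.1/24, an iptables rule admits TCP port 4420, connectivity was ping-verified in both directions, and nvmf_tgt was started inside the namespace with core mask 0x2 in interrupt mode (hence the "Reactor started on core 1" and intr-mode notices). A minimal shell sketch of those steps, using only paths and flags visible in the trace; the readiness loop at the end is an assumption standing in for the harness's waitforlisten helper:

    ip netns add cvl_0_0_ns_spdk                        # target NIC gets its own namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
    # assumption: poll the RPC socket until the target answers, as waitforlisten does
    until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done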
00:41:41.246 11:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:41:41.246 11:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:41:41.246 11:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:41:41.246 11:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:41:41.246 11:22:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:41.246 11:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:41.246 11:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:41:41.246 11:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:41.246 11:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:41.246 [2024-10-09 11:22:01.032145] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:41.246 11:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:41.246 11:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:41:41.246 11:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:41.246 11:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:41.246 Malloc0 00:41:41.246 11:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:41.246 11:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:41:41.246 11:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:41.246 11:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:41.246 11:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:41.246 11:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:41:41.246 11:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:41.246 11:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:41.246 11:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:41.246 11:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:41.246 11:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 
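Stripped of the xtrace wrappers, queue_depth.sh has now configured the target over /var/tmp/spdk.sock with five RPCs: create the TCP transport, back it with a RAM-disk bdev, and expose that bdev through subsystem cnode1 on 10.0.0.2:4420. The rpc.py equivalents of the rpc_cmd calls above, arguments copied verbatim from the trace (the comments are explanatory glosses, not harness output):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192                  # -u 8192: 8 KiB IO unit size
    $rpc bdev_malloc_create 64 512 -b Malloc0                     # 64 MiB malloc bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420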
00:41:41.246 11:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:41.246 [2024-10-09 11:22:01.096273] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:41.246 11:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:41.246 11:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2190072 00:41:41.246 11:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:41:41.246 11:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2190072 /var/tmp/bdevperf.sock 00:41:41.246 11:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 2190072 ']' 00:41:41.246 11:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:41:41.246 11:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:41:41.246 11:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:41:41.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:41:41.246 11:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:41:41.246 11:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:41.246 11:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:41:41.246 [2024-10-09 11:22:01.151033] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:41:41.246 [2024-10-09 11:22:01.151081] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2190072 ] 00:41:41.507 [2024-10-09 11:22:01.280949] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
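On the initiator side (root namespace), bdevperf has just been launched against its own RPC socket with the parameters that give this test its name: queue depth 1024, 4 KiB I/O, verify workload, 10-second run. The records that follow attach the exported subsystem as a local bdev and trigger the run; condensed, with paths exactly as logged:

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &

    # connect to the target's listener; the controller's namespace appears as bdev NVMe0n1
    $spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

    # start the timed run; it produces the per-second IOPS lines and the JSON summary below
    $spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests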
00:41:41.507 [2024-10-09 11:22:01.311553] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:41:41.507 [2024-10-09 11:22:01.329857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:41:42.075 11:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:41:42.075 11:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0
00:41:42.075 11:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:41:42.075 11:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable
00:41:42.075 11:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:41:42.335 NVMe0n1
00:41:42.335 11:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:41:42.335 11:22:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:41:42.335 Running I/O for 10 seconds...
00:41:44.219 9131.00 IOPS, 35.67 MiB/s [2024-10-09T09:22:05.606Z] 9216.50 IOPS, 36.00 MiB/s [2024-10-09T09:22:06.548Z] 9266.33 IOPS, 36.20 MiB/s [2024-10-09T09:22:07.488Z] 9751.25 IOPS, 38.09 MiB/s [2024-10-09T09:22:08.431Z] 10242.20 IOPS, 40.01 MiB/s [2024-10-09T09:22:09.370Z] 10561.50 IOPS, 41.26 MiB/s [2024-10-09T09:22:10.311Z] 10782.43 IOPS, 42.12 MiB/s [2024-10-09T09:22:11.265Z] 10958.88 IOPS, 42.81 MiB/s [2024-10-09T09:22:12.656Z] 11078.22 IOPS, 43.27 MiB/s [2024-10-09T09:22:12.656Z] 11202.40 IOPS, 43.76 MiB/s
00:41:52.654 Latency(us)
00:41:52.654 [2024-10-09T09:22:12.656Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:41:52.654 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096)
00:41:52.654 Verification LBA range: start 0x0 length 0x4000
00:41:52.654 NVMe0n1 : 10.09 11198.32 43.74 0.00 0.00 90742.65 9743.91 66565.13
00:41:52.654 [2024-10-09T09:22:12.656Z] ===================================================================================================================
00:41:52.654 [2024-10-09T09:22:12.656Z] Total : 11198.32 43.74 0.00 0.00 90742.65 9743.91 66565.13
00:41:52.654 {
00:41:52.654 "results": [
00:41:52.654 {
00:41:52.654 "job": "NVMe0n1",
00:41:52.654 "core_mask": "0x1",
00:41:52.654 "workload": "verify",
00:41:52.654 "status": "finished",
00:41:52.654 "verify_range": {
00:41:52.654 "start": 0,
00:41:52.654 "length": 16384
00:41:52.654 },
00:41:52.654 "queue_depth": 1024,
00:41:52.654 "io_size": 4096,
00:41:52.654 "runtime": 10.088482,
00:41:52.654 "iops": 11198.315068609925,
00:41:52.654 "mibps": 43.74341823675752,
00:41:52.654 "io_failed": 0,
00:41:52.654 "io_timeout": 0,
00:41:52.654 "avg_latency_us": 90742.64680055447,
00:41:52.654 "min_latency_us": 9743.909121282994,
00:41:52.654 "max_latency_us": 66565.13197460742
00:41:52.654 }
00:41:52.654 ],
00:41:52.654 "core_count": 1
00:41:52.654 }
00:41:52.654 11:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2190072
00:41:52.654 11:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@950 -- #
'[' -z 2190072 ']' 00:41:52.654 11:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 2190072 00:41:52.654 11:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:41:52.654 11:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:41:52.654 11:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2190072 00:41:52.654 11:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:41:52.654 11:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:41:52.654 11:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2190072' 00:41:52.654 killing process with pid 2190072 00:41:52.654 11:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 2190072 00:41:52.654 Received shutdown signal, test time was about 10.000000 seconds 00:41:52.654 00:41:52.654 Latency(us) 00:41:52.654 [2024-10-09T09:22:12.656Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:52.654 [2024-10-09T09:22:12.656Z] =================================================================================================================== 00:41:52.654 [2024-10-09T09:22:12.656Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:41:52.654 11:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 2190072 00:41:52.654 11:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:41:52.654 11:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:41:52.654 11:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@514 -- # nvmfcleanup 00:41:52.654 11:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:41:52.654 11:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:52.654 11:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:41:52.654 11:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:52.654 11:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:52.654 rmmod nvme_tcp 00:41:52.654 rmmod nvme_fabrics 00:41:52.654 rmmod nvme_keyring 00:41:52.654 11:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:52.654 11:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:41:52.654 11:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:41:52.654 11:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@515 -- # '[' -n 2189727 ']' 00:41:52.654 11:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # killprocess 2189727 00:41:52.654 11:22:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 2189727 ']' 00:41:52.655 11:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 2189727 00:41:52.655 11:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:41:52.655 11:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:41:52.655 11:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2189727 00:41:52.655 11:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:41:52.655 11:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:41:52.655 11:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2189727' 00:41:52.655 killing process with pid 2189727 00:41:52.655 11:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 2189727 00:41:52.655 11:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 2189727 00:41:52.916 11:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:41:52.916 11:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:41:52.916 11:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:41:52.916 11:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:41:52.916 11:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-restore 00:41:52.916 11:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-save 00:41:52.916 11:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:41:52.916 11:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:52.916 11:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:52.916 11:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:52.916 11:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:52.916 11:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:54.827 11:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:54.827 00:41:54.827 real 0m22.279s 00:41:54.827 user 0m24.490s 00:41:54.827 sys 0m7.192s 00:41:55.088 11:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:55.088 11:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:55.088 ************************************ 00:41:55.088 END TEST 
nvmf_queue_depth 00:41:55.088 ************************************ 00:41:55.088 11:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:41:55.088 11:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:41:55.088 11:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:55.088 11:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:41:55.088 ************************************ 00:41:55.088 START TEST nvmf_target_multipath 00:41:55.088 ************************************ 00:41:55.088 11:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:41:55.088 * Looking for test storage... 00:41:55.088 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:55.088 11:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:41:55.088 11:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:41:55.088 11:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:41:55.350 11:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:41:55.350 11:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:55.350 11:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:55.350 11:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:55.350 11:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:41:55.350 11:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:41:55.350 11:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:41:55.350 11:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:41:55.350 11:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:41:55.350 11:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:41:55.350 11:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:41:55.350 11:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:55.350 11:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:41:55.350 11:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:41:55.350 11:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:55.350 11:22:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:41:55.350 11:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:41:55.350 11:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:41:55.350 11:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:55.350 11:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:41:55.350 11:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:41:55.350 11:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:41:55.350 11:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:41:55.350 11:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:55.350 11:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:41:55.350 11:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:41:55.350 11:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:55.350 11:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:55.350 11:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:41:55.350 11:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:55.350 11:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:41:55.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:55.350 --rc genhtml_branch_coverage=1 00:41:55.350 --rc genhtml_function_coverage=1 00:41:55.350 --rc genhtml_legend=1 00:41:55.350 --rc geninfo_all_blocks=1 00:41:55.350 --rc geninfo_unexecuted_blocks=1 00:41:55.350 00:41:55.350 ' 00:41:55.350 11:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:41:55.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:55.350 --rc genhtml_branch_coverage=1 00:41:55.350 --rc genhtml_function_coverage=1 00:41:55.350 --rc genhtml_legend=1 00:41:55.350 --rc geninfo_all_blocks=1 00:41:55.350 --rc geninfo_unexecuted_blocks=1 00:41:55.350 00:41:55.350 ' 00:41:55.350 11:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:41:55.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:55.350 --rc genhtml_branch_coverage=1 00:41:55.350 --rc genhtml_function_coverage=1 00:41:55.350 --rc genhtml_legend=1 00:41:55.350 --rc geninfo_all_blocks=1 00:41:55.350 --rc geninfo_unexecuted_blocks=1 00:41:55.350 00:41:55.350 ' 00:41:55.351 11:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:41:55.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:41:55.351 --rc genhtml_branch_coverage=1 00:41:55.351 --rc genhtml_function_coverage=1 00:41:55.351 --rc genhtml_legend=1 00:41:55.351 --rc geninfo_all_blocks=1 00:41:55.351 --rc geninfo_unexecuted_blocks=1 00:41:55.351 00:41:55.351 ' 00:41:55.351 11:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:55.351 11:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:41:55.351 11:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:55.351 11:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:55.351 11:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:55.351 11:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:55.351 11:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:55.351 11:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:55.351 11:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:55.351 11:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:55.351 11:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:55.351 11:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:55.351 11:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:41:55.351 11:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:41:55.351 11:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:55.351 11:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:55.351 11:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:55.351 11:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:55.351 11:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:55.351 11:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:41:55.351 11:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:55.351 11:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:55.351 11:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:55.351 11:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:55.351 11:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:55.351 11:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:55.351 11:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:41:55.351 11:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:55.351 11:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:41:55.351 11:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:55.351 11:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:55.351 11:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:55.351 11:22:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:55.351 11:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:55.351 11:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:41:55.351 11:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:41:55.351 11:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:55.351 11:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:55.351 11:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:55.351 11:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:41:55.351 11:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:41:55.351 11:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:41:55.351 11:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:55.351 11:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:41:55.351 11:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:41:55.351 11:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:55.351 11:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # prepare_net_devs 00:41:55.351 11:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@436 -- # local -g is_hw=no 00:41:55.351 11:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # remove_spdk_ns 00:41:55.351 11:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:55.351 11:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:55.351 11:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:55.351 11:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:41:55.351 11:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:41:55.351 11:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:41:55.351 11:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:42:03.495 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:03.495 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@315 -- # pci_devs=() 00:42:03.495 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:03.495 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:03.495 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:03.495 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:03.495 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:03.495 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:42:03.495 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:03.495 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:42:03.495 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:42:03.495 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:42:03.495 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:42:03.495 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:42:03.495 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:42:03.495 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:03.495 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:03.496 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:03.496 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:03.496 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:03.496 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:03.496 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:03.496 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:42:03.496 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:03.496 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:03.496 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:03.496 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 
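The multipath test opens exactly like queue_depth: common.sh rebuilds its per-family allowlists of NIC PCI IDs (Intel E810 and X722, plus a range of Mellanox ConnectX parts) before scanning the bus. For reference, the vendor:device pairs in the arrays above can be matched on a host directly with lspci; illustrative only, using two of the listed IDs:

    lspci -D -d 8086:159b    # Intel E810 family; matches the 0000:31:00.0/.1 ports found in this run
    lspci -D -d 15b3:1017    # one of the allowlisted Mellanox ConnectX IDs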
00:42:03.496 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:42:03.496 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:42:03.496 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:42:03.496 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:42:03.496 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:42:03.496 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:42:03.496 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:03.496 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:42:03.496 Found 0000:31:00.0 (0x8086 - 0x159b) 00:42:03.496 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:03.496 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:03.496 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:03.496 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:03.496 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:03.496 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:03.496 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:42:03.496 Found 0000:31:00.1 (0x8086 - 0x159b) 00:42:03.496 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:03.496 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:03.496 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:03.496 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:03.496 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:03.496 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:42:03.496 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:42:03.496 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:42:03.496 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:42:03.496 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:03.496 11:22:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:42:03.496 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:03.496 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:42:03.496 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:42:03.496 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:03.496 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:42:03.496 Found net devices under 0000:31:00.0: cvl_0_0 00:42:03.496 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:42:03.496 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:42:03.496 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:03.496 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:42:03.496 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:03.496 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:42:03.496 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:42:03.496 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:03.496 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:42:03.496 Found net devices under 0000:31:00.1: cvl_0_1 00:42:03.496 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:42:03.496 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:42:03.496 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # is_hw=yes 00:42:03.496 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:42:03.496 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:42:03.496 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:42:03.496 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:03.496 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:03.496 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:03.496 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:03.496 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:03.496 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:03.496 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:03.496 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:03.496 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:03.496 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:03.496 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:03.496 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:03.496 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:03.496 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:03.496 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:03.496 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:03.496 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:03.496 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:03.496 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:03.496 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:03.496 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:03.496 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:03.496 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:03.496 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:03.496 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.633 ms 00:42:03.496 00:42:03.496 --- 10.0.0.2 ping statistics --- 00:42:03.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:03.496 rtt min/avg/max/mdev = 0.633/0.633/0.633/0.000 ms 00:42:03.496 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:03.496 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:42:03.496 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.308 ms 00:42:03.496 00:42:03.496 --- 10.0.0.1 ping statistics --- 00:42:03.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:03.496 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:42:03.496 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:03.496 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@448 -- # return 0 00:42:03.496 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:42:03.496 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:03.496 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:42:03.496 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:42:03.496 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:03.496 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:42:03.496 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:42:03.497 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:42:03.497 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:42:03.497 only one NIC for nvmf test 00:42:03.497 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:42:03.497 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:42:03.497 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:42:03.497 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:03.497 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:42:03.497 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:03.497 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:03.497 rmmod nvme_tcp 00:42:03.497 rmmod nvme_fabrics 00:42:03.497 rmmod nvme_keyring 00:42:03.497 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:03.497 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:42:03.497 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:42:03.497 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:42:03.497 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:42:03.497 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p 
]] 00:42:03.497 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:42:03.497 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:42:03.497 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:42:03.497 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:42:03.497 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:42:03.497 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:03.497 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:03.497 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:03.497 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:03.497 11:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:04.880 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:04.880 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:42:04.880 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:42:04.880 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:42:04.880 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:42:04.880 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:04.880 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:42:04.880 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:04.880 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:04.880 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:04.880 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:42:04.880 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:42:04.880 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:42:04.880 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:42:04.880 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:42:04.880 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:42:04.880 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:42:04.880 11:22:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:42:04.880 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:42:04.880 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:42:04.880 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:04.880 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:04.880 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:04.880 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:04.880 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:04.880 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:04.880 00:42:04.880 real 0m9.702s 00:42:04.880 user 0m2.160s 00:42:04.880 sys 0m5.460s 00:42:04.880 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:04.880 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:42:04.880 ************************************ 00:42:04.880 END TEST nvmf_target_multipath 00:42:04.880 ************************************ 00:42:04.880 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:42:04.880 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:42:04.880 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:04.880 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:42:04.880 ************************************ 00:42:04.880 START TEST nvmf_zcopy 00:42:04.880 ************************************ 00:42:04.880 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:42:04.880 * Looking for test storage... 
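Both the teardown just traced and the nvmf_tcp_init sequence that the zcopy test repeats below revolve around the same wiring: one port of the NIC moves into a private network namespace to act as the target, the other stays in the root namespace as the initiator, and a tagged iptables rule opens the NVMe/TCP port. Condensed into a standalone sketch, with the interface and namespace names copied from this run (they would differ on other hardware):

    NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1    # drop stale addresses
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                       # target side lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side stays in the root ns
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    # open the listener port; the comment tag lets teardown strip the rule again
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
    ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1   # sanity check both directions

Teardown is the mirror image seen in the trace above: iptables-save | grep -v SPDK_NVMF | iptables-restore filters the tagged rule back out, remove_spdk_ns deletes the namespace, and ip -4 addr flush clears the initiator interface.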
00:42:04.880 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:04.880 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:42:04.880 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:42:04.880 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:42:04.880 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:42:04.880 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:04.880 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:04.880 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:04.880 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:42:04.880 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:42:04.880 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:42:04.880 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:42:04.880 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:42:04.880 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:42:04.880 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:42:04.880 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:04.880 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:42:04.880 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:42:04.880 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:04.880 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:04.880 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:42:04.880 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:42:05.141 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:05.141 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:42:05.141 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:42:05.141 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:42:05.141 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:42:05.141 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:05.141 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:42:05.141 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:42:05.141 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:05.141 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:05.141 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:42:05.141 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:05.141 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:42:05.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:05.141 --rc genhtml_branch_coverage=1 00:42:05.141 --rc genhtml_function_coverage=1 00:42:05.141 --rc genhtml_legend=1 00:42:05.141 --rc geninfo_all_blocks=1 00:42:05.141 --rc geninfo_unexecuted_blocks=1 00:42:05.141 00:42:05.141 ' 00:42:05.141 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:42:05.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:05.141 --rc genhtml_branch_coverage=1 00:42:05.141 --rc genhtml_function_coverage=1 00:42:05.141 --rc genhtml_legend=1 00:42:05.141 --rc geninfo_all_blocks=1 00:42:05.141 --rc geninfo_unexecuted_blocks=1 00:42:05.141 00:42:05.141 ' 00:42:05.141 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:42:05.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:05.141 --rc genhtml_branch_coverage=1 00:42:05.141 --rc genhtml_function_coverage=1 00:42:05.141 --rc genhtml_legend=1 00:42:05.141 --rc geninfo_all_blocks=1 00:42:05.141 --rc geninfo_unexecuted_blocks=1 00:42:05.141 00:42:05.141 ' 00:42:05.141 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:42:05.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:05.141 --rc genhtml_branch_coverage=1 00:42:05.141 --rc genhtml_function_coverage=1 00:42:05.141 --rc genhtml_legend=1 00:42:05.141 --rc geninfo_all_blocks=1 00:42:05.141 --rc geninfo_unexecuted_blocks=1 00:42:05.141 00:42:05.141 ' 00:42:05.141 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:05.141 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:42:05.141 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:05.141 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:05.141 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:05.141 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:05.141 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:05.142 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:05.142 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:05.142 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:05.142 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:05.142 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:05.142 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:42:05.142 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:42:05.142 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:05.142 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:05.142 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:05.142 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:05.142 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:05.142 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:42:05.142 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:05.142 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:05.142 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:05.142 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:05.142 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:05.142 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:05.142 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:42:05.142 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:05.142 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:42:05.142 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:05.142 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:05.142 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:05.142 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:05.142 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:05.142 11:22:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:42:05.142 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:42:05.142 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:05.142 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:05.142 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:05.142 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:42:05.142 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:42:05.142 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:05.142 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # prepare_net_devs 00:42:05.142 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@436 -- # local -g is_hw=no 00:42:05.142 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # remove_spdk_ns 00:42:05.142 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:05.142 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:05.142 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:05.142 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:42:05.142 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:42:05.142 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:42:05.142 11:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:13.285 11:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:13.285 11:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:42:13.285 11:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:13.285 11:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:13.285 11:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:13.285 11:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:13.285 11:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:13.285 11:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:42:13.285 11:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:13.285 11:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:42:13.285 11:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:42:13.285 11:22:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:42:13.285 11:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:42:13.285 11:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:42:13.285 11:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:42:13.285 11:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:13.285 11:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:13.285 11:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:13.285 11:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:13.285 11:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:13.285 11:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:13.285 11:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:13.285 11:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:42:13.285 11:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:13.285 11:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:13.285 11:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:13.285 11:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:13.285 11:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:42:13.285 11:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:42:13.285 11:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:42:13.285 11:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:42:13.285 11:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:42:13.285 11:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:42:13.285 11:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:13.285 11:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:42:13.285 Found 0000:31:00.0 (0x8086 - 0x159b) 00:42:13.285 11:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:13.285 11:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:13.285 11:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:42:13.285 11:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:13.285 11:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:13.285 11:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:13.285 11:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:42:13.285 Found 0000:31:00.1 (0x8086 - 0x159b) 00:42:13.285 11:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:13.285 11:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:13.285 11:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:13.285 11:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:13.285 11:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:13.285 11:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:42:13.285 11:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:42:13.285 11:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:42:13.285 11:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:42:13.285 11:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:13.285 11:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:42:13.285 11:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:13.285 11:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:42:13.285 11:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:42:13.285 11:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:13.285 11:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:42:13.285 Found net devices under 0000:31:00.0: cvl_0_0 00:42:13.285 11:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:42:13.285 11:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:42:13.285 11:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:13.285 11:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:42:13.285 11:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:13.285 11:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:42:13.285 11:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:42:13.285 11:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:13.285 11:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:42:13.285 Found net devices under 0000:31:00.1: cvl_0_1 00:42:13.285 11:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:42:13.285 11:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:42:13.285 11:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # is_hw=yes 00:42:13.285 11:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:42:13.285 11:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:42:13.285 11:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:42:13.285 11:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:13.285 11:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:13.285 11:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:13.285 11:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:13.285 11:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:13.285 11:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:13.285 11:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:13.285 11:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:13.285 11:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:13.285 11:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:13.285 11:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:13.285 11:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:13.285 11:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:13.285 11:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:13.286 11:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:13.286 11:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:13.286 11:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:13.286 11:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:13.286 11:22:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:13.286 11:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:13.286 11:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:13.286 11:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:13.286 11:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:13.286 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:13.286 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.613 ms 00:42:13.286 00:42:13.286 --- 10.0.0.2 ping statistics --- 00:42:13.286 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:13.286 rtt min/avg/max/mdev = 0.613/0.613/0.613/0.000 ms 00:42:13.286 11:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:13.286 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:42:13.286 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:42:13.286 00:42:13.286 --- 10.0.0.1 ping statistics --- 00:42:13.286 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:13.286 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:42:13.286 11:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:13.286 11:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@448 -- # return 0 00:42:13.286 11:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:42:13.286 11:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:13.286 11:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:42:13.286 11:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:42:13.286 11:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:13.286 11:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:42:13.286 11:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:42:13.286 11:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:42:13.286 11:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:42:13.286 11:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:42:13.286 11:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:13.286 11:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # nvmfpid=2200537 00:42:13.286 11:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # waitforlisten 2200537 00:42:13.286 11:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:42:13.286 11:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 2200537 ']' 00:42:13.286 11:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:13.286 11:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:42:13.286 11:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:13.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:13.286 11:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:42:13.286 11:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:13.286 [2024-10-09 11:22:32.272610] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:42:13.286 [2024-10-09 11:22:32.273613] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:42:13.286 [2024-10-09 11:22:32.273652] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:13.286 [2024-10-09 11:22:32.410696] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:42:13.286 [2024-10-09 11:22:32.459083] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:13.286 [2024-10-09 11:22:32.480768] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:13.286 [2024-10-09 11:22:32.480807] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:13.286 [2024-10-09 11:22:32.480815] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:13.286 [2024-10-09 11:22:32.480823] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:13.286 [2024-10-09 11:22:32.480829] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:13.286 [2024-10-09 11:22:32.481461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:13.286 [2024-10-09 11:22:32.539141] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:42:13.286 [2024-10-09 11:22:32.539420] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
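With the target now running inside the namespace in interrupt mode, the next traced step configures it over JSON-RPC. rpc_cmd is essentially the test suite's wrapper around SPDK's scripts/rpc.py, so the zcopy target setup below condenses to roughly these calls (paths relative to the SPDK tree; the flags are copied verbatim from the trace, with --zcopy turning on the zero-copy receive path this test exercises):

    RPC=./scripts/rpc.py        # talks to the default /var/tmp/spdk.sock
    $RPC nvmf_create_transport -t tcp -o -c 0 --zcopy
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10   # any host, max 10 ns
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC bdev_malloc_create 32 4096 -b malloc0             # 32 MiB RAM-backed bdev, 4 KiB blocks
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1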
00:42:13.286 11:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:42:13.286 11:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:42:13.286 11:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:42:13.286 11:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:42:13.286 11:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:13.286 11:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:13.286 11:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:42:13.286 11:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:42:13.286 11:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:13.286 11:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:13.286 [2024-10-09 11:22:33.106231] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:13.286 11:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:13.286 11:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:42:13.286 11:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:13.286 11:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:13.286 11:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:13.286 11:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:13.286 11:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:13.286 11:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:13.286 [2024-10-09 11:22:33.134461] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:13.286 11:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:13.286 11:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:42:13.286 11:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:13.286 11:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:13.286 11:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:13.286 11:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:42:13.286 11:22:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:13.286 11:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:13.286 malloc0 00:42:13.286 11:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:13.286 11:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:42:13.286 11:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:13.286 11:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:13.286 11:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:13.286 11:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:42:13.286 11:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:42:13.286 11:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:42:13.286 11:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:42:13.286 11:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:42:13.286 11:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:42:13.286 { 00:42:13.286 "params": { 00:42:13.286 "name": "Nvme$subsystem", 00:42:13.286 "trtype": "$TEST_TRANSPORT", 00:42:13.286 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:13.286 "adrfam": "ipv4", 00:42:13.286 "trsvcid": "$NVMF_PORT", 00:42:13.286 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:13.286 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:13.286 "hdgst": ${hdgst:-false}, 00:42:13.286 "ddgst": ${ddgst:-false} 00:42:13.286 }, 00:42:13.286 "method": "bdev_nvme_attach_controller" 00:42:13.286 } 00:42:13.286 EOF 00:42:13.286 )") 00:42:13.286 11:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:42:13.286 11:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 00:42:13.286 11:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:42:13.286 11:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:42:13.286 "params": { 00:42:13.286 "name": "Nvme1", 00:42:13.286 "trtype": "tcp", 00:42:13.287 "traddr": "10.0.0.2", 00:42:13.287 "adrfam": "ipv4", 00:42:13.287 "trsvcid": "4420", 00:42:13.287 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:13.287 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:13.287 "hdgst": false, 00:42:13.287 "ddgst": false 00:42:13.287 }, 00:42:13.287 "method": "bdev_nvme_attach_controller" 00:42:13.287 }' 00:42:13.287 [2024-10-09 11:22:33.235757] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 
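The bdevperf initiator starting here receives its attach configuration on a file descriptor, almost certainly via process substitution (hence the /dev/fd/62 path in the trace). Reduced to its essentials, the invocation is:

    # 10-second verify workload, queue depth 128, 8 KiB I/Os; the JSON tells
    # bdevperf to attach to the target over NVMe/TCP before the run starts
    ./build/examples/bdevperf --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192

A second invocation follows below with -t 5 -q 128 -w randrw -M 50 -o 8192, i.e. a five-second 50/50 random read/write mix against the same namespace.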
00:42:13.287 [2024-10-09 11:22:33.235811] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2200598 ]
00:42:13.547 [2024-10-09 11:22:33.367368] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation.
00:42:13.547 [2024-10-09 11:22:33.398260] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:42:13.547 [2024-10-09 11:22:33.416727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:42:13.808 Running I/O for 10 seconds...
00:42:15.692 6501.00 IOPS, 50.79 MiB/s
[2024-10-09T09:22:37.101Z] 6541.00 IOPS, 51.10 MiB/s
[2024-10-09T09:22:38.044Z] 6554.33 IOPS, 51.21 MiB/s
[2024-10-09T09:22:38.985Z] 6568.50 IOPS, 51.32 MiB/s
[2024-10-09T09:22:39.926Z] 6785.60 IOPS, 53.01 MiB/s
[2024-10-09T09:22:40.959Z] 7240.00 IOPS, 56.56 MiB/s
[2024-10-09T09:22:41.901Z] 7566.86 IOPS, 59.12 MiB/s
[2024-10-09T09:22:42.843Z] 7812.75 IOPS, 61.04 MiB/s
[2024-10-09T09:22:43.785Z] 8002.78 IOPS, 62.52 MiB/s
[2024-10-09T09:22:43.785Z] 8153.40 IOPS, 63.70 MiB/s
00:42:23.783 Latency(us)
[2024-10-09T09:22:43.785Z] Device Information : runtime(s)    IOPS     MiB/s   Fail/s   TO/s    Average       min       max
00:42:23.783 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:42:23.783 Verification LBA range: start 0x0 length 0x1000
00:42:23.783 Nvme1n1            :      10.01 8158.03   63.73    0.00     0.00   15636.92   1792.77   27589.50
[2024-10-09T09:22:43.785Z] ===================================================================================================================
[2024-10-09T09:22:43.785Z] Total              :            8158.03   63.73    0.00     0.00   15636.92   1792.77   27589.50
00:42:24.045 11:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2202584
00:42:24.045 11:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:42:24.045 11:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:42:24.045 11:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:42:24.045 11:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:42:24.045 11:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # config=()
00:42:24.045 11:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config
00:42:24.045 11:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}"
00:42:24.045 11:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF
00:42:24.045 {
00:42:24.045   "params": {
00:42:24.045     "name": "Nvme$subsystem",
00:42:24.045     "trtype": "$TEST_TRANSPORT",
00:42:24.045     "traddr": "$NVMF_FIRST_TARGET_IP",
00:42:24.045     "adrfam": "ipv4",
00:42:24.045     "trsvcid": "$NVMF_PORT",
00:42:24.045     "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:42:24.045     "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:42:24.045     "hdgst": ${hdgst:-false},
00:42:24.045     "ddgst": ${ddgst:-false}
00:42:24.045   },
00:42:24.045   "method": "bdev_nvme_attach_controller"
00:42:24.045 }
00:42:24.045 EOF
00:42:24.045 )")
00:42:24.045 11:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # cat
00:42:24.045 [2024-10-09 11:22:43.797831] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:42:24.045 [2024-10-09 11:22:43.797860] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:42:24.045 11:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # jq .
00:42:24.045 11:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=,
00:42:24.045 11:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{
00:42:24.045   "params": {
00:42:24.045     "name": "Nvme1",
00:42:24.045     "trtype": "tcp",
00:42:24.045     "traddr": "10.0.0.2",
00:42:24.045     "adrfam": "ipv4",
00:42:24.045     "trsvcid": "4420",
00:42:24.045     "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:42:24.045     "hostnqn": "nqn.2016-06.io.spdk:host1",
00:42:24.045     "hdgst": false,
00:42:24.045     "ddgst": false
00:42:24.045   },
00:42:24.045   "method": "bdev_nvme_attach_controller"
00:42:24.045 }'
[last error pair repeated 3 more times: 11:22:43.809 through 11:22:43.833]
00:42:24.045 [2024-10-09 11:22:43.839383] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization...
00:42:24.045 [2024-10-09 11:22:43.839430] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2202584 ]
00:42:24.045 [2024-10-09 11:22:43.845790] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:42:24.045 [2024-10-09 11:22:43.845797] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[last error pair repeated 10 more times: 11:22:43.857 through 11:22:43.965]
00:42:24.045 [2024-10-09 11:22:43.968890] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation.
[error pair repeated 2 times: 11:22:43.977 through 11:22:43.989]
00:42:24.045 [2024-10-09 11:22:43.999733] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[error pair repeated 2 times: 11:22:44.001 through 11:22:44.013]
00:42:24.045 [2024-10-09 11:22:44.017206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
[error pair repeated 10 times: 11:22:44.025 through 11:22:44.133]
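The flood of paired errors through the rest of this run appears to be the point of this phase rather than a failure: while the second bdevperf instance drives random I/O, the test keeps re-issuing nvmf_subsystem_add_ns for NSID 1, which already exists, so each attempt fails inside the paused-subsystem callback (nvmf_rpc_ns_paused) and the subsystem is resumed, exercising pause/resume under load. The actual loop body in target/zcopy.sh is not visible in this excerpt; a hypothetical sketch of its shape:

    # Hypothetical: re-add the same namespace while bdevperf ($perfpid) is alive.
    # Every call is expected to fail with "Requested NSID 1 already in use".
    while kill -0 "$perfpid" 2>/dev/null; do
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    done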
[error pair repeated 4 times: 11:22:44.145 through 11:22:44.181]
00:42:24.307 Running I/O for 5 seconds...
00:42:24.307 [2024-10-09 11:22:44.197351] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:42:24.307 [2024-10-09 11:22:44.197366] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[the same error pair repeats roughly every 13 ms for the rest of the run; occurrences 11:22:44.210 through 11:22:45.193 omitted]
00:42:25.353 18690.00 IOPS, 146.02 MiB/s
[error pair occurrences 11:22:45.206 through 11:22:46.194 omitted]
00:42:26.395 18756.00 IOPS, 146.53 MiB/s
[error pair occurrences 11:22:46.208 through 11:22:47.081 omitted]
00:42:27.179 [2024-10-09 11:22:47.094180]
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.179 [2024-10-09 11:22:47.094197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.179 [2024-10-09 11:22:47.109178] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.179 [2024-10-09 11:22:47.109193] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.179 [2024-10-09 11:22:47.121860] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.179 [2024-10-09 11:22:47.121875] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.179 [2024-10-09 11:22:47.133192] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.179 [2024-10-09 11:22:47.133207] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.179 [2024-10-09 11:22:47.146651] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.179 [2024-10-09 11:22:47.146665] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.179 [2024-10-09 11:22:47.160981] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.179 [2024-10-09 11:22:47.160996] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.179 [2024-10-09 11:22:47.174066] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.179 [2024-10-09 11:22:47.174080] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.439 [2024-10-09 11:22:47.188872] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.439 [2024-10-09 11:22:47.188887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.439 18726.67 IOPS, 146.30 MiB/s [2024-10-09T09:22:47.441Z] [2024-10-09 11:22:47.202330] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.439 [2024-10-09 11:22:47.202344] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.439 [2024-10-09 11:22:47.217216] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.439 [2024-10-09 11:22:47.217231] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.439 [2024-10-09 11:22:47.230686] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.439 [2024-10-09 11:22:47.230701] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.439 [2024-10-09 11:22:47.245244] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.439 [2024-10-09 11:22:47.245259] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.439 [2024-10-09 11:22:47.258092] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.439 [2024-10-09 11:22:47.258106] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.439 [2024-10-09 11:22:47.273291] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.439 [2024-10-09 11:22:47.273306] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.439 [2024-10-09 11:22:47.285919] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:42:27.439 [2024-10-09 11:22:47.285934] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.439 [2024-10-09 11:22:47.298864] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.439 [2024-10-09 11:22:47.298879] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.439 [2024-10-09 11:22:47.312816] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.439 [2024-10-09 11:22:47.312832] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.439 [2024-10-09 11:22:47.325542] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.439 [2024-10-09 11:22:47.325557] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.439 [2024-10-09 11:22:47.338312] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.439 [2024-10-09 11:22:47.338326] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.439 [2024-10-09 11:22:47.352728] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.439 [2024-10-09 11:22:47.352743] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.439 [2024-10-09 11:22:47.365624] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.439 [2024-10-09 11:22:47.365639] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.439 [2024-10-09 11:22:47.378656] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.439 [2024-10-09 11:22:47.378671] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.439 [2024-10-09 11:22:47.393040] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.439 [2024-10-09 11:22:47.393055] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.439 [2024-10-09 11:22:47.405974] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.439 [2024-10-09 11:22:47.405988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.439 [2024-10-09 11:22:47.420696] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.439 [2024-10-09 11:22:47.420711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.439 [2024-10-09 11:22:47.433713] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.439 [2024-10-09 11:22:47.433728] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.700 [2024-10-09 11:22:47.446089] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.700 [2024-10-09 11:22:47.446103] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.700 [2024-10-09 11:22:47.461032] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.700 [2024-10-09 11:22:47.461047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.700 [2024-10-09 11:22:47.473726] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.700 [2024-10-09 11:22:47.473742] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.700 [2024-10-09 11:22:47.485916] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.700 [2024-10-09 11:22:47.485931] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.700 [2024-10-09 11:22:47.498652] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.700 [2024-10-09 11:22:47.498667] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.701 [2024-10-09 11:22:47.513039] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.701 [2024-10-09 11:22:47.513054] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.701 [2024-10-09 11:22:47.525419] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.701 [2024-10-09 11:22:47.525434] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.701 [2024-10-09 11:22:47.538283] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.701 [2024-10-09 11:22:47.538298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.701 [2024-10-09 11:22:47.552762] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.701 [2024-10-09 11:22:47.552777] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.701 [2024-10-09 11:22:47.566011] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.701 [2024-10-09 11:22:47.566026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.701 [2024-10-09 11:22:47.580512] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.701 [2024-10-09 11:22:47.580528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.701 [2024-10-09 11:22:47.593456] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.701 [2024-10-09 11:22:47.593476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.701 [2024-10-09 11:22:47.606420] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.701 [2024-10-09 11:22:47.606435] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.701 [2024-10-09 11:22:47.621090] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.701 [2024-10-09 11:22:47.621107] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.701 [2024-10-09 11:22:47.634532] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.701 [2024-10-09 11:22:47.634547] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.701 [2024-10-09 11:22:47.649278] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.701 [2024-10-09 11:22:47.649293] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.701 [2024-10-09 11:22:47.662139] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.701 [2024-10-09 11:22:47.662154] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.701 [2024-10-09 11:22:47.677059] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.701 [2024-10-09 11:22:47.677075] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.701 [2024-10-09 11:22:47.689831] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.701 [2024-10-09 11:22:47.689847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.962 [2024-10-09 11:22:47.702662] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.962 [2024-10-09 11:22:47.702677] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.962 [2024-10-09 11:22:47.717482] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.962 [2024-10-09 11:22:47.717498] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.962 [2024-10-09 11:22:47.730315] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.962 [2024-10-09 11:22:47.730329] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.962 [2024-10-09 11:22:47.745388] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.962 [2024-10-09 11:22:47.745404] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.962 [2024-10-09 11:22:47.757964] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.962 [2024-10-09 11:22:47.757979] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.962 [2024-10-09 11:22:47.770361] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.962 [2024-10-09 11:22:47.770376] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.962 [2024-10-09 11:22:47.785397] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.962 [2024-10-09 11:22:47.785412] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.962 [2024-10-09 11:22:47.798226] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.962 [2024-10-09 11:22:47.798241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.962 [2024-10-09 11:22:47.812783] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.962 [2024-10-09 11:22:47.812798] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.962 [2024-10-09 11:22:47.825723] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.962 [2024-10-09 11:22:47.825738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.962 [2024-10-09 11:22:47.837708] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.962 [2024-10-09 11:22:47.837723] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.962 [2024-10-09 11:22:47.850425] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.962 [2024-10-09 11:22:47.850440] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.962 [2024-10-09 11:22:47.865580] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.962 [2024-10-09 11:22:47.865596] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.962 [2024-10-09 11:22:47.878626] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.962 [2024-10-09 11:22:47.878641] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.962 [2024-10-09 11:22:47.893311] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.962 [2024-10-09 11:22:47.893327] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.962 [2024-10-09 11:22:47.905746] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.962 [2024-10-09 11:22:47.905761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.962 [2024-10-09 11:22:47.918202] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.962 [2024-10-09 11:22:47.918218] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.962 [2024-10-09 11:22:47.932637] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.962 [2024-10-09 11:22:47.932652] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.962 [2024-10-09 11:22:47.945925] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.962 [2024-10-09 11:22:47.945940] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.962 [2024-10-09 11:22:47.958167] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.962 [2024-10-09 11:22:47.958183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.223 [2024-10-09 11:22:47.973181] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.223 [2024-10-09 11:22:47.973197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.223 [2024-10-09 11:22:47.985779] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.223 [2024-10-09 11:22:47.985794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.223 [2024-10-09 11:22:47.998273] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.223 [2024-10-09 11:22:47.998287] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.224 [2024-10-09 11:22:48.013109] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.224 [2024-10-09 11:22:48.013124] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.224 [2024-10-09 11:22:48.026170] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.224 [2024-10-09 11:22:48.026185] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.224 [2024-10-09 11:22:48.040868] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.224 [2024-10-09 11:22:48.040883] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.224 [2024-10-09 11:22:48.054125] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.224 [2024-10-09 11:22:48.054140] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.224 [2024-10-09 11:22:48.068905] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.224 [2024-10-09 11:22:48.068922] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.224 [2024-10-09 11:22:48.081971] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.224 [2024-10-09 11:22:48.081986] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.224 [2024-10-09 11:22:48.096834] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.224 [2024-10-09 11:22:48.096849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.224 [2024-10-09 11:22:48.109958] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.224 [2024-10-09 11:22:48.109977] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.224 [2024-10-09 11:22:48.122783] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.224 [2024-10-09 11:22:48.122799] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.224 [2024-10-09 11:22:48.136913] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.224 [2024-10-09 11:22:48.136928] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.224 [2024-10-09 11:22:48.149660] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.224 [2024-10-09 11:22:48.149675] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.224 [2024-10-09 11:22:48.161969] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.224 [2024-10-09 11:22:48.161984] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.224 [2024-10-09 11:22:48.177136] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.224 [2024-10-09 11:22:48.177152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.224 [2024-10-09 11:22:48.189723] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.224 [2024-10-09 11:22:48.189738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.224 18757.50 IOPS, 146.54 MiB/s [2024-10-09T09:22:48.226Z] [2024-10-09 11:22:48.201610] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.224 [2024-10-09 11:22:48.201625] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.224 [2024-10-09 11:22:48.214425] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.224 [2024-10-09 11:22:48.214440] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.485 [2024-10-09 11:22:48.229031] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.485 [2024-10-09 11:22:48.229047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.485 [2024-10-09 11:22:48.242260] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.485 [2024-10-09 11:22:48.242276] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.485 [2024-10-09 11:22:48.257140] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.485 [2024-10-09 11:22:48.257155] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.485 [2024-10-09 
11:22:48.270247] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.485 [2024-10-09 11:22:48.270262] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.485 [2024-10-09 11:22:48.285423] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.485 [2024-10-09 11:22:48.285438] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.485 [2024-10-09 11:22:48.298462] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.485 [2024-10-09 11:22:48.298481] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.485 [2024-10-09 11:22:48.312762] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.485 [2024-10-09 11:22:48.312777] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.485 [2024-10-09 11:22:48.326602] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.485 [2024-10-09 11:22:48.326618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.485 [2024-10-09 11:22:48.341186] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.485 [2024-10-09 11:22:48.341202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.485 [2024-10-09 11:22:48.354427] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.485 [2024-10-09 11:22:48.354441] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.485 [2024-10-09 11:22:48.369124] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.485 [2024-10-09 11:22:48.369143] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.485 [2024-10-09 11:22:48.381991] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.485 [2024-10-09 11:22:48.382006] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.485 [2024-10-09 11:22:48.396848] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.485 [2024-10-09 11:22:48.396863] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.485 [2024-10-09 11:22:48.409722] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.485 [2024-10-09 11:22:48.409737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.485 [2024-10-09 11:22:48.422126] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.485 [2024-10-09 11:22:48.422140] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.485 [2024-10-09 11:22:48.437580] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.485 [2024-10-09 11:22:48.437595] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.485 [2024-10-09 11:22:48.450218] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.485 [2024-10-09 11:22:48.450233] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.485 [2024-10-09 11:22:48.465249] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.485 [2024-10-09 11:22:48.465264] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.485 [2024-10-09 11:22:48.477931] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.485 [2024-10-09 11:22:48.477946] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.746 [2024-10-09 11:22:48.492847] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.747 [2024-10-09 11:22:48.492863] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.747 [2024-10-09 11:22:48.505759] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.747 [2024-10-09 11:22:48.505775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.747 [2024-10-09 11:22:48.517480] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.747 [2024-10-09 11:22:48.517495] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.747 [2024-10-09 11:22:48.530812] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.747 [2024-10-09 11:22:48.530827] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.747 [2024-10-09 11:22:48.544834] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.747 [2024-10-09 11:22:48.544849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.747 [2024-10-09 11:22:48.557876] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.747 [2024-10-09 11:22:48.557890] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.747 [2024-10-09 11:22:48.569804] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.747 [2024-10-09 11:22:48.569819] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.747 [2024-10-09 11:22:48.582727] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.747 [2024-10-09 11:22:48.582742] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.747 [2024-10-09 11:22:48.596846] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.747 [2024-10-09 11:22:48.596861] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.747 [2024-10-09 11:22:48.610231] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.747 [2024-10-09 11:22:48.610245] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.747 [2024-10-09 11:22:48.625133] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.747 [2024-10-09 11:22:48.625152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.747 [2024-10-09 11:22:48.637997] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.747 [2024-10-09 11:22:48.638012] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.747 [2024-10-09 11:22:48.650315] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.747 [2024-10-09 11:22:48.650330] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.747 [2024-10-09 11:22:48.665100] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.747 [2024-10-09 11:22:48.665115] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.747 [2024-10-09 11:22:48.677964] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.747 [2024-10-09 11:22:48.677978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.747 [2024-10-09 11:22:48.693231] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.747 [2024-10-09 11:22:48.693246] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.747 [2024-10-09 11:22:48.706078] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.747 [2024-10-09 11:22:48.706093] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.747 [2024-10-09 11:22:48.721042] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.747 [2024-10-09 11:22:48.721057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.747 [2024-10-09 11:22:48.733784] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.747 [2024-10-09 11:22:48.733800] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.747 [2024-10-09 11:22:48.745835] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:28.747 [2024-10-09 11:22:48.745851] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.008 [2024-10-09 11:22:48.758956] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.008 [2024-10-09 11:22:48.758971] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.008 [2024-10-09 11:22:48.773560] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.008 [2024-10-09 11:22:48.773576] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.008 [2024-10-09 11:22:48.786397] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.008 [2024-10-09 11:22:48.786411] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.008 [2024-10-09 11:22:48.801786] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.008 [2024-10-09 11:22:48.801801] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.008 [2024-10-09 11:22:48.814263] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.008 [2024-10-09 11:22:48.814277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.008 [2024-10-09 11:22:48.828772] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.008 [2024-10-09 11:22:48.828786] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.008 [2024-10-09 11:22:48.841816] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.008 [2024-10-09 11:22:48.841831] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.008 [2024-10-09 11:22:48.854107] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.008 [2024-10-09 11:22:48.854122] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.008 [2024-10-09 11:22:48.869231] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.008 [2024-10-09 11:22:48.869246] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.008 [2024-10-09 11:22:48.882249] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.008 [2024-10-09 11:22:48.882264] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.008 [2024-10-09 11:22:48.896872] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.008 [2024-10-09 11:22:48.896887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.008 [2024-10-09 11:22:48.909833] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.008 [2024-10-09 11:22:48.909848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.008 [2024-10-09 11:22:48.921973] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.008 [2024-10-09 11:22:48.921988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.008 [2024-10-09 11:22:48.936913] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.008 [2024-10-09 11:22:48.936928] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.008 [2024-10-09 11:22:48.949581] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.008 [2024-10-09 11:22:48.949596] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.008 [2024-10-09 11:22:48.962643] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.008 [2024-10-09 11:22:48.962658] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.008 [2024-10-09 11:22:48.976660] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.008 [2024-10-09 11:22:48.976675] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.008 [2024-10-09 11:22:48.989990] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.008 [2024-10-09 11:22:48.990005] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.008 [2024-10-09 11:22:49.004746] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.008 [2024-10-09 11:22:49.004761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.269 [2024-10-09 11:22:49.017632] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.269 [2024-10-09 11:22:49.017648] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.269 [2024-10-09 11:22:49.030460] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.269 [2024-10-09 11:22:49.030479] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.269 [2024-10-09 11:22:49.044924] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.269 [2024-10-09 11:22:49.044939] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.269 [2024-10-09 11:22:49.057910] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.269 [2024-10-09 11:22:49.057925] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.269 [2024-10-09 11:22:49.069939] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.269 [2024-10-09 11:22:49.069953] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.269 [2024-10-09 11:22:49.085291] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.269 [2024-10-09 11:22:49.085306] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.269 [2024-10-09 11:22:49.098260] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.269 [2024-10-09 11:22:49.098275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.269 [2024-10-09 11:22:49.113233] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.269 [2024-10-09 11:22:49.113248] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.269 [2024-10-09 11:22:49.126079] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.269 [2024-10-09 11:22:49.126095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.269 [2024-10-09 11:22:49.138344] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.269 [2024-10-09 11:22:49.138359] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.269 [2024-10-09 11:22:49.152903] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.269 [2024-10-09 11:22:49.152918] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.269 [2024-10-09 11:22:49.165700] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.269 [2024-10-09 11:22:49.165715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.269 [2024-10-09 11:22:49.177405] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.269 [2024-10-09 11:22:49.177420] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.270 [2024-10-09 11:22:49.190264] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.270 [2024-10-09 11:22:49.190279] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.270 18768.80 IOPS, 146.63 MiB/s 00:42:29.270 Latency(us) 00:42:29.270 [2024-10-09T09:22:49.272Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:29.270 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:42:29.270 Nvme1n1 : 5.00 18778.03 146.70 0.00 0.00 6810.88 2586.52 11878.81 00:42:29.270 [2024-10-09T09:22:49.272Z] =================================================================================================================== 00:42:29.270 [2024-10-09T09:22:49.272Z] Total : 18778.03 146.70 0.00 0.00 6810.88 2586.52 11878.81 00:42:29.270 [2024-10-09 11:22:49.201798] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:29.270 [2024-10-09 11:22:49.201813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:29.270 [2024-10-09 11:22:49.213797] 
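Note on the error flood above: the zcopy test is repeatedly issuing nvmf_subsystem_add_ns for NSID 1 while that NSID is still attached to nqn.2016-06.io.spdk:cnode1, so spdk_nvmf_subsystem_add_ns_ext rejects every attempt. A minimal sketch of the collision, assuming only a running target with that subsystem and a bdev named malloc0 (scripts/rpc.py is SPDK's stock RPC client; the rpc_cmd seen in the traces wraps it):

    # First add succeeds; an identical second call fails exactly as logged above.
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # -> Requested NSID 1 already in use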
00:42:29.270 [2024-10-09 11:22:49.213797] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:42:29.270 [2024-10-09 11:22:49.213809] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:42:29.270 [2024-10-09 11:22:49.225802] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:42:29.270 [2024-10-09 11:22:49.225815] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:42:29.270 [2024-10-09 11:22:49.237798] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:42:29.270 [2024-10-09 11:22:49.237811] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:42:29.270 [2024-10-09 11:22:49.249796] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:42:29.270 [2024-10-09 11:22:49.249807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:42:29.270 [2024-10-09 11:22:49.261792] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:42:29.270 [2024-10-09 11:22:49.261804] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:42:29.531 [2024-10-09 11:22:49.273792] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:42:29.531 [2024-10-09 11:22:49.273803] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:42:29.531 [2024-10-09 11:22:49.285794] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:42:29.531 [2024-10-09 11:22:49.285804] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:42:29.531 [2024-10-09 11:22:49.297791] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:42:29.531 [2024-10-09 11:22:49.297799] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:42:29.531 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2202584) - No such process
00:42:29.531 11:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2202584
00:42:29.531 11:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:42:29.531 11:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:42:29.531 11:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:42:29.531 11:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:42:29.531 11:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:42:29.531 11:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:42:29.531 11:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:42:29.531 delay0
00:42:29.531 11:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:42:29.531 11:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:42:29.531 11:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:42:29.531 11:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:42:29.531 11:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
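Note: the three zcopy.sh RPCs traced above (script lines 52-54) swap NSID 1's backing device for a delay bdev before the abort run. A hedged, annotated restatement of the same sequence; reading the four numeric bdev_delay_create flags as average read, p99 read, average write, and p99 write latency in microseconds follows the usual rpc.py option names and is an assumption here:

    rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1          # detach NSID 1 from the subsystem
    rpc.py bdev_delay_create -b malloc0 -d delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000                       # ~1 s injected latency on reads and writes
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1   # reattach as NSID 1, now behind delay0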
00:42:29.531 11:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:42:29.793 [2024-10-09 11:22:49.573672] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:42:36.374 Initializing NVMe Controllers
00:42:36.374 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:42:36.374 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:42:36.374 Initialization complete. Launching workers.
00:42:36.374 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 290, failed: 11232
00:42:36.374 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 11449, failed to submit 73
00:42:36.374 success 11309, unsuccessful 140, failed 0
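For the record, the abort example's counters above line up: 290 completed plus 11232 failed is 11522 I/Os issued, which equals the 11449 aborts submitted plus the 73 that failed to submit, and the submitted aborts split cleanly into their outcome buckets:

    # Arithmetic check of the counters reported above.
    echo $(( 290 + 11232 ))       # 11522 I/Os = 11449 aborts submitted + 73 failed to submit
    echo $(( 11309 + 140 + 0 ))   # 11449 = success + unsuccessful + failed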
00:42:36.374 11:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:42:36.374 11:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:42:36.374 11:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@514 -- # nvmfcleanup
00:42:36.374 11:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:42:36.374 11:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:42:36.374 11:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
00:42:36.374 11:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
00:42:36.374 11:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:42:36.374 rmmod nvme_tcp
00:42:36.374 rmmod nvme_fabrics
00:42:36.374 rmmod nvme_keyring
00:42:36.375 11:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:42:36.375 11:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
00:42:36.375 11:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
00:42:36.375 11:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@515 -- # '[' -n 2200537 ']'
00:42:36.375 11:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # killprocess 2200537
00:42:36.375 11:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 2200537 ']'
00:42:36.375 11:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 2200537
00:42:36.375 11:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname
00:42:36.375 11:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:42:36.375 11:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2200537
00:42:36.375 11:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:42:36.375 11:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:42:36.375 11:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2200537'
00:42:36.375 killing process with pid 2200537
00:42:36.375 11:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 2200537
00:42:36.375 11:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 2200537
00:42:36.635 11:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:42:36.635 11:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:42:36.635 11:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:42:36.635 11:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr
00:42:36.635 11:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-save
00:42:36.635 11:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:42:36.635 11:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-restore
00:42:36.635 11:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:42:36.635 11:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns
00:42:36.635 11:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:42:36.635 11:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:42:36.635 11:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:42:38.672 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:42:38.672
00:42:38.672 real 0m33.810s
00:42:38.672 user 0m43.362s
00:42:38.672 sys 0m11.736s
00:42:38.672 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable
00:42:38.672 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:42:38.672 ************************************
00:42:38.672 END TEST nvmf_zcopy
00:42:38.672 ************************************
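That completes the zcopy teardown traced above: the kernel nvme-tcp, nvme-fabrics, and nvme-keyring modules are unloaded, the target process (pid 2200537) is killed and reaped, and the iptr helper flushes the firewall rules the test had tagged. Judging from the three commands traced at nvmf/common.sh line 789, the flush amounts to the following pipeline (a sketch of the effect, not the exact helper):

    # Drop every iptables rule carrying the SPDK_NVMF tag, reload the rest unchanged.
    iptables-save | grep -v SPDK_NVMF | iptables-restore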
00:42:38.672 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode
00:42:38.672 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:42:38.672 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable
00:42:38.672 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:42:38.672 ************************************
00:42:38.672 START TEST nvmf_nmic
00:42:38.672 ************************************
00:42:38.672 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode
00:42:38.934 * Looking for test storage...
00:42:38.934 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:42:38.934 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:42:38.934 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version
00:42:38.934 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:42:38.934 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:42:38.934 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:42:38.934 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l
00:42:38.934 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l
00:42:38.934 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-:
00:42:38.934 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1
00:42:38.934 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-:
00:42:38.934 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2
00:42:38.934 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<'
00:42:38.934 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2
00:42:38.934 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1
00:42:38.934 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:42:38.934 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in
00:42:38.934 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1
00:42:38.934 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 ))
00:42:38.934 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:42:38.934 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1
00:42:38.934 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1
00:42:38.934 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:42:38.934 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1
00:42:38.934 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1
00:42:38.934 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2
00:42:38.934 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2
00:42:38.934 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:42:38.934 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2
00:42:38.934 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2
00:42:38.934 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:42:38.934 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:42:38.934 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0
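The shell trace above is scripts/common.sh concluding that the installed lcov (1.15) predates version 2: cmp_versions splits both version strings on '.', '-' and ':' and compares them element-wise as integers. A simplified, self-contained sketch of that logic (not the exact autotest implementation, which also supports '>', '=', and related operators through the same loop):

    # version_lt VER1 VER2 -> exit status 0 iff VER1 sorts strictly before VER2.
    version_lt() {
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # larger component: not less-than
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # smaller component: less-than
        done
        return 1                                              # all components equal: not less-than
    }
    version_lt 1.15 2 && echo "lcov 1.15 predates 2"          # prints the message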
ver1_l : ver2_l) )) 00:42:38.934 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:42:38.934 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:42:38.934 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:38.934 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:42:38.934 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:42:38.934 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:42:38.934 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:42:38.934 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:38.934 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:42:38.934 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:42:38.934 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:38.934 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:38.934 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:42:38.934 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:38.934 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:42:38.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:38.934 --rc genhtml_branch_coverage=1 00:42:38.934 --rc genhtml_function_coverage=1 00:42:38.934 --rc genhtml_legend=1 00:42:38.934 --rc geninfo_all_blocks=1 00:42:38.934 --rc geninfo_unexecuted_blocks=1 00:42:38.934 00:42:38.934 ' 00:42:38.934 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:42:38.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:38.934 --rc genhtml_branch_coverage=1 00:42:38.934 --rc genhtml_function_coverage=1 00:42:38.934 --rc genhtml_legend=1 00:42:38.934 --rc geninfo_all_blocks=1 00:42:38.934 --rc geninfo_unexecuted_blocks=1 00:42:38.934 00:42:38.934 ' 00:42:38.934 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:42:38.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:38.934 --rc genhtml_branch_coverage=1 00:42:38.934 --rc genhtml_function_coverage=1 00:42:38.934 --rc genhtml_legend=1 00:42:38.934 --rc geninfo_all_blocks=1 00:42:38.934 --rc geninfo_unexecuted_blocks=1 00:42:38.934 00:42:38.934 ' 00:42:38.934 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:42:38.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:38.934 --rc genhtml_branch_coverage=1 00:42:38.934 --rc genhtml_function_coverage=1 00:42:38.934 --rc genhtml_legend=1 00:42:38.934 --rc geninfo_all_blocks=1 00:42:38.935 --rc geninfo_unexecuted_blocks=1 00:42:38.935 00:42:38.935 ' 00:42:38.935 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:38.935 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:42:38.935 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:38.935 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:38.935 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:38.935 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:38.935 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:38.935 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:38.935 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:38.935 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:38.935 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:38.935 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:38.935 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:42:38.935 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:42:38.935 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:38.935 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:38.935 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:38.935 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:38.935 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:38.935 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:42:38.935 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:38.935 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:38.935 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:38.935 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:38.935 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:38.935 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:38.935 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:42:38.935 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:38.935 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:42:38.935 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:38.935 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:38.935 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:38.935 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:38.935 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:38.935 11:22:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:42:38.935 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:42:38.935 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:38.935 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:38.935 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:38.935 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:42:38.935 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:42:38.935 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:42:38.935 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:42:38.935 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:38.935 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # prepare_net_devs 00:42:38.935 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@436 -- # local -g is_hw=no 00:42:38.935 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # remove_spdk_ns 00:42:38.935 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:38.935 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:38.935 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:38.935 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:42:38.935 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:42:38.935 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:42:38.935 11:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:47.074 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:47.074 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:42:47.074 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:47.074 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:47.074 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:47.074 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:47.074 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:47.074 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:42:47.074 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:47.074 11:23:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:42:47.074 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:42:47.074 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:42:47.074 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:42:47.074 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:42:47.074 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:42:47.074 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:47.074 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:47.074 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:47.074 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:47.074 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:47.074 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:47.074 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:47.074 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:42:47.074 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:47.074 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:47.074 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:47.074 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:47.074 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:42:47.074 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:42:47.074 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:42:47.074 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:42:47.074 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:42:47.074 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:42:47.074 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:47.074 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:42:47.074 Found 0000:31:00.0 (0x8086 - 0x159b) 00:42:47.074 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:47.074 11:23:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:47.074 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:47.074 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:47.074 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:47.074 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:47.074 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:42:47.074 Found 0000:31:00.1 (0x8086 - 0x159b) 00:42:47.074 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:47.074 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:47.074 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:47.074 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:47.074 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:47.074 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:42:47.074 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:42:47.074 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:42:47.074 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:42:47.074 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:47.074 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:42:47.074 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:47.074 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:42:47.074 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:42:47.074 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:47.074 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:42:47.074 Found net devices under 0000:31:00.0: cvl_0_0 00:42:47.074 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:42:47.074 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:42:47.074 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:47.074 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:42:47.074 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:47.074 
11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:42:47.074 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:42:47.074 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:47.074 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:42:47.074 Found net devices under 0000:31:00.1: cvl_0_1 00:42:47.074 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:42:47.074 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:42:47.074 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # is_hw=yes 00:42:47.074 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:42:47.074 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:42:47.074 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:42:47.074 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:47.074 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:47.074 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:47.074 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:47.074 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:47.074 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:47.074 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:47.074 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:47.074 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:47.074 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:47.074 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:47.074 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:47.074 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:47.074 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:47.074 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:47.075 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:47.075 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
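The nvmf_tcp_init steps here and immediately below build the two-endpoint topology the TCP tests run on: the initiator keeps cvl_0_1 (10.0.0.1) in the default namespace, while the target port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2, so host and target traffic cross a real E810 link rather than loopback. Condensed from the commands in this run (ipts is the test helper that appends the SPDK_NVMF comment tag so teardown can find the rule again):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port leaves the default ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port toward the initiator interface, tagged for cleanup
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

A single ping in each direction then proves the path before any NVMe traffic is attempted.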
00:42:47.075 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:47.075 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:47.075 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:47.075 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:47.075 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:47.075 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:47.075 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:47.075 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.453 ms 00:42:47.075 00:42:47.075 --- 10.0.0.2 ping statistics --- 00:42:47.075 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:47.075 rtt min/avg/max/mdev = 0.453/0.453/0.453/0.000 ms 00:42:47.075 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:47.075 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:42:47.075 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.250 ms 00:42:47.075 00:42:47.075 --- 10.0.0.1 ping statistics --- 00:42:47.075 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:47.075 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:42:47.075 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:47.075 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@448 -- # return 0 00:42:47.075 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:42:47.075 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:47.075 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:42:47.075 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:42:47.075 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:47.075 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:42:47.075 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:42:47.075 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:42:47.075 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:42:47.075 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:42:47.075 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:47.075 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # nvmfpid=2209022 00:42:47.075 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@508 -- # waitforlisten 2209022 00:42:47.075 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 2209022 ']' 00:42:47.075 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:47.075 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:42:47.075 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:47.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:47.075 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:42:47.075 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:47.075 11:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:42:47.075 [2024-10-09 11:23:05.919050] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:42:47.075 [2024-10-09 11:23:05.920034] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:42:47.075 [2024-10-09 11:23:05.920073] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:47.075 [2024-10-09 11:23:06.056348] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:42:47.075 [2024-10-09 11:23:06.087688] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:42:47.075 [2024-10-09 11:23:06.106760] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:47.075 [2024-10-09 11:23:06.106793] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:47.075 [2024-10-09 11:23:06.106801] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:47.075 [2024-10-09 11:23:06.106808] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:47.075 [2024-10-09 11:23:06.106814] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:47.075 [2024-10-09 11:23:06.108326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:47.075 [2024-10-09 11:23:06.108441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:42:47.075 [2024-10-09 11:23:06.108596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:47.075 [2024-10-09 11:23:06.108596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:42:47.075 [2024-10-09 11:23:06.156826] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:42:47.075 [2024-10-09 11:23:06.156946] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
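With nvmf_tgt started inside the namespace in interrupt mode (-i 0 -e 0xFFFF --interrupt-mode -m 0xF, reactors on cores 0-3 and every poll-group thread switched to interrupt mode), everything that follows is driven over JSON-RPC on /var/tmp/spdk.sock. rpc_cmd forwards its arguments to that socket; the equivalent scripts/rpc.py calls for the setup below, with flags copied verbatim from this run, would be roughly:

    # TCP transport; -o and -u 8192 come straight from NVMF_TRANSPORT_OPTS
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    # 64 MiB malloc bdev with 512-byte blocks to serve as the namespace
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    # subsystem cnode1: allow any host (-a), serial SPDKISFASTANDAWESOME (-s)
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Test case 1 then creates a second subsystem (cnode2) and tries to add the same Malloc0 to it; the bdev is already claimed exclusive_write by the first subsystem, so the add fails with JSON-RPC error -32602, which is the expected result.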
00:42:47.075 [2024-10-09 11:23:06.157831] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:42:47.075 [2024-10-09 11:23:06.158498] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:42:47.075 [2024-10-09 11:23:06.158586] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:42:47.075 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:42:47.075 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:42:47.075 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:42:47.075 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:42:47.075 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:47.075 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:47.075 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:42:47.075 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:47.075 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:47.075 [2024-10-09 11:23:06.729320] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:47.075 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:47.075 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:42:47.075 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:47.075 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:47.075 Malloc0 00:42:47.075 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:47.075 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:42:47.075 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:47.075 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:47.075 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:47.075 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:42:47.075 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:47.075 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:47.075 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:47.075 
11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:47.075 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:47.075 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:47.075 [2024-10-09 11:23:06.789193] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:47.075 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:47.075 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:42:47.075 test case1: single bdev can't be used in multiple subsystems 00:42:47.075 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:42:47.075 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:47.075 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:47.075 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:47.076 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:42:47.076 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:47.076 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:47.076 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:47.076 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:42:47.076 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:42:47.076 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:47.076 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:47.076 [2024-10-09 11:23:06.824946] bdev.c:8202:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:42:47.076 [2024-10-09 11:23:06.824965] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:42:47.076 [2024-10-09 11:23:06.824972] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:47.076 request: 00:42:47.076 { 00:42:47.076 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:42:47.076 "namespace": { 00:42:47.076 "bdev_name": "Malloc0", 00:42:47.076 "no_auto_visible": false 00:42:47.076 }, 00:42:47.076 "method": "nvmf_subsystem_add_ns", 00:42:47.076 "req_id": 1 00:42:47.076 } 00:42:47.076 Got JSON-RPC error response 00:42:47.076 response: 00:42:47.076 { 00:42:47.076 "code": -32602, 00:42:47.076 "message": "Invalid parameters" 00:42:47.076 } 00:42:47.076 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 
0 ]] 00:42:47.076 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:42:47.076 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:42:47.076 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:42:47.076 Adding namespace failed - expected result. 00:42:47.076 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:42:47.076 test case2: host connect to nvmf target in multiple paths 00:42:47.076 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:42:47.076 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:47.076 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:47.076 [2024-10-09 11:23:06.837056] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:42:47.076 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:47.076 11:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:42:47.336 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:42:47.908 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:42:47.908 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:42:47.908 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:42:47.908 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:42:47.908 11:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:42:49.821 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:42:49.821 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:42:49.821 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:42:49.821 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:42:49.821 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:42:49.821 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:42:49.821 11:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:42:49.821 [global] 00:42:49.821 thread=1 00:42:49.821 invalidate=1 00:42:49.821 rw=write 00:42:49.821 time_based=1 00:42:49.821 runtime=1 00:42:49.821 ioengine=libaio 00:42:49.821 direct=1 00:42:49.821 bs=4096 00:42:49.821 iodepth=1 00:42:49.821 norandommap=0 00:42:49.821 numjobs=1 00:42:49.821 00:42:49.821 verify_dump=1 00:42:49.821 verify_backlog=512 00:42:49.821 verify_state_save=0 00:42:49.821 do_verify=1 00:42:49.821 verify=crc32c-intel 00:42:49.821 [job0] 00:42:49.821 filename=/dev/nvme0n1 00:42:49.821 Could not set queue depth (nvme0n1) 00:42:50.082 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:42:50.082 fio-3.35 00:42:50.082 Starting 1 thread 00:42:51.468 00:42:51.468 job0: (groupid=0, jobs=1): err= 0: pid=2210110: Wed Oct 9 11:23:11 2024 00:42:51.468 read: IOPS=553, BW=2214KiB/s (2267kB/s)(2216KiB/1001msec) 00:42:51.468 slat (nsec): min=6315, max=49025, avg=23814.81, stdev=8616.71 00:42:51.468 clat (usec): min=247, max=928, avg=687.85, stdev=89.26 00:42:51.468 lat (usec): min=254, max=955, avg=711.66, stdev=93.30 00:42:51.468 clat percentiles (usec): 00:42:51.468 | 1.00th=[ 461], 5.00th=[ 537], 10.00th=[ 562], 20.00th=[ 627], 00:42:51.468 | 30.00th=[ 652], 40.00th=[ 668], 50.00th=[ 693], 60.00th=[ 734], 00:42:51.468 | 70.00th=[ 750], 80.00th=[ 766], 90.00th=[ 791], 95.00th=[ 807], 00:42:51.468 | 99.00th=[ 857], 99.50th=[ 881], 99.90th=[ 930], 99.95th=[ 930], 00:42:51.468 | 99.99th=[ 930] 00:42:51.468 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:42:51.468 slat (usec): min=9, max=29807, avg=60.88, stdev=930.54 00:42:51.468 clat (usec): min=148, max=738, avg=517.44, stdev=101.57 00:42:51.468 lat (usec): min=159, max=30433, avg=578.32, stdev=939.93 00:42:51.468 clat percentiles (usec): 00:42:51.468 | 1.00th=[ 269], 5.00th=[ 338], 10.00th=[ 375], 20.00th=[ 437], 00:42:51.468 | 30.00th=[ 474], 40.00th=[ 498], 50.00th=[ 529], 60.00th=[ 553], 00:42:51.468 | 70.00th=[ 578], 80.00th=[ 611], 90.00th=[ 644], 95.00th=[ 668], 00:42:51.468 | 99.00th=[ 709], 99.50th=[ 717], 99.90th=[ 742], 99.95th=[ 742], 00:42:51.468 | 99.99th=[ 742] 00:42:51.468 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:42:51.468 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:42:51.468 lat (usec) : 250=0.44%, 500=27.19%, 750=62.55%, 1000=9.82% 00:42:51.468 cpu : usr=3.40%, sys=5.70%, ctx=1582, majf=0, minf=1 00:42:51.468 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:51.468 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:51.468 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:51.468 issued rwts: total=554,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:51.468 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:51.468 00:42:51.468 Run status group 0 (all jobs): 00:42:51.468 READ: bw=2214KiB/s (2267kB/s), 2214KiB/s-2214KiB/s (2267kB/s-2267kB/s), io=2216KiB (2269kB), run=1001-1001msec 00:42:51.468 WRITE: bw=4092KiB/s (4190kB/s), 4092KiB/s-4092KiB/s (4190kB/s-4190kB/s), io=4096KiB (4194kB), run=1001-1001msec 00:42:51.468 00:42:51.468 Disk stats (read/write): 00:42:51.468 nvme0n1: ios=537/878, merge=0/0, ticks=1281/355, in_queue=1636, util=98.40% 00:42:51.468 11:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n 
nqn.2016-06.io.spdk:cnode1 00:42:51.468 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:42:51.468 11:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:42:51.468 11:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:42:51.468 11:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:42:51.468 11:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:42:51.468 11:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:42:51.468 11:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:42:51.468 11:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:42:51.468 11:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:42:51.468 11:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:42:51.469 11:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@514 -- # nvmfcleanup 00:42:51.469 11:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:42:51.469 11:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:51.469 11:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:42:51.469 11:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:51.469 11:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:51.469 rmmod nvme_tcp 00:42:51.469 rmmod nvme_fabrics 00:42:51.469 rmmod nvme_keyring 00:42:51.469 11:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:51.469 11:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:42:51.469 11:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:42:51.469 11:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@515 -- # '[' -n 2209022 ']' 00:42:51.469 11:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # killprocess 2209022 00:42:51.469 11:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 2209022 ']' 00:42:51.469 11:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 2209022 00:42:51.469 11:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:42:51.469 11:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:42:51.469 11:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2209022 00:42:51.729 11:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:42:51.729 11:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = 
sudo ']' 00:42:51.729 11:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2209022' 00:42:51.729 killing process with pid 2209022 00:42:51.729 11:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 2209022 00:42:51.729 11:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 2209022 00:42:51.729 11:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:42:51.729 11:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:42:51.729 11:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:42:51.729 11:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:42:51.729 11:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-save 00:42:51.729 11:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:42:51.729 11:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-restore 00:42:51.729 11:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:51.729 11:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:51.729 11:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:51.729 11:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:51.729 11:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:54.301 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:54.301 00:42:54.301 real 0m15.113s 00:42:54.301 user 0m31.422s 00:42:54.301 sys 0m7.242s 00:42:54.301 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:54.302 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:54.302 ************************************ 00:42:54.302 END TEST nvmf_nmic 00:42:54.302 ************************************ 00:42:54.302 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:42:54.302 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:42:54.302 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:54.302 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:42:54.302 ************************************ 00:42:54.302 START TEST nvmf_fio_target 00:42:54.302 ************************************ 00:42:54.302 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:42:54.302 * Looking for test storage... 
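Host-side cleanup for nmic mirrors the setup: disconnect by NQN, wait for the serial to disappear from lsblk, unload the NVMe host modules, and kill the target. A condensed sketch with the identifiers from this run ($nvmfpid standing in for the nvmfappstart PID, 2209022 here); note the disconnect reports 2 controller(s) because the host had connected through both listeners, 4420 and 4421, for the multipath case:

    nvme disconnect -n nqn.2016-06.io.spdk:cnode1      # drops both paths at once
    # waitforserial_disconnect, roughly: poll until the serial is gone
    while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do sleep 1; done
    modprobe -v -r nvme-tcp        # pulls nvme_tcp, nvme_fabrics, nvme_keyring as shown above
    kill "$nvmfpid" && wait "$nvmfpid"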
00:42:54.302 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:54.302 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:42:54.302 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:42:54.302 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:42:54.302 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:42:54.302 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:54.302 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:54.302 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:54.302 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:42:54.302 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:42:54.302 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:42:54.302 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:42:54.302 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:42:54.302 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:42:54.302 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:42:54.302 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:54.302 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:42:54.302 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:42:54.302 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:54.302 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:54.302 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:42:54.302 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:42:54.302 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:54.302 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:42:54.302 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:42:54.302 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:42:54.302 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:42:54.302 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:54.302 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:42:54.302 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:42:54.302 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:54.302 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:54.302 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:42:54.302 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:54.302 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:42:54.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:54.302 --rc genhtml_branch_coverage=1 00:42:54.302 --rc genhtml_function_coverage=1 00:42:54.302 --rc genhtml_legend=1 00:42:54.302 --rc geninfo_all_blocks=1 00:42:54.302 --rc geninfo_unexecuted_blocks=1 00:42:54.302 00:42:54.302 ' 00:42:54.302 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:42:54.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:54.302 --rc genhtml_branch_coverage=1 00:42:54.302 --rc genhtml_function_coverage=1 00:42:54.302 --rc genhtml_legend=1 00:42:54.302 --rc geninfo_all_blocks=1 00:42:54.302 --rc geninfo_unexecuted_blocks=1 00:42:54.302 00:42:54.302 ' 00:42:54.302 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:42:54.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:54.302 --rc genhtml_branch_coverage=1 00:42:54.302 --rc genhtml_function_coverage=1 00:42:54.302 --rc genhtml_legend=1 00:42:54.302 --rc geninfo_all_blocks=1 00:42:54.302 --rc geninfo_unexecuted_blocks=1 00:42:54.302 00:42:54.302 ' 00:42:54.302 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:42:54.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:54.302 --rc genhtml_branch_coverage=1 00:42:54.302 --rc genhtml_function_coverage=1 00:42:54.302 --rc genhtml_legend=1 00:42:54.302 --rc geninfo_all_blocks=1 00:42:54.302 --rc geninfo_unexecuted_blocks=1 00:42:54.302 
00:42:54.302 ' 00:42:54.302 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:54.302 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:42:54.302 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:54.302 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:54.302 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:54.302 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:54.302 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:54.302 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:54.302 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:54.302 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:54.302 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:54.302 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:54.302 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:42:54.302 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:42:54.302 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:54.302 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:54.302 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:54.302 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:54.302 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:54.302 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:42:54.302 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:54.302 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:54.302 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:54.302 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:54.302 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:54.302 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:54.302 11:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:42:54.303 11:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:54.303 11:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:42:54.303 11:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:54.303 11:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:54.303 11:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:54.303 11:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:54.303 11:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:42:54.303 11:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:42:54.303 11:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:42:54.303 11:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:54.303 11:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:54.303 11:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:54.303 11:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:42:54.303 11:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:42:54.303 11:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:42:54.303 11:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:42:54.303 11:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:42:54.303 11:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:54.303 11:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:42:54.303 11:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:42:54.303 11:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:42:54.303 11:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:54.303 11:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:54.303 11:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:54.303 11:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:42:54.303 11:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:42:54.303 11:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:42:54.303 11:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:43:02.443 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:02.443 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:43:02.443 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:43:02.443 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:43:02.443 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:43:02.443 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:43:02.443 11:23:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:43:02.443 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:43:02.443 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:43:02.443 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:43:02.443 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:43:02.443 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:43:02.443 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:43:02.443 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:43:02.443 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:43:02.443 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:02.443 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:02.443 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:02.443 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:02.443 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:02.443 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:02.443 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:02.443 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:43:02.443 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:02.443 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:02.443 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:02.443 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:02.443 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:43:02.443 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:43:02.443 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:43:02.443 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:43:02.443 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:43:02.443 11:23:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:43:02.443 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:02.443 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:43:02.443 Found 0000:31:00.0 (0x8086 - 0x159b) 00:43:02.443 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:02.443 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:02.444 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:02.444 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:02.444 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:02.444 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:02.444 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:43:02.444 Found 0000:31:00.1 (0x8086 - 0x159b) 00:43:02.444 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:02.444 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:02.444 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:02.444 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:02.444 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:02.444 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:43:02.444 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:43:02.444 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:43:02.444 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:43:02.444 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:02.444 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:43:02.444 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:02.444 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:43:02.444 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:43:02.444 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:02.444 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:43:02.444 Found net 
devices under 0000:31:00.0: cvl_0_0 00:43:02.444 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:43:02.444 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:43:02.444 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:02.444 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:43:02.444 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:02.444 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:43:02.444 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:43:02.444 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:02.444 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:43:02.444 Found net devices under 0000:31:00.1: cvl_0_1 00:43:02.444 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:43:02.444 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:43:02.444 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # is_hw=yes 00:43:02.444 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:43:02.444 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:43:02.444 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:43:02.444 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:02.444 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:02.444 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:02.444 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:43:02.444 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:43:02.444 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:43:02.444 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:43:02.444 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:43:02.444 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:43:02.444 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:43:02.444 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:43:02.444 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:43:02.444 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:43:02.444 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:43:02.444 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:43:02.444 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:43:02.444 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:43:02.444 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:43:02.444 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:43:02.444 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:43:02.444 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:43:02.444 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:43:02.444 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:43:02.444 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:43:02.444 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.666 ms 00:43:02.444 00:43:02.444 --- 10.0.0.2 ping statistics --- 00:43:02.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:02.444 rtt min/avg/max/mdev = 0.666/0.666/0.666/0.000 ms 00:43:02.444 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:43:02.444 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:43:02.444 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.264 ms 00:43:02.444 00:43:02.444 --- 10.0.0.1 ping statistics --- 00:43:02.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:02.444 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:43:02.444 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:02.444 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@448 -- # return 0 00:43:02.444 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:43:02.444 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:02.444 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:43:02.444 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:43:02.444 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:02.444 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:43:02.444 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:43:02.444 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:43:02.444 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:43:02.444 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:43:02.444 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:43:02.444 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # nvmfpid=2214579 00:43:02.444 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # waitforlisten 2214579 00:43:02.444 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:43:02.444 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 2214579 ']' 00:43:02.444 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:02.444 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:43:02.444 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:02.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
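The nvmf_tcp_init sequence traced above turns the two ice ports into a back-to-back test link by moving one of them into a private network namespace. Condensed into a minimal sketch (device names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addressing are taken directly from the trace; run as root):

  NS=cvl_0_0_ns_spdk
  ip netns add "$NS"                                        # namespace that will host the target port
  ip link set cvl_0_0 netns "$NS"                           # move the target-side port out of the root namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side keeps 10.0.0.1
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target side gets 10.0.0.2
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP listener port
  ping -c 1 10.0.0.2                                        # initiator -> target reachability
  ip netns exec "$NS" ping -c 1 10.0.0.1                    # target -> initiator reachability

The target application is then launched inside this namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt), while rpc.py keeps reaching it over the UNIX socket /var/tmp/spdk.sock, which is unaffected by the network namespace.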
00:43:02.444 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:43:02.444 11:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:43:02.445 [2024-10-09 11:23:21.420171] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:43:02.445 [2024-10-09 11:23:21.421147] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:43:02.445 [2024-10-09 11:23:21.421183] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:02.445 [2024-10-09 11:23:21.560220] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:43:02.445 [2024-10-09 11:23:21.591678] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:43:02.445 [2024-10-09 11:23:21.609234] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:02.445 [2024-10-09 11:23:21.609265] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:02.445 [2024-10-09 11:23:21.609272] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:02.445 [2024-10-09 11:23:21.609279] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:02.445 [2024-10-09 11:23:21.609285] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:43:02.445 [2024-10-09 11:23:21.610774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:43:02.445 [2024-10-09 11:23:21.610790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:43:02.445 [2024-10-09 11:23:21.610925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:02.445 [2024-10-09 11:23:21.610927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:43:02.445 [2024-10-09 11:23:21.659013] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:43:02.445 [2024-10-09 11:23:21.659248] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:43:02.445 [2024-10-09 11:23:21.660146] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:43:02.445 [2024-10-09 11:23:21.660538] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:43:02.445 [2024-10-09 11:23:21.660673] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
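With the target running in interrupt mode on four reactors, target/fio.sh provisions it entirely through rpc.py before connecting the initiator. The trace that follows condenses to roughly this sketch (the rpc.py path is shortened into a variable and the seven identical bdev_malloc_create calls are folded into a loop; all commands and arguments are as traced below):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192                      # TCP transport with the options traced below
  for i in 0 1 2 3 4 5 6; do $RPC bdev_malloc_create 64 512; done   # Malloc0..Malloc6: 64 MiB bdevs, 512 B blocks
  $RPC bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'                   # striped raid0 over two bdevs
  $RPC bdev_raid_create -n concat0 -z 64 -r concat -b 'Malloc4 Malloc5 Malloc6'    # concatenation over three bdevs
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  for bdev in Malloc0 Malloc1 raid0 concat0; do                     # four namespaces -> nvme0n1..nvme0n4 on the host
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 $bdev
  done
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  nvme connect --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID \
       -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420

waitforserial then polls lsblk until four block devices report the serial SPDKISFASTANDAWESOME, at which point the fio runs begin.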
00:43:02.445 11:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:43:02.445 11:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:43:02.445 11:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:43:02.445 11:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:43:02.445 11:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:43:02.445 11:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:02.445 11:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:43:02.707 [2024-10-09 11:23:22.467858] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:02.707 11:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:43:02.968 11:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:43:02.968 11:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:43:02.968 11:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:43:02.968 11:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:43:03.229 11:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:43:03.229 11:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:43:03.491 11:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:43:03.491 11:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:43:03.491 11:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:43:03.752 11:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:43:03.752 11:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:43:04.013 11:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:43:04.013 11:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:43:04.013 11:23:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:43:04.013 11:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:43:04.275 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:43:04.536 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:43:04.536 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:43:04.536 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:43:04.536 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:43:04.798 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:05.059 [2024-10-09 11:23:24.823633] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:05.059 11:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:43:05.059 11:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:43:05.319 11:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:43:05.892 11:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:43:05.892 11:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:43:05.892 11:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:43:05.892 11:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:43:05.892 11:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:43:05.892 11:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:43:07.804 11:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:43:07.804 11:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o 
NAME,SERIAL 00:43:07.804 11:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:43:07.804 11:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:43:07.804 11:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:43:07.804 11:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:43:07.804 11:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:43:07.804 [global] 00:43:07.804 thread=1 00:43:07.804 invalidate=1 00:43:07.804 rw=write 00:43:07.804 time_based=1 00:43:07.804 runtime=1 00:43:07.804 ioengine=libaio 00:43:07.804 direct=1 00:43:07.804 bs=4096 00:43:07.804 iodepth=1 00:43:07.804 norandommap=0 00:43:07.804 numjobs=1 00:43:07.804 00:43:07.804 verify_dump=1 00:43:07.804 verify_backlog=512 00:43:07.804 verify_state_save=0 00:43:07.804 do_verify=1 00:43:07.804 verify=crc32c-intel 00:43:07.804 [job0] 00:43:07.804 filename=/dev/nvme0n1 00:43:07.804 [job1] 00:43:07.804 filename=/dev/nvme0n2 00:43:07.804 [job2] 00:43:07.804 filename=/dev/nvme0n3 00:43:07.804 [job3] 00:43:07.804 filename=/dev/nvme0n4 00:43:07.804 Could not set queue depth (nvme0n1) 00:43:07.804 Could not set queue depth (nvme0n2) 00:43:07.804 Could not set queue depth (nvme0n3) 00:43:07.804 Could not set queue depth (nvme0n4) 00:43:08.378 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:08.378 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:08.378 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:08.378 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:08.378 fio-3.35 00:43:08.378 Starting 4 threads 00:43:09.765 00:43:09.765 job0: (groupid=0, jobs=1): err= 0: pid=2216047: Wed Oct 9 11:23:29 2024 00:43:09.765 read: IOPS=16, BW=65.6KiB/s (67.1kB/s)(68.0KiB/1037msec) 00:43:09.765 slat (nsec): min=26089, max=26757, avg=26375.35, stdev=200.00 00:43:09.765 clat (usec): min=40898, max=42217, avg=41857.05, stdev=362.15 00:43:09.765 lat (usec): min=40925, max=42243, avg=41883.43, stdev=362.06 00:43:09.765 clat percentiles (usec): 00:43:09.765 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:43:09.765 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:43:09.765 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:43:09.765 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:43:09.765 | 99.99th=[42206] 00:43:09.765 write: IOPS=493, BW=1975KiB/s (2022kB/s)(2048KiB/1037msec); 0 zone resets 00:43:09.765 slat (nsec): min=10000, max=68501, avg=29034.08, stdev=11041.28 00:43:09.765 clat (usec): min=277, max=1044, avg=598.33, stdev=143.72 00:43:09.765 lat (usec): min=288, max=1078, avg=627.37, stdev=147.21 00:43:09.765 clat percentiles (usec): 00:43:09.765 | 1.00th=[ 306], 5.00th=[ 375], 10.00th=[ 420], 20.00th=[ 469], 00:43:09.765 | 30.00th=[ 510], 40.00th=[ 553], 50.00th=[ 594], 60.00th=[ 627], 00:43:09.765 | 70.00th=[ 668], 80.00th=[ 717], 90.00th=[ 783], 95.00th=[ 857], 
00:43:09.765 | 99.00th=[ 938], 99.50th=[ 955], 99.90th=[ 1045], 99.95th=[ 1045], 00:43:09.765 | 99.99th=[ 1045] 00:43:09.765 bw ( KiB/s): min= 4096, max= 4096, per=45.57%, avg=4096.00, stdev= 0.00, samples=1 00:43:09.765 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:43:09.765 lat (usec) : 500=26.09%, 750=55.95%, 1000=14.56% 00:43:09.765 lat (msec) : 2=0.19%, 50=3.21% 00:43:09.765 cpu : usr=0.97%, sys=1.06%, ctx=531, majf=0, minf=1 00:43:09.765 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:09.765 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:09.765 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:09.765 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:09.765 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:09.765 job1: (groupid=0, jobs=1): err= 0: pid=2216061: Wed Oct 9 11:23:29 2024 00:43:09.765 read: IOPS=19, BW=78.2KiB/s (80.1kB/s)(80.0KiB/1023msec) 00:43:09.765 slat (nsec): min=26900, max=27770, avg=27086.80, stdev=186.00 00:43:09.765 clat (usec): min=885, max=41072, avg=38945.49, stdev=8958.83 00:43:09.765 lat (usec): min=912, max=41099, avg=38972.58, stdev=8958.82 00:43:09.765 clat percentiles (usec): 00:43:09.765 | 1.00th=[ 889], 5.00th=[ 889], 10.00th=[40633], 20.00th=[40633], 00:43:09.765 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:43:09.765 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:43:09.765 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:43:09.765 | 99.99th=[41157] 00:43:09.765 write: IOPS=500, BW=2002KiB/s (2050kB/s)(2048KiB/1023msec); 0 zone resets 00:43:09.765 slat (nsec): min=9044, max=72088, avg=31444.38, stdev=10317.80 00:43:09.765 clat (usec): min=152, max=744, avg=434.63, stdev=125.40 00:43:09.765 lat (usec): min=163, max=800, avg=466.08, stdev=128.26 00:43:09.765 clat percentiles (usec): 00:43:09.765 | 1.00th=[ 192], 5.00th=[ 237], 10.00th=[ 277], 20.00th=[ 326], 00:43:09.765 | 30.00th=[ 351], 40.00th=[ 400], 50.00th=[ 437], 60.00th=[ 461], 00:43:09.765 | 70.00th=[ 502], 80.00th=[ 545], 90.00th=[ 611], 95.00th=[ 660], 00:43:09.765 | 99.00th=[ 709], 99.50th=[ 717], 99.90th=[ 742], 99.95th=[ 742], 00:43:09.765 | 99.99th=[ 742] 00:43:09.765 bw ( KiB/s): min= 4096, max= 4096, per=45.57%, avg=4096.00, stdev= 0.00, samples=1 00:43:09.765 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:43:09.765 lat (usec) : 250=6.95%, 500=59.77%, 750=29.51%, 1000=0.19% 00:43:09.765 lat (msec) : 50=3.57% 00:43:09.765 cpu : usr=1.27%, sys=1.76%, ctx=534, majf=0, minf=1 00:43:09.765 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:09.765 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:09.765 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:09.765 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:09.765 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:09.765 job2: (groupid=0, jobs=1): err= 0: pid=2216078: Wed Oct 9 11:23:29 2024 00:43:09.765 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:43:09.765 slat (nsec): min=6881, max=62007, avg=27547.55, stdev=2972.43 00:43:09.765 clat (usec): min=469, max=1181, avg=944.45, stdev=82.01 00:43:09.765 lat (usec): min=497, max=1209, avg=972.00, stdev=82.62 00:43:09.765 clat percentiles (usec): 00:43:09.765 | 1.00th=[ 635], 5.00th=[ 799], 10.00th=[ 848], 20.00th=[ 898], 
00:43:09.765 | 30.00th=[ 938], 40.00th=[ 955], 50.00th=[ 963], 60.00th=[ 971], 00:43:09.765 | 70.00th=[ 979], 80.00th=[ 996], 90.00th=[ 1020], 95.00th=[ 1045], 00:43:09.765 | 99.00th=[ 1090], 99.50th=[ 1123], 99.90th=[ 1188], 99.95th=[ 1188], 00:43:09.765 | 99.99th=[ 1188] 00:43:09.765 write: IOPS=793, BW=3173KiB/s (3249kB/s)(3176KiB/1001msec); 0 zone resets 00:43:09.765 slat (nsec): min=9392, max=68776, avg=33720.03, stdev=9594.79 00:43:09.765 clat (usec): min=175, max=975, avg=585.44, stdev=136.38 00:43:09.765 lat (usec): min=187, max=1010, avg=619.16, stdev=139.90 00:43:09.765 clat percentiles (usec): 00:43:09.765 | 1.00th=[ 265], 5.00th=[ 347], 10.00th=[ 404], 20.00th=[ 469], 00:43:09.765 | 30.00th=[ 523], 40.00th=[ 553], 50.00th=[ 594], 60.00th=[ 627], 00:43:09.765 | 70.00th=[ 660], 80.00th=[ 709], 90.00th=[ 750], 95.00th=[ 799], 00:43:09.765 | 99.00th=[ 873], 99.50th=[ 947], 99.90th=[ 979], 99.95th=[ 979], 00:43:09.765 | 99.99th=[ 979] 00:43:09.765 bw ( KiB/s): min= 4096, max= 4096, per=45.57%, avg=4096.00, stdev= 0.00, samples=1 00:43:09.765 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:43:09.765 lat (usec) : 250=0.54%, 500=15.24%, 750=39.74%, 1000=37.90% 00:43:09.765 lat (msec) : 2=6.58% 00:43:09.765 cpu : usr=2.60%, sys=5.60%, ctx=1308, majf=0, minf=1 00:43:09.765 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:09.765 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:09.765 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:09.765 issued rwts: total=512,794,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:09.765 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:09.765 job3: (groupid=0, jobs=1): err= 0: pid=2216084: Wed Oct 9 11:23:29 2024 00:43:09.765 read: IOPS=14, BW=59.6KiB/s (61.1kB/s)(60.0KiB/1006msec) 00:43:09.765 slat (nsec): min=26432, max=27187, avg=26757.93, stdev=200.79 00:43:09.765 clat (usec): min=41481, max=42226, avg=41925.75, stdev=167.23 00:43:09.765 lat (usec): min=41508, max=42253, avg=41952.51, stdev=167.17 00:43:09.765 clat percentiles (usec): 00:43:09.765 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:43:09.765 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:43:09.765 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:43:09.765 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:43:09.765 | 99.99th=[42206] 00:43:09.765 write: IOPS=508, BW=2036KiB/s (2085kB/s)(2048KiB/1006msec); 0 zone resets 00:43:09.765 slat (nsec): min=10396, max=55305, avg=33362.20, stdev=8559.71 00:43:09.765 clat (usec): min=319, max=3403, avg=693.64, stdev=191.10 00:43:09.765 lat (usec): min=354, max=3438, avg=727.00, stdev=192.89 00:43:09.765 clat percentiles (usec): 00:43:09.766 | 1.00th=[ 351], 5.00th=[ 457], 10.00th=[ 506], 20.00th=[ 562], 00:43:09.766 | 30.00th=[ 603], 40.00th=[ 644], 50.00th=[ 676], 60.00th=[ 725], 00:43:09.766 | 70.00th=[ 766], 80.00th=[ 824], 90.00th=[ 914], 95.00th=[ 938], 00:43:09.766 | 99.00th=[ 1004], 99.50th=[ 1020], 99.90th=[ 3392], 99.95th=[ 3392], 00:43:09.766 | 99.99th=[ 3392] 00:43:09.766 bw ( KiB/s): min= 4096, max= 4096, per=45.57%, avg=4096.00, stdev= 0.00, samples=1 00:43:09.766 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:43:09.766 lat (usec) : 500=9.49%, 750=55.60%, 1000=30.93% 00:43:09.766 lat (msec) : 2=0.95%, 4=0.19%, 50=2.85% 00:43:09.766 cpu : usr=0.70%, sys=1.69%, ctx=530, majf=0, minf=1 00:43:09.766 IO depths : 
1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:09.766 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:09.766 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:09.766 issued rwts: total=15,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:09.766 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:09.766 00:43:09.766 Run status group 0 (all jobs): 00:43:09.766 READ: bw=2176KiB/s (2228kB/s), 59.6KiB/s-2046KiB/s (61.1kB/s-2095kB/s), io=2256KiB (2310kB), run=1001-1037msec 00:43:09.766 WRITE: bw=8987KiB/s (9203kB/s), 1975KiB/s-3173KiB/s (2022kB/s-3249kB/s), io=9320KiB (9544kB), run=1001-1037msec 00:43:09.766 00:43:09.766 Disk stats (read/write): 00:43:09.766 nvme0n1: ios=39/512, merge=0/0, ticks=1490/298, in_queue=1788, util=96.29% 00:43:09.766 nvme0n2: ios=69/512, merge=0/0, ticks=768/171, in_queue=939, util=100.00% 00:43:09.766 nvme0n3: ios=534/519, merge=0/0, ticks=1403/244, in_queue=1647, util=96.72% 00:43:09.766 nvme0n4: ios=34/512, merge=0/0, ticks=1366/341, in_queue=1707, util=96.68% 00:43:09.766 11:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:43:09.766 [global] 00:43:09.766 thread=1 00:43:09.766 invalidate=1 00:43:09.766 rw=randwrite 00:43:09.766 time_based=1 00:43:09.766 runtime=1 00:43:09.766 ioengine=libaio 00:43:09.766 direct=1 00:43:09.766 bs=4096 00:43:09.766 iodepth=1 00:43:09.766 norandommap=0 00:43:09.766 numjobs=1 00:43:09.766 00:43:09.766 verify_dump=1 00:43:09.766 verify_backlog=512 00:43:09.766 verify_state_save=0 00:43:09.766 do_verify=1 00:43:09.766 verify=crc32c-intel 00:43:09.766 [job0] 00:43:09.766 filename=/dev/nvme0n1 00:43:09.766 [job1] 00:43:09.766 filename=/dev/nvme0n2 00:43:09.766 [job2] 00:43:09.766 filename=/dev/nvme0n3 00:43:09.766 [job3] 00:43:09.766 filename=/dev/nvme0n4 00:43:09.766 Could not set queue depth (nvme0n1) 00:43:09.766 Could not set queue depth (nvme0n2) 00:43:09.766 Could not set queue depth (nvme0n3) 00:43:09.766 Could not set queue depth (nvme0n4) 00:43:10.026 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:10.026 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:10.026 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:10.026 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:10.026 fio-3.35 00:43:10.026 Starting 4 threads 00:43:11.409 00:43:11.409 job0: (groupid=0, jobs=1): err= 0: pid=2216512: Wed Oct 9 11:23:31 2024 00:43:11.409 read: IOPS=550, BW=2202KiB/s (2255kB/s)(2204KiB/1001msec) 00:43:11.409 slat (nsec): min=7432, max=46293, avg=26143.01, stdev=3499.29 00:43:11.409 clat (usec): min=266, max=1181, avg=912.78, stdev=129.56 00:43:11.409 lat (usec): min=292, max=1207, avg=938.92, stdev=129.68 00:43:11.409 clat percentiles (usec): 00:43:11.409 | 1.00th=[ 494], 5.00th=[ 652], 10.00th=[ 750], 20.00th=[ 816], 00:43:11.409 | 30.00th=[ 873], 40.00th=[ 914], 50.00th=[ 947], 60.00th=[ 971], 00:43:11.409 | 70.00th=[ 988], 80.00th=[ 1012], 90.00th=[ 1045], 95.00th=[ 1074], 00:43:11.409 | 99.00th=[ 1123], 99.50th=[ 1156], 99.90th=[ 1188], 99.95th=[ 1188], 00:43:11.409 | 99.99th=[ 1188] 00:43:11.409 write: IOPS=1022, BW=4092KiB/s 
(4190kB/s)(4096KiB/1001msec); 0 zone resets 00:43:11.409 slat (nsec): min=9361, max=87394, avg=29919.68, stdev=9232.40 00:43:11.409 clat (usec): min=158, max=1130, avg=430.15, stdev=129.62 00:43:11.409 lat (usec): min=192, max=1164, avg=460.07, stdev=131.36 00:43:11.409 clat percentiles (usec): 00:43:11.409 | 1.00th=[ 210], 5.00th=[ 258], 10.00th=[ 289], 20.00th=[ 314], 00:43:11.409 | 30.00th=[ 338], 40.00th=[ 379], 50.00th=[ 424], 60.00th=[ 449], 00:43:11.409 | 70.00th=[ 486], 80.00th=[ 545], 90.00th=[ 603], 95.00th=[ 660], 00:43:11.409 | 99.00th=[ 766], 99.50th=[ 791], 99.90th=[ 906], 99.95th=[ 1139], 00:43:11.409 | 99.99th=[ 1139] 00:43:11.409 bw ( KiB/s): min= 4096, max= 4096, per=37.57%, avg=4096.00, stdev= 0.00, samples=2 00:43:11.409 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:43:11.409 lat (usec) : 250=2.79%, 500=44.89%, 750=19.81%, 1000=24.06% 00:43:11.409 lat (msec) : 2=8.44% 00:43:11.409 cpu : usr=2.00%, sys=5.00%, ctx=1579, majf=0, minf=1 00:43:11.409 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:11.409 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:11.409 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:11.409 issued rwts: total=551,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:11.409 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:11.409 job1: (groupid=0, jobs=1): err= 0: pid=2216527: Wed Oct 9 11:23:31 2024 00:43:11.409 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:43:11.409 slat (nsec): min=8655, max=59788, avg=25690.74, stdev=3320.24 00:43:11.409 clat (usec): min=846, max=1708, avg=1169.63, stdev=97.11 00:43:11.409 lat (usec): min=872, max=1733, avg=1195.32, stdev=96.96 00:43:11.409 clat percentiles (usec): 00:43:11.409 | 1.00th=[ 898], 5.00th=[ 1020], 10.00th=[ 1074], 20.00th=[ 1106], 00:43:11.409 | 30.00th=[ 1139], 40.00th=[ 1156], 50.00th=[ 1172], 60.00th=[ 1188], 00:43:11.409 | 70.00th=[ 1205], 80.00th=[ 1221], 90.00th=[ 1270], 95.00th=[ 1303], 00:43:11.409 | 99.00th=[ 1500], 99.50th=[ 1631], 99.90th=[ 1713], 99.95th=[ 1713], 00:43:11.409 | 99.99th=[ 1713] 00:43:11.409 write: IOPS=591, BW=2366KiB/s (2422kB/s)(2368KiB/1001msec); 0 zone resets 00:43:11.409 slat (nsec): min=9332, max=52769, avg=27002.15, stdev=9487.96 00:43:11.409 clat (usec): min=171, max=2027, avg=613.63, stdev=146.81 00:43:11.409 lat (usec): min=182, max=2060, avg=640.63, stdev=150.12 00:43:11.409 clat percentiles (usec): 00:43:11.409 | 1.00th=[ 245], 5.00th=[ 379], 10.00th=[ 433], 20.00th=[ 494], 00:43:11.409 | 30.00th=[ 553], 40.00th=[ 594], 50.00th=[ 627], 60.00th=[ 652], 00:43:11.409 | 70.00th=[ 693], 80.00th=[ 725], 90.00th=[ 758], 95.00th=[ 799], 00:43:11.410 | 99.00th=[ 947], 99.50th=[ 996], 99.90th=[ 2024], 99.95th=[ 2024], 00:43:11.410 | 99.99th=[ 2024] 00:43:11.410 bw ( KiB/s): min= 4096, max= 4096, per=37.57%, avg=4096.00, stdev= 0.00, samples=1 00:43:11.410 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:43:11.410 lat (usec) : 250=0.54%, 500=11.14%, 750=35.24%, 1000=8.15% 00:43:11.410 lat (msec) : 2=44.84%, 4=0.09% 00:43:11.410 cpu : usr=1.60%, sys=3.10%, ctx=1104, majf=0, minf=1 00:43:11.410 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:11.410 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:11.410 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:11.410 issued rwts: total=512,592,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:11.410 
latency : target=0, window=0, percentile=100.00%, depth=1 00:43:11.410 job2: (groupid=0, jobs=1): err= 0: pid=2216538: Wed Oct 9 11:23:31 2024 00:43:11.410 read: IOPS=71, BW=287KiB/s (294kB/s)(288KiB/1002msec) 00:43:11.410 slat (nsec): min=9147, max=37938, avg=25378.12, stdev=2506.59 00:43:11.410 clat (usec): min=520, max=42032, avg=10006.22, stdev=17188.74 00:43:11.410 lat (usec): min=545, max=42058, avg=10031.60, stdev=17189.06 00:43:11.410 clat percentiles (usec): 00:43:11.410 | 1.00th=[ 523], 5.00th=[ 660], 10.00th=[ 709], 20.00th=[ 758], 00:43:11.410 | 30.00th=[ 848], 40.00th=[ 898], 50.00th=[ 963], 60.00th=[ 988], 00:43:11.410 | 70.00th=[ 1037], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:43:11.410 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:43:11.410 | 99.99th=[42206] 00:43:11.410 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:43:11.410 slat (nsec): min=9373, max=65173, avg=30265.97, stdev=7120.16 00:43:11.410 clat (usec): min=188, max=1660, avg=507.10, stdev=179.17 00:43:11.410 lat (usec): min=221, max=1679, avg=537.36, stdev=180.64 00:43:11.410 clat percentiles (usec): 00:43:11.410 | 1.00th=[ 219], 5.00th=[ 281], 10.00th=[ 322], 20.00th=[ 347], 00:43:11.410 | 30.00th=[ 371], 40.00th=[ 445], 50.00th=[ 486], 60.00th=[ 537], 00:43:11.410 | 70.00th=[ 586], 80.00th=[ 652], 90.00th=[ 734], 95.00th=[ 832], 00:43:11.410 | 99.00th=[ 963], 99.50th=[ 1012], 99.90th=[ 1663], 99.95th=[ 1663], 00:43:11.410 | 99.99th=[ 1663] 00:43:11.410 bw ( KiB/s): min= 4096, max= 4096, per=37.57%, avg=4096.00, stdev= 0.00, samples=1 00:43:11.410 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:43:11.410 lat (usec) : 250=3.08%, 500=44.69%, 750=35.10%, 1000=11.99% 00:43:11.410 lat (msec) : 2=2.40%, 50=2.74% 00:43:11.410 cpu : usr=0.90%, sys=1.70%, ctx=584, majf=0, minf=2 00:43:11.410 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:11.410 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:11.410 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:11.410 issued rwts: total=72,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:11.410 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:11.410 job3: (groupid=0, jobs=1): err= 0: pid=2216540: Wed Oct 9 11:23:31 2024 00:43:11.410 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:43:11.410 slat (nsec): min=7838, max=53370, avg=25300.88, stdev=3882.37 00:43:11.410 clat (usec): min=849, max=1637, avg=1193.64, stdev=106.47 00:43:11.410 lat (usec): min=875, max=1663, avg=1218.94, stdev=106.87 00:43:11.410 clat percentiles (usec): 00:43:11.410 | 1.00th=[ 889], 5.00th=[ 1020], 10.00th=[ 1074], 20.00th=[ 1123], 00:43:11.410 | 30.00th=[ 1156], 40.00th=[ 1172], 50.00th=[ 1205], 60.00th=[ 1221], 00:43:11.410 | 70.00th=[ 1237], 80.00th=[ 1270], 90.00th=[ 1303], 95.00th=[ 1336], 00:43:11.410 | 99.00th=[ 1516], 99.50th=[ 1598], 99.90th=[ 1631], 99.95th=[ 1631], 00:43:11.410 | 99.99th=[ 1631] 00:43:11.410 write: IOPS=602, BW=2410KiB/s (2467kB/s)(2412KiB/1001msec); 0 zone resets 00:43:11.410 slat (nsec): min=9342, max=51918, avg=27350.75, stdev=9725.50 00:43:11.410 clat (usec): min=133, max=1802, avg=581.69, stdev=154.53 00:43:11.410 lat (usec): min=142, max=1835, avg=609.04, stdev=159.11 00:43:11.410 clat percentiles (usec): 00:43:11.410 | 1.00th=[ 237], 5.00th=[ 343], 10.00th=[ 379], 20.00th=[ 445], 00:43:11.410 | 30.00th=[ 506], 40.00th=[ 562], 50.00th=[ 594], 60.00th=[ 627], 00:43:11.410 | 
70.00th=[ 668], 80.00th=[ 709], 90.00th=[ 758], 95.00th=[ 783], 00:43:11.410 | 99.00th=[ 865], 99.50th=[ 922], 99.90th=[ 1811], 99.95th=[ 1811], 00:43:11.410 | 99.99th=[ 1811] 00:43:11.410 bw ( KiB/s): min= 4096, max= 4096, per=37.57%, avg=4096.00, stdev= 0.00, samples=1 00:43:11.410 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:43:11.410 lat (usec) : 250=0.90%, 500=14.62%, 750=32.74%, 1000=7.26% 00:43:11.410 lat (msec) : 2=44.48% 00:43:11.410 cpu : usr=1.50%, sys=3.20%, ctx=1115, majf=0, minf=1 00:43:11.410 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:11.410 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:11.410 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:11.410 issued rwts: total=512,603,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:11.410 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:11.410 00:43:11.410 Run status group 0 (all jobs): 00:43:11.410 READ: bw=6575KiB/s (6733kB/s), 287KiB/s-2202KiB/s (294kB/s-2255kB/s), io=6588KiB (6746kB), run=1001-1002msec 00:43:11.410 WRITE: bw=10.6MiB/s (11.2MB/s), 2044KiB/s-4092KiB/s (2093kB/s-4190kB/s), io=10.7MiB (11.2MB), run=1001-1002msec 00:43:11.410 00:43:11.410 Disk stats (read/write): 00:43:11.410 nvme0n1: ios=538/744, merge=0/0, ticks=1440/316, in_queue=1756, util=96.59% 00:43:11.410 nvme0n2: ios=459/512, merge=0/0, ticks=577/296, in_queue=873, util=92.25% 00:43:11.410 nvme0n3: ios=68/512, merge=0/0, ticks=549/248, in_queue=797, util=88.38% 00:43:11.410 nvme0n4: ios=428/512, merge=0/0, ticks=493/282, in_queue=775, util=89.52% 00:43:11.410 11:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:43:11.410 [global] 00:43:11.410 thread=1 00:43:11.410 invalidate=1 00:43:11.410 rw=write 00:43:11.410 time_based=1 00:43:11.410 runtime=1 00:43:11.410 ioengine=libaio 00:43:11.410 direct=1 00:43:11.410 bs=4096 00:43:11.410 iodepth=128 00:43:11.410 norandommap=0 00:43:11.410 numjobs=1 00:43:11.410 00:43:11.410 verify_dump=1 00:43:11.410 verify_backlog=512 00:43:11.410 verify_state_save=0 00:43:11.410 do_verify=1 00:43:11.410 verify=crc32c-intel 00:43:11.410 [job0] 00:43:11.410 filename=/dev/nvme0n1 00:43:11.410 [job1] 00:43:11.410 filename=/dev/nvme0n2 00:43:11.410 [job2] 00:43:11.410 filename=/dev/nvme0n3 00:43:11.410 [job3] 00:43:11.410 filename=/dev/nvme0n4 00:43:11.410 Could not set queue depth (nvme0n1) 00:43:11.410 Could not set queue depth (nvme0n2) 00:43:11.410 Could not set queue depth (nvme0n3) 00:43:11.410 Could not set queue depth (nvme0n4) 00:43:11.670 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:43:11.670 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:43:11.670 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:43:11.670 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:43:11.670 fio-3.35 00:43:11.670 Starting 4 threads 00:43:13.056 00:43:13.056 job0: (groupid=0, jobs=1): err= 0: pid=2216954: Wed Oct 9 11:23:32 2024 00:43:13.056 read: IOPS=6463, BW=25.2MiB/s (26.5MB/s)(25.4MiB/1007msec) 00:43:13.056 slat (nsec): min=1024, max=9106.5k, avg=68942.50, stdev=535480.69 00:43:13.056 clat (usec): min=1775, max=31291, avg=9588.93, 
stdev=4294.73 00:43:13.056 lat (usec): min=1779, max=31296, avg=9657.88, stdev=4327.35 00:43:13.056 clat percentiles (usec): 00:43:13.056 | 1.00th=[ 2769], 5.00th=[ 4686], 10.00th=[ 5276], 20.00th=[ 6128], 00:43:13.056 | 30.00th=[ 6587], 40.00th=[ 7504], 50.00th=[ 8455], 60.00th=[ 9503], 00:43:13.056 | 70.00th=[11076], 80.00th=[13304], 90.00th=[15533], 95.00th=[17957], 00:43:13.056 | 99.00th=[23462], 99.50th=[23462], 99.90th=[23725], 99.95th=[24249], 00:43:13.056 | 99.99th=[31327] 00:43:13.056 write: IOPS=6609, BW=25.8MiB/s (27.1MB/s)(26.0MiB/1007msec); 0 zone resets 00:43:13.056 slat (nsec): min=1752, max=9869.0k, avg=75399.72, stdev=534588.81 00:43:13.056 clat (usec): min=1246, max=60766, avg=9783.23, stdev=9037.47 00:43:13.056 lat (usec): min=1260, max=60775, avg=9858.63, stdev=9097.82 00:43:13.056 clat percentiles (usec): 00:43:13.056 | 1.00th=[ 3032], 5.00th=[ 4113], 10.00th=[ 4621], 20.00th=[ 5473], 00:43:13.056 | 30.00th=[ 5997], 40.00th=[ 6456], 50.00th=[ 7111], 60.00th=[ 8356], 00:43:13.056 | 70.00th=[ 8979], 80.00th=[11994], 90.00th=[14877], 95.00th=[20317], 00:43:13.056 | 99.00th=[57410], 99.50th=[58459], 99.90th=[60556], 99.95th=[60556], 00:43:13.056 | 99.99th=[60556] 00:43:13.056 bw ( KiB/s): min=24576, max=28672, per=30.47%, avg=26624.00, stdev=2896.31, samples=2 00:43:13.056 iops : min= 6144, max= 7168, avg=6656.00, stdev=724.08, samples=2 00:43:13.056 lat (msec) : 2=0.21%, 4=2.87%, 10=67.09%, 20=25.86%, 50=2.88% 00:43:13.056 lat (msec) : 100=1.09% 00:43:13.056 cpu : usr=4.57%, sys=7.26%, ctx=372, majf=0, minf=1 00:43:13.056 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:43:13.056 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:13.056 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:13.056 issued rwts: total=6509,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:13.056 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:13.056 job1: (groupid=0, jobs=1): err= 0: pid=2216966: Wed Oct 9 11:23:32 2024 00:43:13.056 read: IOPS=3609, BW=14.1MiB/s (14.8MB/s)(14.2MiB/1010msec) 00:43:13.056 slat (nsec): min=881, max=8299.6k, avg=111584.28, stdev=665918.01 00:43:13.056 clat (usec): min=2452, max=61489, avg=11184.43, stdev=10041.90 00:43:13.056 lat (usec): min=2458, max=61497, avg=11296.02, stdev=10119.35 00:43:13.056 clat percentiles (usec): 00:43:13.056 | 1.00th=[ 3359], 5.00th=[ 5604], 10.00th=[ 5932], 20.00th=[ 6587], 00:43:13.056 | 30.00th=[ 6915], 40.00th=[ 7111], 50.00th=[ 7439], 60.00th=[ 8094], 00:43:13.056 | 70.00th=[ 8455], 80.00th=[10945], 90.00th=[22938], 95.00th=[41157], 00:43:13.056 | 99.00th=[47973], 99.50th=[54789], 99.90th=[61604], 99.95th=[61604], 00:43:13.056 | 99.99th=[61604] 00:43:13.056 write: IOPS=4055, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1010msec); 0 zone resets 00:43:13.056 slat (nsec): min=1602, max=9967.4k, avg=140598.94, stdev=654105.52 00:43:13.056 clat (usec): min=1147, max=109851, avg=21319.46, stdev=19078.74 00:43:13.056 lat (usec): min=1157, max=109859, avg=21460.06, stdev=19194.34 00:43:13.056 clat percentiles (msec): 00:43:13.056 | 1.00th=[ 3], 5.00th=[ 5], 10.00th=[ 7], 20.00th=[ 12], 00:43:13.056 | 30.00th=[ 13], 40.00th=[ 13], 50.00th=[ 15], 60.00th=[ 15], 00:43:13.056 | 70.00th=[ 21], 80.00th=[ 31], 90.00th=[ 48], 95.00th=[ 66], 00:43:13.056 | 99.00th=[ 99], 99.50th=[ 106], 99.90th=[ 110], 99.95th=[ 110], 00:43:13.056 | 99.99th=[ 110] 00:43:13.056 bw ( KiB/s): min=15024, max=17216, per=18.45%, avg=16120.00, stdev=1549.98, samples=2 00:43:13.056 iops : 
min= 3756, max= 4304, avg=4030.00, stdev=387.49, samples=2 00:43:13.056 lat (msec) : 2=0.31%, 4=2.47%, 10=42.93%, 20=33.09%, 50=15.76% 00:43:13.056 lat (msec) : 100=5.05%, 250=0.39% 00:43:13.056 cpu : usr=2.48%, sys=4.16%, ctx=548, majf=0, minf=2 00:43:13.056 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:43:13.056 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:13.056 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:13.056 issued rwts: total=3646,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:13.056 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:13.056 job2: (groupid=0, jobs=1): err= 0: pid=2216983: Wed Oct 9 11:23:32 2024 00:43:13.056 read: IOPS=5130, BW=20.0MiB/s (21.0MB/s)(21.0MiB/1047msec) 00:43:13.056 slat (nsec): min=965, max=8122.3k, avg=80332.71, stdev=565788.66 00:43:13.056 clat (usec): min=2824, max=51207, avg=10699.08, stdev=7148.49 00:43:13.056 lat (usec): min=2844, max=58697, avg=10779.42, stdev=7174.36 00:43:13.056 clat percentiles (usec): 00:43:13.056 | 1.00th=[ 5276], 5.00th=[ 6783], 10.00th=[ 6980], 20.00th=[ 7373], 00:43:13.056 | 30.00th=[ 7635], 40.00th=[ 8029], 50.00th=[ 8586], 60.00th=[ 8979], 00:43:13.056 | 70.00th=[ 9765], 80.00th=[12518], 90.00th=[16057], 95.00th=[18744], 00:43:13.056 | 99.00th=[51119], 99.50th=[51119], 99.90th=[51119], 99.95th=[51119], 00:43:13.056 | 99.99th=[51119] 00:43:13.056 write: IOPS=5379, BW=21.0MiB/s (22.0MB/s)(22.0MiB/1047msec); 0 zone resets 00:43:13.056 slat (nsec): min=1649, max=6713.1k, avg=96461.75, stdev=473540.61 00:43:13.056 clat (usec): min=1204, max=38087, avg=13364.51, stdev=7817.79 00:43:13.056 lat (usec): min=1220, max=38096, avg=13460.97, stdev=7858.37 00:43:13.056 clat percentiles (usec): 00:43:13.056 | 1.00th=[ 3556], 5.00th=[ 5407], 10.00th=[ 6456], 20.00th=[ 6980], 00:43:13.056 | 30.00th=[ 7504], 40.00th=[ 8979], 50.00th=[12125], 60.00th=[13173], 00:43:13.056 | 70.00th=[14615], 80.00th=[17433], 90.00th=[26608], 95.00th=[31327], 00:43:13.056 | 99.00th=[35914], 99.50th=[36963], 99.90th=[38011], 99.95th=[38011], 00:43:13.056 | 99.99th=[38011] 00:43:13.056 bw ( KiB/s): min=20480, max=24576, per=25.78%, avg=22528.00, stdev=2896.31, samples=2 00:43:13.056 iops : min= 5120, max= 6144, avg=5632.00, stdev=724.08, samples=2 00:43:13.056 lat (msec) : 2=0.02%, 4=1.02%, 10=55.48%, 20=32.30%, 50=10.07% 00:43:13.056 lat (msec) : 100=1.12% 00:43:13.056 cpu : usr=3.63%, sys=5.54%, ctx=564, majf=0, minf=1 00:43:13.056 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:43:13.056 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:13.056 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:13.056 issued rwts: total=5372,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:13.056 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:13.057 job3: (groupid=0, jobs=1): err= 0: pid=2216989: Wed Oct 9 11:23:32 2024 00:43:13.057 read: IOPS=6101, BW=23.8MiB/s (25.0MB/s)(24.0MiB/1007msec) 00:43:13.057 slat (nsec): min=981, max=16126k, avg=72463.28, stdev=601705.66 00:43:13.057 clat (usec): min=2742, max=25258, avg=10589.82, stdev=3705.01 00:43:13.057 lat (usec): min=2748, max=25265, avg=10662.28, stdev=3731.86 00:43:13.057 clat percentiles (usec): 00:43:13.057 | 1.00th=[ 3982], 5.00th=[ 5604], 10.00th=[ 6325], 20.00th=[ 7504], 00:43:13.057 | 30.00th=[ 8455], 40.00th=[ 9241], 50.00th=[10028], 60.00th=[10945], 00:43:13.057 | 70.00th=[12125], 80.00th=[13698], 
90.00th=[15664], 95.00th=[17957], 00:43:13.057 | 99.00th=[20579], 99.50th=[21103], 99.90th=[24773], 99.95th=[24773], 00:43:13.057 | 99.99th=[25297] 00:43:13.057 write: IOPS=6440, BW=25.2MiB/s (26.4MB/s)(25.3MiB/1007msec); 0 zone resets 00:43:13.057 slat (nsec): min=1789, max=25465k, avg=71569.58, stdev=647776.60 00:43:13.057 clat (usec): min=762, max=35795, avg=9382.34, stdev=3661.57 00:43:13.057 lat (usec): min=1063, max=35830, avg=9453.91, stdev=3692.95 00:43:13.057 clat percentiles (usec): 00:43:13.057 | 1.00th=[ 3425], 5.00th=[ 4817], 10.00th=[ 5473], 20.00th=[ 6390], 00:43:13.057 | 30.00th=[ 7177], 40.00th=[ 8029], 50.00th=[ 8848], 60.00th=[ 9503], 00:43:13.057 | 70.00th=[10683], 80.00th=[11994], 90.00th=[14484], 95.00th=[15664], 00:43:13.057 | 99.00th=[17433], 99.50th=[26870], 99.90th=[26870], 99.95th=[26870], 00:43:13.057 | 99.99th=[35914] 00:43:13.057 bw ( KiB/s): min=24576, max=26296, per=29.11%, avg=25436.00, stdev=1216.22, samples=2 00:43:13.057 iops : min= 6144, max= 6574, avg=6359.00, stdev=304.06, samples=2 00:43:13.057 lat (usec) : 1000=0.01% 00:43:13.057 lat (msec) : 2=0.25%, 4=1.05%, 10=55.53%, 20=41.91%, 50=1.24% 00:43:13.057 cpu : usr=4.87%, sys=6.96%, ctx=292, majf=0, minf=1 00:43:13.057 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:43:13.057 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:13.057 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:13.057 issued rwts: total=6144,6486,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:13.057 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:13.057 00:43:13.057 Run status group 0 (all jobs): 00:43:13.057 READ: bw=80.9MiB/s (84.8MB/s), 14.1MiB/s-25.2MiB/s (14.8MB/s-26.5MB/s), io=84.7MiB (88.8MB), run=1007-1047msec 00:43:13.057 WRITE: bw=85.3MiB/s (89.5MB/s), 15.8MiB/s-25.8MiB/s (16.6MB/s-27.1MB/s), io=89.3MiB (93.7MB), run=1007-1047msec 00:43:13.057 00:43:13.057 Disk stats (read/write): 00:43:13.057 nvme0n1: ios=4821/5120, merge=0/0, ticks=49453/53332, in_queue=102785, util=96.39% 00:43:13.057 nvme0n2: ios=3111/3199, merge=0/0, ticks=31512/71419, in_queue=102931, util=92.55% 00:43:13.057 nvme0n3: ios=4608/4815, merge=0/0, ticks=42060/59634, in_queue=101694, util=88.38% 00:43:13.057 nvme0n4: ios=5157/5485, merge=0/0, ticks=51590/47094, in_queue=98684, util=97.86% 00:43:13.057 11:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:43:13.057 [global] 00:43:13.057 thread=1 00:43:13.057 invalidate=1 00:43:13.057 rw=randwrite 00:43:13.057 time_based=1 00:43:13.057 runtime=1 00:43:13.057 ioengine=libaio 00:43:13.057 direct=1 00:43:13.057 bs=4096 00:43:13.057 iodepth=128 00:43:13.057 norandommap=0 00:43:13.057 numjobs=1 00:43:13.057 00:43:13.057 verify_dump=1 00:43:13.057 verify_backlog=512 00:43:13.057 verify_state_save=0 00:43:13.057 do_verify=1 00:43:13.057 verify=crc32c-intel 00:43:13.057 [job0] 00:43:13.057 filename=/dev/nvme0n1 00:43:13.057 [job1] 00:43:13.057 filename=/dev/nvme0n2 00:43:13.057 [job2] 00:43:13.057 filename=/dev/nvme0n3 00:43:13.057 [job3] 00:43:13.057 filename=/dev/nvme0n4 00:43:13.057 Could not set queue depth (nvme0n1) 00:43:13.057 Could not set queue depth (nvme0n2) 00:43:13.057 Could not set queue depth (nvme0n3) 00:43:13.057 Could not set queue depth (nvme0n4) 00:43:13.316 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=128 00:43:13.316 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:43:13.316 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:43:13.316 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:43:13.316 fio-3.35 00:43:13.316 Starting 4 threads 00:43:14.730 00:43:14.730 job0: (groupid=0, jobs=1): err= 0: pid=2217436: Wed Oct 9 11:23:34 2024 00:43:14.730 read: IOPS=7146, BW=27.9MiB/s (29.3MB/s)(28.0MiB/1003msec) 00:43:14.730 slat (nsec): min=884, max=7325.4k, avg=65266.04, stdev=358493.42 00:43:14.730 clat (usec): min=2272, max=34003, avg=8453.02, stdev=2162.30 00:43:14.730 lat (usec): min=2276, max=34007, avg=8518.29, stdev=2179.78 00:43:14.730 clat percentiles (usec): 00:43:14.730 | 1.00th=[ 5014], 5.00th=[ 6128], 10.00th=[ 6652], 20.00th=[ 7373], 00:43:14.731 | 30.00th=[ 7898], 40.00th=[ 8160], 50.00th=[ 8455], 60.00th=[ 8586], 00:43:14.731 | 70.00th=[ 8848], 80.00th=[ 9110], 90.00th=[ 9634], 95.00th=[10290], 00:43:14.731 | 99.00th=[21365], 99.50th=[24249], 99.90th=[24249], 99.95th=[24249], 00:43:14.731 | 99.99th=[33817] 00:43:14.731 write: IOPS=7541, BW=29.5MiB/s (30.9MB/s)(29.5MiB/1003msec); 0 zone resets 00:43:14.731 slat (nsec): min=1487, max=15206k, avg=67263.44, stdev=462411.42 00:43:14.731 clat (usec): min=664, max=35739, avg=8794.73, stdev=4552.04 00:43:14.731 lat (usec): min=688, max=35747, avg=8861.99, stdev=4594.84 00:43:14.731 clat percentiles (usec): 00:43:14.731 | 1.00th=[ 2638], 5.00th=[ 5080], 10.00th=[ 5735], 20.00th=[ 6783], 00:43:14.731 | 30.00th=[ 7111], 40.00th=[ 7570], 50.00th=[ 7832], 60.00th=[ 8029], 00:43:14.731 | 70.00th=[ 8225], 80.00th=[ 8586], 90.00th=[12911], 95.00th=[22414], 00:43:14.731 | 99.00th=[26084], 99.50th=[26870], 99.90th=[28181], 99.95th=[30016], 00:43:14.731 | 99.99th=[35914] 00:43:14.731 bw ( KiB/s): min=28672, max=30824, per=29.43%, avg=29748.00, stdev=1521.69, samples=2 00:43:14.731 iops : min= 7168, max= 7706, avg=7437.00, stdev=380.42, samples=2 00:43:14.731 lat (usec) : 750=0.01%, 1000=0.06% 00:43:14.731 lat (msec) : 2=0.03%, 4=2.13%, 10=88.00%, 20=6.14%, 50=3.63% 00:43:14.731 cpu : usr=2.99%, sys=5.09%, ctx=738, majf=0, minf=1 00:43:14.731 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:43:14.731 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:14.731 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:14.731 issued rwts: total=7168,7564,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:14.731 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:14.731 job1: (groupid=0, jobs=1): err= 0: pid=2217441: Wed Oct 9 11:23:34 2024 00:43:14.731 read: IOPS=5079, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1008msec) 00:43:14.731 slat (nsec): min=982, max=10091k, avg=82612.60, stdev=553820.33 00:43:14.731 clat (usec): min=4538, max=24615, avg=10199.21, stdev=3336.98 00:43:14.731 lat (usec): min=4545, max=24643, avg=10281.82, stdev=3366.54 00:43:14.731 clat percentiles (usec): 00:43:14.731 | 1.00th=[ 5276], 5.00th=[ 6194], 10.00th=[ 6783], 20.00th=[ 7570], 00:43:14.731 | 30.00th=[ 8160], 40.00th=[ 8717], 50.00th=[ 9241], 60.00th=[10028], 00:43:14.731 | 70.00th=[11076], 80.00th=[12780], 90.00th=[14615], 95.00th=[17433], 00:43:14.731 | 99.00th=[20841], 99.50th=[21365], 99.90th=[22152], 99.95th=[22152], 00:43:14.731 | 99.99th=[24511] 00:43:14.731 write: IOPS=5585, BW=21.8MiB/s 
(22.9MB/s)(22.0MiB/1008msec); 0 zone resets 00:43:14.731 slat (nsec): min=1657, max=14632k, avg=97245.54, stdev=518852.72 00:43:14.731 clat (usec): min=1792, max=29746, avg=13370.13, stdev=4909.88 00:43:14.731 lat (usec): min=1801, max=29755, avg=13467.38, stdev=4941.68 00:43:14.731 clat percentiles (usec): 00:43:14.731 | 1.00th=[ 4686], 5.00th=[ 6128], 10.00th=[ 6980], 20.00th=[ 8455], 00:43:14.731 | 30.00th=[10159], 40.00th=[11994], 50.00th=[13698], 60.00th=[14484], 00:43:14.731 | 70.00th=[15795], 80.00th=[17171], 90.00th=[20317], 95.00th=[21627], 00:43:14.731 | 99.00th=[25035], 99.50th=[25035], 99.90th=[26870], 99.95th=[26870], 00:43:14.731 | 99.99th=[29754] 00:43:14.731 bw ( KiB/s): min=20480, max=23544, per=21.78%, avg=22012.00, stdev=2166.58, samples=2 00:43:14.731 iops : min= 5120, max= 5886, avg=5503.00, stdev=541.64, samples=2 00:43:14.731 lat (msec) : 2=0.06%, 4=0.04%, 10=43.70%, 20=49.68%, 50=6.52% 00:43:14.731 cpu : usr=3.08%, sys=5.96%, ctx=570, majf=0, minf=1 00:43:14.731 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:43:14.731 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:14.731 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:14.731 issued rwts: total=5120,5630,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:14.731 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:14.731 job2: (groupid=0, jobs=1): err= 0: pid=2217448: Wed Oct 9 11:23:34 2024 00:43:14.731 read: IOPS=7634, BW=29.8MiB/s (31.3MB/s)(30.0MiB/1006msec) 00:43:14.731 slat (nsec): min=1035, max=6846.7k, avg=58663.63, stdev=426524.35 00:43:14.731 clat (usec): min=3825, max=14540, avg=7881.81, stdev=1963.69 00:43:14.731 lat (usec): min=3828, max=14546, avg=7940.47, stdev=1984.55 00:43:14.731 clat percentiles (usec): 00:43:14.731 | 1.00th=[ 4621], 5.00th=[ 5342], 10.00th=[ 5669], 20.00th=[ 6259], 00:43:14.731 | 30.00th=[ 6652], 40.00th=[ 7046], 50.00th=[ 7373], 60.00th=[ 7767], 00:43:14.731 | 70.00th=[ 8717], 80.00th=[ 9765], 90.00th=[10814], 95.00th=[11469], 00:43:14.731 | 99.00th=[13042], 99.50th=[13304], 99.90th=[13960], 99.95th=[14091], 00:43:14.731 | 99.99th=[14484] 00:43:14.731 write: IOPS=7750, BW=30.3MiB/s (31.7MB/s)(30.5MiB/1006msec); 0 zone resets 00:43:14.731 slat (nsec): min=1684, max=20435k, avg=66115.95, stdev=557355.75 00:43:14.731 clat (usec): min=1173, max=52610, avg=8466.98, stdev=5536.20 00:43:14.731 lat (usec): min=1201, max=52621, avg=8533.09, stdev=5583.32 00:43:14.731 clat percentiles (usec): 00:43:14.731 | 1.00th=[ 3523], 5.00th=[ 4817], 10.00th=[ 5342], 20.00th=[ 5932], 00:43:14.731 | 30.00th=[ 6456], 40.00th=[ 6980], 50.00th=[ 7308], 60.00th=[ 7570], 00:43:14.731 | 70.00th=[ 7832], 80.00th=[ 9503], 90.00th=[10421], 95.00th=[13960], 00:43:14.731 | 99.00th=[34866], 99.50th=[35914], 99.90th=[38536], 99.95th=[44303], 00:43:14.731 | 99.99th=[52691] 00:43:14.731 bw ( KiB/s): min=28672, max=32824, per=30.42%, avg=30748.00, stdev=2935.91, samples=2 00:43:14.731 iops : min= 7168, max= 8206, avg=7687.00, stdev=733.98, samples=2 00:43:14.731 lat (msec) : 2=0.04%, 4=0.88%, 10=82.54%, 20=14.09%, 50=2.44% 00:43:14.731 lat (msec) : 100=0.01% 00:43:14.731 cpu : usr=3.68%, sys=7.86%, ctx=597, majf=0, minf=1 00:43:14.731 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:43:14.731 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:14.731 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:14.731 issued rwts: total=7680,7797,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:43:14.731 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:14.731 job3: (groupid=0, jobs=1): err= 0: pid=2217455: Wed Oct 9 11:23:34 2024 00:43:14.731 read: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec) 00:43:14.731 slat (nsec): min=967, max=10737k, avg=136280.68, stdev=769611.61 00:43:14.731 clat (usec): min=2849, max=49890, avg=17571.64, stdev=8989.32 00:43:14.731 lat (usec): min=2852, max=49897, avg=17707.92, stdev=9062.40 00:43:14.731 clat percentiles (usec): 00:43:14.731 | 1.00th=[ 3294], 5.00th=[ 7439], 10.00th=[ 8979], 20.00th=[10159], 00:43:14.731 | 30.00th=[11731], 40.00th=[13042], 50.00th=[13829], 60.00th=[18220], 00:43:14.731 | 70.00th=[22676], 80.00th=[25560], 90.00th=[28967], 95.00th=[34866], 00:43:14.731 | 99.00th=[41157], 99.50th=[43779], 99.90th=[45351], 99.95th=[47973], 00:43:14.731 | 99.99th=[50070] 00:43:14.731 write: IOPS=4459, BW=17.4MiB/s (18.3MB/s)(17.5MiB/1005msec); 0 zone resets 00:43:14.731 slat (nsec): min=1602, max=13548k, avg=93674.22, stdev=574370.24 00:43:14.731 clat (usec): min=2862, max=28742, avg=12316.79, stdev=3729.13 00:43:14.731 lat (usec): min=2864, max=29511, avg=12410.47, stdev=3775.80 00:43:14.731 clat percentiles (usec): 00:43:14.731 | 1.00th=[ 5669], 5.00th=[ 8586], 10.00th=[ 8717], 20.00th=[ 9372], 00:43:14.731 | 30.00th=[ 9896], 40.00th=[10421], 50.00th=[10814], 60.00th=[13042], 00:43:14.731 | 70.00th=[14091], 80.00th=[14615], 90.00th=[17433], 95.00th=[20841], 00:43:14.731 | 99.00th=[23200], 99.50th=[24773], 99.90th=[25297], 99.95th=[26608], 00:43:14.731 | 99.99th=[28705] 00:43:14.731 bw ( KiB/s): min=14360, max=20480, per=17.23%, avg=17420.00, stdev=4327.49, samples=2 00:43:14.731 iops : min= 3590, max= 5120, avg=4355.00, stdev=1081.87, samples=2 00:43:14.731 lat (msec) : 4=1.07%, 10=24.68%, 20=54.72%, 50=19.53% 00:43:14.731 cpu : usr=3.19%, sys=3.78%, ctx=427, majf=0, minf=1 00:43:14.731 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:43:14.731 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:14.731 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:14.731 issued rwts: total=4096,4482,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:14.731 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:14.731 00:43:14.731 Run status group 0 (all jobs): 00:43:14.731 READ: bw=93.3MiB/s (97.8MB/s), 15.9MiB/s-29.8MiB/s (16.7MB/s-31.3MB/s), io=94.0MiB (98.6MB), run=1003-1008msec 00:43:14.731 WRITE: bw=98.7MiB/s (104MB/s), 17.4MiB/s-30.3MiB/s (18.3MB/s-31.7MB/s), io=99.5MiB (104MB), run=1003-1008msec 00:43:14.731 00:43:14.731 Disk stats (read/write): 00:43:14.731 nvme0n1: ios=6040/6144, merge=0/0, ticks=18427/23286, in_queue=41713, util=87.07% 00:43:14.731 nvme0n2: ios=4139/4471, merge=0/0, ticks=40682/61107, in_queue=101789, util=89.81% 00:43:14.731 nvme0n3: ios=6199/6456, merge=0/0, ticks=47209/47999, in_queue=95208, util=92.61% 00:43:14.731 nvme0n4: ios=3643/3983, merge=0/0, ticks=22759/19981, in_queue=42740, util=94.12% 00:43:14.731 11:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:43:14.731 11:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2217747 00:43:14.731 11:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:43:14.731 11:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:43:14.731 [global] 00:43:14.731 thread=1 00:43:14.731 invalidate=1 00:43:14.731 rw=read 00:43:14.731 time_based=1 00:43:14.731 runtime=10 00:43:14.732 ioengine=libaio 00:43:14.732 direct=1 00:43:14.732 bs=4096 00:43:14.732 iodepth=1 00:43:14.732 norandommap=1 00:43:14.732 numjobs=1 00:43:14.732 00:43:14.732 [job0] 00:43:14.732 filename=/dev/nvme0n1 00:43:14.732 [job1] 00:43:14.732 filename=/dev/nvme0n2 00:43:14.732 [job2] 00:43:14.732 filename=/dev/nvme0n3 00:43:14.732 [job3] 00:43:14.732 filename=/dev/nvme0n4 00:43:14.732 Could not set queue depth (nvme0n1) 00:43:14.732 Could not set queue depth (nvme0n2) 00:43:14.732 Could not set queue depth (nvme0n3) 00:43:14.732 Could not set queue depth (nvme0n4) 00:43:14.996 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:14.996 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:14.996 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:14.996 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:14.996 fio-3.35 00:43:14.996 Starting 4 threads 00:43:17.541 11:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:43:17.803 11:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:43:17.803 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=258048, buflen=4096 00:43:17.803 fio: pid=2217940, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:43:17.803 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=10498048, buflen=4096 00:43:17.803 fio: pid=2217938, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:43:17.803 11:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:43:17.803 11:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:43:18.063 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=11411456, buflen=4096 00:43:18.063 fio: pid=2217931, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:43:18.063 11:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:43:18.063 11:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:43:18.325 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=11997184, buflen=4096 00:43:18.325 fio: pid=2217934, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:43:18.325 11:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:43:18.325 11:23:38 
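The exchange above is the heart of the hotplug test. scripts/fio-wrapper expands its flags into the job file echoed at the top (-t read becomes rw=read, -i 4096 becomes bs=4096, -d 1 becomes iodepth=1, -r 10 becomes runtime=10), and while those four read jobs are still in flight, fio.sh deletes the backing bdevs over RPC; each deletion is answered by a fio io_u error with err=95 (Operation not supported) as the corresponding namespace disappears. A minimal sketch of the same delete-under-load pattern, assuming a running SPDK target and a job file like the one generated above (the job file name here is illustrative, not from the log):

    # Start I/O in the background, then hot-remove the backing bdevs.
    fio nvmf.fio &                   # hypothetical name for the generated job file
    fio_pid=$!
    sleep 3                          # as in fio.sh: let I/O reach steady state first
    ./scripts/rpc.py bdev_raid_delete concat0
    ./scripts/rpc.py bdev_raid_delete raid0
    for malloc_bdev in Malloc0 Malloc1 Malloc2; do   # the loop continues through Malloc6 below
        ./scripts/rpc.py bdev_malloc_delete "$malloc_bdev"
    done
    wait "$fio_pid" || echo 'nvmf hotplug test: fio failed as expected'

fio exiting non-zero is the pass condition here, which is why the script records fio_status and later checks it against 0.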
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:43:18.325 00:43:18.325 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2217931: Wed Oct 9 11:23:38 2024 00:43:18.325 read: IOPS=942, BW=3770KiB/s (3860kB/s)(10.9MiB/2956msec) 00:43:18.325 slat (usec): min=6, max=30061, avg=42.49, stdev=628.60 00:43:18.325 clat (usec): min=554, max=1364, avg=1004.49, stdev=85.09 00:43:18.325 lat (usec): min=581, max=31191, avg=1046.99, stdev=637.24 00:43:18.325 clat percentiles (usec): 00:43:18.325 | 1.00th=[ 766], 5.00th=[ 840], 10.00th=[ 889], 20.00th=[ 947], 00:43:18.325 | 30.00th=[ 979], 40.00th=[ 996], 50.00th=[ 1012], 60.00th=[ 1029], 00:43:18.325 | 70.00th=[ 1045], 80.00th=[ 1057], 90.00th=[ 1106], 95.00th=[ 1139], 00:43:18.325 | 99.00th=[ 1205], 99.50th=[ 1221], 99.90th=[ 1287], 99.95th=[ 1319], 00:43:18.325 | 99.99th=[ 1369] 00:43:18.325 bw ( KiB/s): min= 3800, max= 3912, per=36.22%, avg=3846.40, stdev=43.23, samples=5 00:43:18.325 iops : min= 950, max= 978, avg=961.60, stdev=10.81, samples=5 00:43:18.325 lat (usec) : 750=0.57%, 1000=42.20% 00:43:18.325 lat (msec) : 2=57.19% 00:43:18.325 cpu : usr=1.35%, sys=4.09%, ctx=2791, majf=0, minf=2 00:43:18.325 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:18.325 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:18.325 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:18.325 issued rwts: total=2787,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:18.325 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:18.325 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2217934: Wed Oct 9 11:23:38 2024 00:43:18.325 read: IOPS=932, BW=3729KiB/s (3818kB/s)(11.4MiB/3142msec) 00:43:18.325 slat (usec): min=6, max=14912, avg=46.94, stdev=464.58 00:43:18.325 clat (usec): min=474, max=1907, avg=1010.22, stdev=111.60 00:43:18.325 lat (usec): min=501, max=16020, avg=1057.17, stdev=479.87 00:43:18.325 clat percentiles (usec): 00:43:18.325 | 1.00th=[ 668], 5.00th=[ 799], 10.00th=[ 873], 20.00th=[ 938], 00:43:18.325 | 30.00th=[ 971], 40.00th=[ 996], 50.00th=[ 1020], 60.00th=[ 1045], 00:43:18.325 | 70.00th=[ 1074], 80.00th=[ 1090], 90.00th=[ 1139], 95.00th=[ 1172], 00:43:18.325 | 99.00th=[ 1237], 99.50th=[ 1254], 99.90th=[ 1385], 99.95th=[ 1434], 00:43:18.325 | 99.99th=[ 1909] 00:43:18.325 bw ( KiB/s): min= 3594, max= 3896, per=35.48%, avg=3768.33, stdev=108.23, samples=6 00:43:18.325 iops : min= 898, max= 974, avg=942.00, stdev=27.22, samples=6 00:43:18.325 lat (usec) : 500=0.03%, 750=2.83%, 1000=38.67% 00:43:18.325 lat (msec) : 2=58.43% 00:43:18.325 cpu : usr=1.56%, sys=3.82%, ctx=2936, majf=0, minf=2 00:43:18.325 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:18.325 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:18.325 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:18.325 issued rwts: total=2930,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:18.325 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:18.325 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2217938: Wed Oct 9 11:23:38 2024 00:43:18.325 read: IOPS=921, BW=3686KiB/s (3775kB/s)(10.0MiB/2781msec) 00:43:18.325 slat (usec): min=7, 
max=16419, avg=38.11, stdev=398.41 00:43:18.325 clat (usec): min=204, max=2193, avg=1030.16, stdev=98.78 00:43:18.325 lat (usec): min=231, max=17501, avg=1068.28, stdev=412.69 00:43:18.325 clat percentiles (usec): 00:43:18.325 | 1.00th=[ 750], 5.00th=[ 848], 10.00th=[ 906], 20.00th=[ 963], 00:43:18.325 | 30.00th=[ 996], 40.00th=[ 1020], 50.00th=[ 1037], 60.00th=[ 1057], 00:43:18.325 | 70.00th=[ 1074], 80.00th=[ 1106], 90.00th=[ 1139], 95.00th=[ 1172], 00:43:18.325 | 99.00th=[ 1221], 99.50th=[ 1254], 99.90th=[ 1319], 99.95th=[ 1336], 00:43:18.325 | 99.99th=[ 2180] 00:43:18.325 bw ( KiB/s): min= 3728, max= 3832, per=35.45%, avg=3764.80, stdev=43.30, samples=5 00:43:18.326 iops : min= 932, max= 958, avg=941.20, stdev=10.83, samples=5 00:43:18.326 lat (usec) : 250=0.04%, 750=0.98%, 1000=29.49% 00:43:18.326 lat (msec) : 2=69.42%, 4=0.04% 00:43:18.326 cpu : usr=1.15%, sys=2.73%, ctx=2568, majf=0, minf=1 00:43:18.326 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:18.326 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:18.326 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:18.326 issued rwts: total=2564,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:18.326 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:18.326 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2217940: Wed Oct 9 11:23:38 2024 00:43:18.326 read: IOPS=24, BW=96.1KiB/s (98.5kB/s)(252KiB/2621msec) 00:43:18.326 slat (nsec): min=14514, max=36966, avg=25701.34, stdev=2012.62 00:43:18.326 clat (usec): min=786, max=42126, avg=41227.63, stdev=5184.51 00:43:18.326 lat (usec): min=823, max=42152, avg=41253.34, stdev=5183.09 00:43:18.326 clat percentiles (usec): 00:43:18.326 | 1.00th=[ 783], 5.00th=[41157], 10.00th=[41681], 20.00th=[41681], 00:43:18.326 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:43:18.326 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:43:18.326 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:43:18.326 | 99.99th=[42206] 00:43:18.326 bw ( KiB/s): min= 96, max= 96, per=0.90%, avg=96.00, stdev= 0.00, samples=5 00:43:18.326 iops : min= 24, max= 24, avg=24.00, stdev= 0.00, samples=5 00:43:18.326 lat (usec) : 1000=1.56% 00:43:18.326 lat (msec) : 50=96.88% 00:43:18.326 cpu : usr=0.11%, sys=0.00%, ctx=64, majf=0, minf=1 00:43:18.326 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:18.326 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:18.326 complete : 0=1.5%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:18.326 issued rwts: total=64,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:18.326 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:18.326 00:43:18.326 Run status group 0 (all jobs): 00:43:18.326 READ: bw=10.4MiB/s (10.9MB/s), 96.1KiB/s-3770KiB/s (98.5kB/s-3860kB/s), io=32.6MiB (34.2MB), run=2621-3142msec 00:43:18.326 00:43:18.326 Disk stats (read/write): 00:43:18.326 nvme0n1: ios=2693/0, merge=0/0, ticks=2511/0, in_queue=2511, util=93.29% 00:43:18.326 nvme0n2: ios=2900/0, merge=0/0, ticks=2686/0, in_queue=2686, util=93.93% 00:43:18.326 nvme0n3: ios=2471/0, merge=0/0, ticks=2786/0, in_queue=2786, util=99.26% 00:43:18.326 nvme0n4: ios=62/0, merge=0/0, ticks=2557/0, in_queue=2557, util=96.42% 00:43:18.326 11:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in 
$malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:43:18.326 11:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:43:18.586 11:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:43:18.586 11:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:43:18.848 11:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:43:18.848 11:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:43:19.108 11:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:43:19.108 11:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:43:19.108 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:43:19.108 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 2217747 00:43:19.108 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:43:19.108 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:43:19.369 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:43:19.369 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:43:19.369 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:43:19.369 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:43:19.369 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:43:19.369 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:43:19.369 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:43:19.369 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:43:19.369 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:43:19.369 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:43:19.369 nvmf hotplug test: fio failed as expected 00:43:19.369 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:43:19.369 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target 
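waitforserial_disconnect, traced just above, is the script's check that the controller is really gone after nvme disconnect: it probes lsblk for the subsystem serial SPDKISFASTANDAWESOME until no block device advertises it any more. Only the two lsblk/grep probes and the i counter are visible in the trace, so the retry cap and delay in this sketch are assumptions:

    # Reconstructed from the xtrace output; retry limit and sleep are assumed.
    waitforserial_disconnect() {
        local serial=$1 i=0
        while lsblk -o NAME,SERIAL | grep -q -w "$serial" ||
              lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
            (( ++i > 15 )) && return 1   # assumed cap, not visible in the log
            sleep 1
        done
        return 0
    }
    waitforserial_disconnect SPDKISFASTANDAWESOME

In the run above the device was already gone on the first probe, so the function returned 0 immediately and the test moved on to deleting the subsystem.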
-- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:43:19.369 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:43:19.369 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:43:19.369 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:43:19.369 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:43:19.369 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:43:19.369 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:43:19.630 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:43:19.630 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:43:19.630 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:43:19.630 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:43:19.630 rmmod nvme_tcp 00:43:19.630 rmmod nvme_fabrics 00:43:19.630 rmmod nvme_keyring 00:43:19.630 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:43:19.630 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:43:19.630 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:43:19.630 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@515 -- # '[' -n 2214579 ']' 00:43:19.630 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # killprocess 2214579 00:43:19.630 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 2214579 ']' 00:43:19.630 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 2214579 00:43:19.630 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:43:19.630 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:43:19.630 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2214579 00:43:19.630 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:43:19.630 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:43:19.630 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2214579' 00:43:19.630 killing process with pid 2214579 00:43:19.630 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 2214579 00:43:19.631 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 2214579 00:43:19.631 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # '[' '' == iso 
']' 00:43:19.631 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:43:19.631 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:43:19.631 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:43:19.631 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-save 00:43:19.631 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:43:19.631 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-restore 00:43:19.631 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:43:19.631 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:43:19.631 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:19.631 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:19.631 11:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:22.176 11:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:43:22.176 00:43:22.176 real 0m27.920s 00:43:22.176 user 2m12.863s 00:43:22.176 sys 0m12.337s 00:43:22.176 11:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:43:22.176 11:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:43:22.176 ************************************ 00:43:22.176 END TEST nvmf_fio_target 00:43:22.176 ************************************ 00:43:22.176 11:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:43:22.176 11:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:43:22.176 11:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:43:22.176 11:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:43:22.176 ************************************ 00:43:22.176 START TEST nvmf_bdevio 00:43:22.176 ************************************ 00:43:22.176 11:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:43:22.176 * Looking for test storage... 
00:43:22.176 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:43:22.176 11:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:43:22.176 11:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:43:22.176 11:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:43:22.176 11:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:43:22.176 11:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:22.176 11:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:22.176 11:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:22.176 11:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:43:22.176 11:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:43:22.176 11:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:43:22.176 11:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:43:22.176 11:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:43:22.176 11:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:43:22.176 11:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:43:22.176 11:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:22.176 11:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:43:22.176 11:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:43:22.176 11:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:22.176 11:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:22.176 11:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:43:22.176 11:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:43:22.176 11:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:22.176 11:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:43:22.176 11:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:43:22.176 11:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:43:22.176 11:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:43:22.176 11:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:22.176 11:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:43:22.176 11:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:43:22.176 11:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:22.176 11:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:22.176 11:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:43:22.176 11:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:22.176 11:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:43:22.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:22.176 --rc genhtml_branch_coverage=1 00:43:22.176 --rc genhtml_function_coverage=1 00:43:22.176 --rc genhtml_legend=1 00:43:22.176 --rc geninfo_all_blocks=1 00:43:22.176 --rc geninfo_unexecuted_blocks=1 00:43:22.176 00:43:22.176 ' 00:43:22.177 11:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:43:22.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:22.177 --rc genhtml_branch_coverage=1 00:43:22.177 --rc genhtml_function_coverage=1 00:43:22.177 --rc genhtml_legend=1 00:43:22.177 --rc geninfo_all_blocks=1 00:43:22.177 --rc geninfo_unexecuted_blocks=1 00:43:22.177 00:43:22.177 ' 00:43:22.177 11:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:43:22.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:22.177 --rc genhtml_branch_coverage=1 00:43:22.177 --rc genhtml_function_coverage=1 00:43:22.177 --rc genhtml_legend=1 00:43:22.177 --rc geninfo_all_blocks=1 00:43:22.177 --rc geninfo_unexecuted_blocks=1 00:43:22.177 00:43:22.177 ' 00:43:22.177 11:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:43:22.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:22.177 --rc genhtml_branch_coverage=1 00:43:22.177 --rc genhtml_function_coverage=1 00:43:22.177 --rc genhtml_legend=1 00:43:22.177 --rc geninfo_all_blocks=1 00:43:22.177 --rc geninfo_unexecuted_blocks=1 00:43:22.177 00:43:22.177 ' 00:43:22.177 11:23:41 
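The cmp_versions trace above is how the suite decides that the installed lcov (1.15) predates version 2 before choosing coverage flags: both version strings are split on '.', '-' and ':', each field is reduced to a plain number, and the first unequal pair of fields settles the comparison. A condensed reconstruction of that logic from the xtrace output (the real helper lives in scripts/common.sh; the function name here is illustrative):

    # Returns 0 when version $1 sorts strictly before version $2.
    version_lt() {
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1    # versions are equal
    }
    version_lt 1.15 2 && echo 'lcov 1.15 sorts before 2'

For 1.15 against 2 the first fields already differ (1 < 2), so the loop decides on its first pass and returns 0, exactly as the trace shows.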
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:22.177 11:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:43:22.177 11:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:22.177 11:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:22.177 11:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:22.177 11:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:22.177 11:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:22.177 11:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:22.177 11:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:22.177 11:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:22.177 11:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:22.177 11:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:22.177 11:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:43:22.177 11:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:43:22.177 11:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:22.177 11:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:22.177 11:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:22.177 11:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:22.177 11:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:22.177 11:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:43:22.177 11:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:22.177 11:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:22.177 11:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:22.177 11:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:22.177 11:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:22.177 11:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:22.177 11:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:43:22.177 11:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:22.177 11:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:43:22.177 11:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:22.177 11:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:22.177 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:22.177 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:22.177 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:22.177 11:23:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:43:22.177 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:43:22.177 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:22.177 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:22.177 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:22.177 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:43:22.177 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:43:22.177 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:43:22.177 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:43:22.177 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:22.177 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # prepare_net_devs 00:43:22.177 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@436 -- # local -g is_hw=no 00:43:22.177 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # remove_spdk_ns 00:43:22.177 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:22.177 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:22.177 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:22.177 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:43:22.177 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:43:22.177 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:43:22.177 11:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:30.318 11:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:30.318 11:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:43:30.318 11:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:43:30.318 11:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:43:30.318 11:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:43:30.318 11:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:43:30.318 11:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:43:30.318 11:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:43:30.318 11:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:43:30.318 11:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:43:30.318 11:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:43:30.318 11:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:43:30.318 11:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:43:30.318 11:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:43:30.318 11:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:43:30.318 11:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:30.318 11:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:30.319 11:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:30.319 11:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:30.319 11:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:30.319 11:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:30.319 11:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:30.319 11:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:43:30.319 11:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:30.319 11:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:30.319 11:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:30.319 11:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:30.319 11:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:43:30.319 11:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:43:30.319 11:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:43:30.319 11:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:43:30.319 11:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:43:30.319 11:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:43:30.319 11:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:30.319 11:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:43:30.319 Found 0000:31:00.0 (0x8086 - 0x159b) 00:43:30.319 11:23:48 
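The bucketing traced above matches NIC PCI IDs against known families: for Intel (0x8086), 0x1592/0x159b are E810 parts and 0x37d2 is X722, while the Mellanox (0x15b3) IDs map to various ConnectX parts. A minimal standalone sketch of the same classification — a hypothetical helper, not part of nvmf/common.sh, with the individual Mellanox IDs listed above collapsed into one bucket for brevity:

  # classify a NIC by PCI vendor:device, as gather_supported_nvmf_pci_devs does
  classify_nic() {
      local bdf=$1 id
      id=$(lspci -n -s "$bdf" | awk '{print $3}')   # e.g. "8086:159b"
      case $id in
          8086:1592|8086:159b) echo e810 ;;
          8086:37d2)           echo x722 ;;
          15b3:*)              echo mlx ;;
          *)                   echo unknown ;;
      esac
  }
  classify_nic 0000:31:00.0    # on this rig: e810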
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:30.319 11:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:30.319 11:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:30.319 11:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:30.319 11:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:30.319 11:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:30.319 11:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:43:30.319 Found 0000:31:00.1 (0x8086 - 0x159b) 00:43:30.319 11:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:30.319 11:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:30.319 11:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:30.319 11:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:30.319 11:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:30.319 11:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:43:30.319 11:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:43:30.319 11:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:43:30.319 11:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:43:30.319 11:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:30.319 11:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:43:30.319 11:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:30.319 11:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:43:30.319 11:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:43:30.319 11:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:30.319 11:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:43:30.319 Found net devices under 0000:31:00.0: cvl_0_0 00:43:30.319 11:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:43:30.319 11:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:43:30.319 11:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:30.319 11:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@414 
-- # [[ tcp == tcp ]] 00:43:30.319 11:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:30.319 11:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:43:30.319 11:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:43:30.319 11:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:30.319 11:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:43:30.319 Found net devices under 0000:31:00.1: cvl_0_1 00:43:30.319 11:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:43:30.319 11:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:43:30.319 11:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # is_hw=yes 00:43:30.319 11:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:43:30.319 11:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:43:30.319 11:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:43:30.319 11:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:30.319 11:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:30.319 11:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:30.319 11:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:43:30.319 11:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:43:30.319 11:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:43:30.319 11:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:43:30.319 11:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:43:30.319 11:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:43:30.319 11:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:43:30.319 11:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:30.319 11:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:43:30.319 11:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:43:30.319 11:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:43:30.319 11:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:43:30.319 11:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:43:30.319 11:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:43:30.319 11:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:43:30.319 11:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:43:30.319 11:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:43:30.319 11:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:43:30.319 11:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:43:30.319 11:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:43:30.319 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:43:30.319 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.602 ms 00:43:30.319 00:43:30.319 --- 10.0.0.2 ping statistics --- 00:43:30.319 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:30.319 rtt min/avg/max/mdev = 0.602/0.602/0.602/0.000 ms 00:43:30.319 11:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:43:30.319 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:43:30.319 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:43:30.319 00:43:30.319 --- 10.0.0.1 ping statistics --- 00:43:30.319 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:30.319 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:43:30.319 11:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:30.319 11:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@448 -- # return 0 00:43:30.319 11:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:43:30.320 11:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:30.320 11:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:43:30.320 11:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:43:30.320 11:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:30.320 11:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:43:30.320 11:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:43:30.320 11:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:43:30.320 11:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:43:30.320 11:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:43:30.320 11:23:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:30.320 11:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # nvmfpid=2223017 00:43:30.320 11:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # waitforlisten 2223017 00:43:30.320 11:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:43:30.320 11:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 2223017 ']' 00:43:30.320 11:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:30.320 11:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:43:30.320 11:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:30.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:30.320 11:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:43:30.320 11:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:30.320 [2024-10-09 11:23:49.414764] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:43:30.320 [2024-10-09 11:23:49.416043] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:43:30.320 [2024-10-09 11:23:49.416094] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:30.320 [2024-10-09 11:23:49.557802] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:43:30.320 [2024-10-09 11:23:49.606588] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:43:30.320 [2024-10-09 11:23:49.634203] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:30.320 [2024-10-09 11:23:49.634249] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:30.320 [2024-10-09 11:23:49.634259] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:30.320 [2024-10-09 11:23:49.634266] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:30.320 [2024-10-09 11:23:49.634272] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
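Condensing the nvmf_tcp_init trace above: the target port cvl_0_0 is moved into a private network namespace and addressed 10.0.0.2, the initiator port cvl_0_1 stays in the root namespace as 10.0.0.1, TCP/4420 is opened through iptables, and nvmf_tgt is then launched inside the namespace in interrupt mode. A sketch of the same plumbing with this run's values (the iptables comment tag the trace adds is dropped for brevity):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78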
00:43:30.320 [2024-10-09 11:23:49.636221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:43:30.320 [2024-10-09 11:23:49.636382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:43:30.320 [2024-10-09 11:23:49.636533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:43:30.320 [2024-10-09 11:23:49.636533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:43:30.320 [2024-10-09 11:23:49.709935] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:43:30.320 [2024-10-09 11:23:49.710970] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:43:30.320 [2024-10-09 11:23:49.711173] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:43:30.320 [2024-10-09 11:23:49.711720] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:43:30.320 [2024-10-09 11:23:49.711769] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:43:30.320 11:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:43:30.320 11:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:43:30.320 11:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:43:30.320 11:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:43:30.320 11:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:30.320 11:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:30.320 11:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:43:30.320 11:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:30.320 11:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:30.320 [2024-10-09 11:23:50.273630] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:30.320 11:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:30.320 11:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:43:30.320 11:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:30.320 11:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:30.582 Malloc0 00:43:30.582 11:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:30.582 11:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:43:30.582 11:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 
00:43:30.582 11:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:30.582 11:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:30.582 11:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:43:30.582 11:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:30.582 11:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:30.582 11:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:30.582 11:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:30.582 11:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:30.582 11:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:30.582 [2024-10-09 11:23:50.369850] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:30.582 11:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:30.582 11:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:43:30.582 11:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:43:30.582 11:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@558 -- # config=() 00:43:30.582 11:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@558 -- # local subsystem config 00:43:30.582 11:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:43:30.582 11:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:43:30.582 { 00:43:30.582 "params": { 00:43:30.582 "name": "Nvme$subsystem", 00:43:30.582 "trtype": "$TEST_TRANSPORT", 00:43:30.582 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:30.582 "adrfam": "ipv4", 00:43:30.582 "trsvcid": "$NVMF_PORT", 00:43:30.582 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:30.582 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:30.582 "hdgst": ${hdgst:-false}, 00:43:30.582 "ddgst": ${ddgst:-false} 00:43:30.582 }, 00:43:30.582 "method": "bdev_nvme_attach_controller" 00:43:30.582 } 00:43:30.582 EOF 00:43:30.582 )") 00:43:30.582 11:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@580 -- # cat 00:43:30.583 11:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # jq . 
00:43:30.583 11:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@583 -- # IFS=, 00:43:30.583 11:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:43:30.583 "params": { 00:43:30.583 "name": "Nvme1", 00:43:30.583 "trtype": "tcp", 00:43:30.583 "traddr": "10.0.0.2", 00:43:30.583 "adrfam": "ipv4", 00:43:30.583 "trsvcid": "4420", 00:43:30.583 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:43:30.583 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:43:30.583 "hdgst": false, 00:43:30.583 "ddgst": false 00:43:30.583 }, 00:43:30.583 "method": "bdev_nvme_attach_controller" 00:43:30.583 }' 00:43:30.583 [2024-10-09 11:23:50.428654] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:43:30.583 [2024-10-09 11:23:50.428725] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2223325 ] 00:43:30.583 [2024-10-09 11:23:50.563162] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:43:30.843 [2024-10-09 11:23:50.595982] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:43:30.844 [2024-10-09 11:23:50.617672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:43:30.844 [2024-10-09 11:23:50.617849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:43:30.844 [2024-10-09 11:23:50.617853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:30.844 I/O targets: 00:43:30.844 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:43:30.844 00:43:30.844 00:43:30.844 CUnit - A unit testing framework for C - Version 2.1-3 00:43:30.844 http://cunit.sourceforge.net/ 00:43:30.844 00:43:30.844 00:43:30.844 Suite: bdevio tests on: Nvme1n1 00:43:30.844 Test: blockdev write read block ...passed 00:43:31.104 Test: blockdev write zeroes read block ...passed 00:43:31.104 Test: blockdev write zeroes read no split ...passed 00:43:31.104 Test: blockdev write zeroes read split ...passed 00:43:31.104 Test: blockdev write zeroes read split partial ...passed 00:43:31.104 Test: blockdev reset ...[2024-10-09 11:23:50.952387] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:31.104 [2024-10-09 11:23:50.952458] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1483590 (9): Bad file descriptor 00:43:31.104 [2024-10-09 11:23:51.005515] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:43:31.104 passed 00:43:31.104 Test: blockdev write read 8 blocks ...passed 00:43:31.104 Test: blockdev write read size > 128k ...passed 00:43:31.104 Test: blockdev write read invalid size ...passed 00:43:31.364 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:43:31.364 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:43:31.364 Test: blockdev write read max offset ...passed 00:43:31.364 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:43:31.364 Test: blockdev writev readv 8 blocks ...passed 00:43:31.364 Test: blockdev writev readv 30 x 1block ...passed 00:43:31.364 Test: blockdev writev readv block ...passed 00:43:31.364 Test: blockdev writev readv size > 128k ...passed 00:43:31.364 Test: blockdev writev readv size > 128k in two iovs ...passed 00:43:31.364 Test: blockdev comparev and writev ...[2024-10-09 11:23:51.349379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:43:31.364 [2024-10-09 11:23:51.349409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:31.364 [2024-10-09 11:23:51.349421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:43:31.364 [2024-10-09 11:23:51.349426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:43:31.364 [2024-10-09 11:23:51.349869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:43:31.364 [2024-10-09 11:23:51.349878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:43:31.364 [2024-10-09 11:23:51.349888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:43:31.364 [2024-10-09 11:23:51.349893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:43:31.364 [2024-10-09 11:23:51.350313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:43:31.364 [2024-10-09 11:23:51.350322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:43:31.364 [2024-10-09 11:23:51.350331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:43:31.364 [2024-10-09 11:23:51.350336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:43:31.364 [2024-10-09 11:23:51.350764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:43:31.364 [2024-10-09 11:23:51.350774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:43:31.364 [2024-10-09 11:23:51.350783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:43:31.364 [2024-10-09 11:23:51.350789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:43:31.625 passed 00:43:31.625 Test: blockdev nvme passthru rw ...passed 00:43:31.625 Test: blockdev nvme passthru vendor specific ...[2024-10-09 11:23:51.434929] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:43:31.625 [2024-10-09 11:23:51.434941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:43:31.625 [2024-10-09 11:23:51.435143] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:43:31.625 [2024-10-09 11:23:51.435151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:43:31.625 [2024-10-09 11:23:51.435388] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:43:31.625 [2024-10-09 11:23:51.435399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:43:31.625 [2024-10-09 11:23:51.435610] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:43:31.625 [2024-10-09 11:23:51.435618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:43:31.625 passed 00:43:31.625 Test: blockdev nvme admin passthru ...passed 00:43:31.625 Test: blockdev copy ...passed 00:43:31.625 00:43:31.625 Run Summary: Type Total Ran Passed Failed Inactive 00:43:31.625 suites 1 1 n/a 0 0 00:43:31.625 tests 23 23 23 0 0 00:43:31.625 asserts 152 152 152 0 n/a 00:43:31.625 00:43:31.625 Elapsed time = 1.419 seconds 00:43:31.625 11:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:43:31.625 11:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:31.625 11:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:31.625 11:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:31.625 11:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:43:31.625 11:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:43:31.625 11:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@514 -- # nvmfcleanup 00:43:31.625 11:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:43:31.625 11:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:43:31.625 11:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:43:31.625 11:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:43:31.625 11:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:43:31.625 rmmod nvme_tcp 00:43:31.625 rmmod nvme_fabrics 00:43:31.886 rmmod nvme_keyring 00:43:31.886 11:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
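For reference, rpc_cmd in these tests is the framework's wrapper around scripts/rpc.py talking to the target on /var/tmp/spdk.sock, so the subsystem the bdevio run above exercised was built with the equivalent of:

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

and torn down with nvmf_delete_subsystem as traced above; bdevio itself attached as an initiator via the generated bdev_nvme_attach_controller JSON shown earlier.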
00:43:31.886 11:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:43:31.886 11:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:43:31.886 11:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@515 -- # '[' -n 2223017 ']' 00:43:31.886 11:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # killprocess 2223017 00:43:31.886 11:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 2223017 ']' 00:43:31.886 11:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 2223017 00:43:31.886 11:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:43:31.886 11:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:43:31.886 11:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2223017 00:43:31.886 11:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:43:31.886 11:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:43:31.886 11:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2223017' 00:43:31.886 killing process with pid 2223017 00:43:31.886 11:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 2223017 00:43:31.886 11:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 2223017 00:43:32.147 11:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:43:32.147 11:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:43:32.147 11:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:43:32.147 11:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:43:32.147 11:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-save 00:43:32.147 11:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:43:32.147 11:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-restore 00:43:32.147 11:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:43:32.147 11:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:43:32.147 11:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:32.147 11:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:32.147 11:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:34.059 11:23:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:43:34.059 00:43:34.059 real 0m12.232s 00:43:34.059 user 
0m9.988s 00:43:34.059 sys 0m6.374s 00:43:34.059 11:23:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:43:34.059 11:23:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:34.059 ************************************ 00:43:34.059 END TEST nvmf_bdevio 00:43:34.060 ************************************ 00:43:34.060 11:23:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:43:34.060 00:43:34.060 real 4m57.360s 00:43:34.060 user 10m4.348s 00:43:34.060 sys 2m4.333s 00:43:34.060 11:23:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1126 -- # xtrace_disable 00:43:34.060 11:23:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:43:34.060 ************************************ 00:43:34.060 END TEST nvmf_target_core_interrupt_mode 00:43:34.060 ************************************ 00:43:34.320 11:23:54 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:43:34.321 11:23:54 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:43:34.321 11:23:54 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:43:34.321 11:23:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:43:34.321 ************************************ 00:43:34.321 START TEST nvmf_interrupt 00:43:34.321 ************************************ 00:43:34.321 11:23:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:43:34.321 * Looking for test storage... 
00:43:34.321 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:43:34.321 11:23:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:43:34.321 11:23:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lcov --version 00:43:34.321 11:23:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:43:34.321 11:23:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:43:34.321 11:23:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:34.321 11:23:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:34.321 11:23:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:34.321 11:23:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:43:34.321 11:23:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:43:34.321 11:23:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:43:34.321 11:23:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:43:34.321 11:23:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:43:34.321 11:23:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:43:34.321 11:23:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:43:34.321 11:23:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:34.321 11:23:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:43:34.321 11:23:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:43:34.321 11:23:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:34.321 11:23:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:34.582 11:23:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:43:34.582 11:23:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:43:34.582 11:23:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:34.582 11:23:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:43:34.582 11:23:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:43:34.582 11:23:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:43:34.582 11:23:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:43:34.582 11:23:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:34.582 11:23:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:43:34.582 11:23:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:43:34.582 11:23:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:34.582 11:23:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:34.582 11:23:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:43:34.582 11:23:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:34.582 11:23:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:43:34.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:34.582 --rc genhtml_branch_coverage=1 00:43:34.582 --rc genhtml_function_coverage=1 00:43:34.582 --rc genhtml_legend=1 00:43:34.582 --rc geninfo_all_blocks=1 00:43:34.582 --rc geninfo_unexecuted_blocks=1 00:43:34.582 00:43:34.582 ' 00:43:34.582 11:23:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:43:34.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:34.582 --rc genhtml_branch_coverage=1 00:43:34.582 --rc genhtml_function_coverage=1 00:43:34.582 --rc genhtml_legend=1 00:43:34.582 --rc geninfo_all_blocks=1 00:43:34.582 --rc geninfo_unexecuted_blocks=1 00:43:34.582 00:43:34.582 ' 00:43:34.582 11:23:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:43:34.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:34.582 --rc genhtml_branch_coverage=1 00:43:34.582 --rc genhtml_function_coverage=1 00:43:34.582 --rc genhtml_legend=1 00:43:34.582 --rc geninfo_all_blocks=1 00:43:34.582 --rc geninfo_unexecuted_blocks=1 00:43:34.582 00:43:34.582 ' 00:43:34.582 11:23:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:43:34.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:34.582 --rc genhtml_branch_coverage=1 00:43:34.582 --rc genhtml_function_coverage=1 00:43:34.582 --rc genhtml_legend=1 00:43:34.582 --rc geninfo_all_blocks=1 00:43:34.582 --rc geninfo_unexecuted_blocks=1 00:43:34.582 00:43:34.582 ' 00:43:34.582 11:23:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:34.582 11:23:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:43:34.582 11:23:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:34.582 11:23:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:34.582 11:23:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:34.582 11:23:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:43:34.582 11:23:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:34.582 11:23:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:34.582 11:23:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:34.582 11:23:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:34.582 11:23:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:34.582 11:23:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:34.582 11:23:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:43:34.582 11:23:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:43:34.582 11:23:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:34.582 11:23:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:34.582 11:23:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:34.582 11:23:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:34.582 11:23:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:34.582 11:23:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:43:34.582 11:23:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:34.583 11:23:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:34.583 11:23:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:34.583 11:23:54 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:34.583 11:23:54 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:34.583 11:23:54 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:34.583 11:23:54 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:43:34.583 11:23:54 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:34.583 11:23:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:43:34.583 11:23:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:34.583 11:23:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:34.583 11:23:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:34.583 11:23:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:34.583 11:23:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:34.583 11:23:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:43:34.583 11:23:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:43:34.583 11:23:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:34.583 11:23:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:34.583 11:23:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:34.583 11:23:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:43:34.583 11:23:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:43:34.583 11:23:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:43:34.583 11:23:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:43:34.583 11:23:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:34.583 11:23:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # prepare_net_devs 00:43:34.583 11:23:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@436 -- # local -g is_hw=no 00:43:34.583 11:23:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # remove_spdk_ns 00:43:34.583 11:23:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:34.583 11:23:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:43:34.583 11:23:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:34.583 11:23:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:43:34.583 11:23:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:43:34.583 11:23:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:43:34.583 11:23:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:43:42.719 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:42.719 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:43:42.719 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:43:42.719 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:43:42.719 11:24:01 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:43:42.719 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:43:42.719 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:43:42.719 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:43:42.719 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:43:42.719 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:43:42.719 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:43:42.719 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:43:42.719 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:43:42.719 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:43:42.719 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:43:42.719 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:42.719 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:42.719 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:42.719 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:42.719 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:42.719 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:42.719 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:42.719 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:43:42.719 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:42.719 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:42.719 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:42.719 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:42.719 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:43:42.719 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:43:42.719 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:43:42.719 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:43:42.719 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:43:42.719 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:43:42.719 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:42.719 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:43:42.719 Found 0000:31:00.0 (0x8086 - 0x159b) 00:43:42.719 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:42.719 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:42.719 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:42.719 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:42.720 11:24:01 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:42.720 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:42.720 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:43:42.720 Found 0000:31:00.1 (0x8086 - 0x159b) 00:43:42.720 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:42.720 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:42.720 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:42.720 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:42.720 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:42.720 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:43:42.720 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:43:42.720 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:43:42.720 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:43:42.720 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:42.720 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:43:42.720 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:42.720 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ up == up ]] 00:43:42.720 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:43:42.720 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:42.720 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:43:42.720 Found net devices under 0000:31:00.0: cvl_0_0 00:43:42.720 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:43:42.720 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:43:42.720 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:42.720 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:43:42.720 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:42.720 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ up == up ]] 00:43:42.720 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:43:42.720 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:42.720 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:43:42.720 Found net devices under 0000:31:00.1: cvl_0_1 00:43:42.720 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:43:42.720 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:43:42.720 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # is_hw=yes 00:43:42.720 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:43:42.720 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:43:42.720 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:43:42.720 11:24:01 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:42.720 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:42.720 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:42.720 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:43:42.720 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:43:42.720 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:43:42.720 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:43:42.720 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:43:42.720 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:43:42.720 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:43:42.720 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:42.720 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:43:42.720 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:43:42.720 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:43:42.720 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:43:42.720 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:43:42.720 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:43:42.720 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:43:42.720 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:43:42.720 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:43:42.720 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:43:42.720 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:43:42.720 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:43:42.720 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:43:42.720 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.475 ms 00:43:42.720 00:43:42.720 --- 10.0.0.2 ping statistics --- 00:43:42.720 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:42.720 rtt min/avg/max/mdev = 0.475/0.475/0.475/0.000 ms 00:43:42.720 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:43:42.720 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:43:42.720 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:43:42.720 00:43:42.720 --- 10.0.0.1 ping statistics --- 00:43:42.720 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:42.720 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:43:42.720 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:42.720 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@448 -- # return 0 00:43:42.720 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:43:42.720 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:42.720 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:43:42.720 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:43:42.720 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:42.720 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:43:42.720 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:43:42.720 11:24:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:43:42.720 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:43:42.720 11:24:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@724 -- # xtrace_disable 00:43:42.720 11:24:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:43:42.720 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # nvmfpid=2227873 00:43:42.720 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:43:42.720 11:24:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # waitforlisten 2227873 00:43:42.720 11:24:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@831 -- # '[' -z 2227873 ']' 00:43:42.720 11:24:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:42.720 11:24:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@836 -- # local max_retries=100 00:43:42.720 11:24:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:42.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:42.720 11:24:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # xtrace_disable 00:43:42.720 11:24:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:43:42.720 [2024-10-09 11:24:01.829568] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:43:42.720 [2024-10-09 11:24:01.831218] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:43:42.720 [2024-10-09 11:24:01.831282] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:42.720 [2024-10-09 11:24:01.974617] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
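The trace above is nvmftestinit building its two-port topology: the first ice port (cvl_0_0) moves into a private network namespace and becomes the target side at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, with an iptables rule opening the NVMe/TCP port and one ping in each direction to prove connectivity. A minimal sketch of the same plumbing, reconstructed from the commands in the trace (interface names and addresses as above, not the verbatim helper):

    # Reconstructed from the nvmf/common.sh trace; a sketch, not the real helper.
    NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0                  # start both ports clean
    ip -4 addr flush cvl_0_1
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"           # target port moves into the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1       # initiator side, root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP port
    ping -c 1 10.0.0.2                        # root ns -> target ns
    ip netns exec "$NS" ping -c 1 10.0.0.1    # target ns -> root ns
    # The target is then launched inside the namespace (interrupt mode, cores 0-1):
    # ip netns exec "$NS" build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3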
00:43:42.720 [2024-10-09 11:24:02.007284] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:43:42.720 [2024-10-09 11:24:02.029087] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:42.720 [2024-10-09 11:24:02.029125] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:42.720 [2024-10-09 11:24:02.029133] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:42.720 [2024-10-09 11:24:02.029140] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:42.720 [2024-10-09 11:24:02.029146] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:43:42.720 [2024-10-09 11:24:02.030536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:43:42.720 [2024-10-09 11:24:02.030743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:42.720 [2024-10-09 11:24:02.081771] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:43:42.720 [2024-10-09 11:24:02.082282] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:43:42.720 [2024-10-09 11:24:02.082640] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:43:42.720 11:24:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:43:42.720 11:24:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # return 0 00:43:42.720 11:24:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:43:42.720 11:24:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@730 -- # xtrace_disable 00:43:42.720 11:24:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:43:42.720 11:24:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:42.720 11:24:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:43:42.720 11:24:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:43:42.720 11:24:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:43:42.720 11:24:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:43:42.720 5000+0 records in 00:43:42.720 5000+0 records out 00:43:42.720 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0192523 s, 532 MB/s 00:43:42.720 11:24:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:43:42.720 11:24:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:42.720 11:24:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:43:42.981 AIO0 00:43:42.981 11:24:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:42.981 11:24:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:43:42.981 11:24:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:42.981 11:24:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:43:42.981 
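Once the target is up, setup_bdev_aio gives it a backing store: a 10 MB zero-filled file exposed as bdev AIO0 with a 2048-byte block size, which the RPCs that follow (create the TCP transport, the cnode1 subsystem, its namespace and the 4420 listener) wire into an exported NVMe namespace. The same step outside the harness, as a sketch using rpc.py in place of the test's rpc_cmd wrapper:

    # Sketch of the AIO backing-store step traced above. The scratch path is
    # an assumption; any writable file works.
    AIOFILE=/tmp/spdk_aiofile
    dd if=/dev/zero of="$AIOFILE" bs=2048 count=5000      # 10240000 bytes of zeroes
    scripts/rpc.py bdev_aio_create "$AIOFILE" AIO0 2048   # filename, bdev name, block size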
[2024-10-09 11:24:02.759286] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:42.981 11:24:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:42.981 11:24:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:43:42.981 11:24:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:42.981 11:24:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:43:42.981 11:24:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:42.981 11:24:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:43:42.981 11:24:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:42.981 11:24:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:43:42.981 11:24:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:42.981 11:24:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:42.981 11:24:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:42.981 11:24:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:43:42.981 [2024-10-09 11:24:02.799677] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:42.981 11:24:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:42.981 11:24:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:43:42.981 11:24:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 2227873 0 00:43:42.981 11:24:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2227873 0 idle 00:43:42.981 11:24:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2227873 00:43:42.981 11:24:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:43:42.981 11:24:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:43:42.981 11:24:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:43:42.981 11:24:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:43:42.981 11:24:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:43:42.981 11:24:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:43:42.981 11:24:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:43:42.981 11:24:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:43:42.981 11:24:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:43:42.981 11:24:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2227873 -w 256 00:43:42.981 11:24:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:43:42.981 11:24:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2227873 root 20 0 128.2g 42624 31104 S 0.0 0.0 0:00.21 reactor_0' 00:43:43.242 11:24:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2227873 root 20 0 128.2g 42624 31104 S 0.0 0.0 0:00.21 reactor_0 00:43:43.242 11:24:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:43:43.242 11:24:02 nvmf_tcp.nvmf_interrupt 
-- interrupt/common.sh@27 -- # awk '{print $9}' 00:43:43.242 11:24:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:43:43.242 11:24:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:43:43.242 11:24:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:43:43.242 11:24:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:43:43.242 11:24:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:43:43.242 11:24:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:43:43.242 11:24:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:43:43.242 11:24:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 2227873 1 00:43:43.242 11:24:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2227873 1 idle 00:43:43.242 11:24:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2227873 00:43:43.242 11:24:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:43:43.242 11:24:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:43:43.242 11:24:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:43:43.242 11:24:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:43:43.242 11:24:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:43:43.242 11:24:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:43:43.242 11:24:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:43:43.242 11:24:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:43:43.242 11:24:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:43:43.242 11:24:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2227873 -w 256 00:43:43.242 11:24:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:43:43.242 11:24:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2227896 root 20 0 128.2g 42624 31104 S 0.0 0.0 0:00.00 reactor_1' 00:43:43.242 11:24:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2227896 root 20 0 128.2g 42624 31104 S 0.0 0.0 0:00.00 reactor_1 00:43:43.242 11:24:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:43:43.242 11:24:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:43:43.242 11:24:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:43:43.242 11:24:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:43:43.242 11:24:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:43:43.242 11:24:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:43:43.242 11:24:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:43:43.242 11:24:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:43:43.242 11:24:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:43:43.242 11:24:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=2228135 00:43:43.242 11:24:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:43:43.242 11:24:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:43:43.242 11:24:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:43:43.242 11:24:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 2227873 0 00:43:43.242 11:24:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 2227873 0 busy 00:43:43.242 11:24:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2227873 00:43:43.242 11:24:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:43:43.242 11:24:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:43:43.242 11:24:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:43:43.242 11:24:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:43:43.242 11:24:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:43:43.242 11:24:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:43:43.243 11:24:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:43:43.243 11:24:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:43:43.243 11:24:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2227873 -w 256 00:43:43.243 11:24:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:43:43.503 11:24:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2227873 root 20 0 128.2g 42624 31104 S 13.3 0.0 0:00.23 reactor_0' 00:43:43.503 11:24:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2227873 root 20 0 128.2g 42624 31104 S 13.3 0.0 0:00.23 reactor_0 00:43:43.503 11:24:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:43:43.503 11:24:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:43:43.503 11:24:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=13.3 00:43:43.503 11:24:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=13 00:43:43.503 11:24:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:43:43.503 11:24:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:43:43.503 11:24:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:43:44.441 11:24:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:43:44.441 11:24:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:43:44.441 11:24:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2227873 -w 256 00:43:44.441 11:24:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:43:44.701 11:24:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2227873 root 20 0 128.2g 42624 31104 R 99.9 0.0 0:02.29 reactor_0' 00:43:44.701 11:24:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2227873 root 20 0 128.2g 42624 31104 R 99.9 0.0 0:02.29 reactor_0 00:43:44.701 11:24:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:43:44.701 11:24:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:43:44.701 11:24:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:43:44.702 11:24:04 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@28 -- # cpu_rate=99 00:43:44.702 11:24:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:43:44.702 11:24:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:43:44.702 11:24:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:43:44.702 11:24:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:43:44.702 11:24:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:43:44.702 11:24:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:43:44.702 11:24:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 2227873 1 00:43:44.702 11:24:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 2227873 1 busy 00:43:44.702 11:24:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2227873 00:43:44.702 11:24:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:43:44.702 11:24:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:43:44.702 11:24:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:43:44.702 11:24:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:43:44.702 11:24:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:43:44.702 11:24:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:43:44.702 11:24:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:43:44.702 11:24:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:43:44.702 11:24:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2227873 -w 256 00:43:44.702 11:24:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:43:44.962 11:24:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2227896 root 20 0 128.2g 42624 31104 R 93.3 0.0 0:01.22 reactor_1' 00:43:44.962 11:24:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2227896 root 20 0 128.2g 42624 31104 R 93.3 0.0 0:01.22 reactor_1 00:43:44.962 11:24:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:43:44.962 11:24:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:43:44.962 11:24:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=93.3 00:43:44.962 11:24:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=93 00:43:44.962 11:24:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:43:44.962 11:24:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:43:44.962 11:24:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:43:44.962 11:24:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:43:44.962 11:24:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 2228135 00:43:54.957 Initializing NVMe Controllers 00:43:54.957 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:43:54.957 Controller IO queue size 256, less than required. 00:43:54.957 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
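With spdk_nvme_perf hammering the subsystem (queue depth 256, 4 KiB random 30/70 R/W from cores 2-3), the test polls both reactors until each reports busy: reactor_0 is caught mid-ramp at 13.3% (below the temporarily lowered 30% threshold), retried a second later, and then read at 99.9%. The polling itself is just top arithmetic; a condensed sketch of the helper's logic, assuming procps top with %CPU in field 9 of -bH output, as in the trace:

    # Condensed reconstruction of the reactor_is_busy polling in
    # interrupt/common.sh; thresholds and field layout as seen above.
    reactor_cpu() {                     # $1 = target pid, $2 = reactor index
        top -bHn 1 -p "$1" -w 256 | grep "reactor_$2" |
            sed -e 's/^\s*//g' | awk '{print $9}' | cut -d. -f1
    }
    pid=2227873 busy_threshold=30       # values from the run above
    for _ in 1 2 3 4 5; do              # the real helper retries up to 10 times
        rate=$(reactor_cpu "$pid" 0); rate=${rate:-0}
        (( rate >= busy_threshold )) && { echo "reactor_0 busy (${rate}%)"; break; }
        sleep 1
    done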
00:43:54.957 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:43:54.957 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:43:54.957 Initialization complete. Launching workers. 00:43:54.957 ======================================================== 00:43:54.957 Latency(us) 00:43:54.957 Device Information : IOPS MiB/s Average min max 00:43:54.957 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 16389.59 64.02 15629.47 2466.10 17972.08 00:43:54.957 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 18581.69 72.58 13778.58 7391.38 31598.02 00:43:54.957 ======================================================== 00:43:54.957 Total : 34971.29 136.61 14646.02 2466.10 31598.02 00:43:54.957 00:43:54.957 11:24:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:43:54.957 11:24:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 2227873 0 00:43:54.957 11:24:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2227873 0 idle 00:43:54.957 11:24:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2227873 00:43:54.957 11:24:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:43:54.957 11:24:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:43:54.957 11:24:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:43:54.957 11:24:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:43:54.957 11:24:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:43:54.957 11:24:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:43:54.957 11:24:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:43:54.957 11:24:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:43:54.957 11:24:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:43:54.957 11:24:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2227873 -w 256 00:43:54.957 11:24:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:43:54.957 11:24:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2227873 root 20 0 128.2g 42624 31104 S 0.0 0.0 0:20.18 reactor_0' 00:43:54.957 11:24:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2227873 root 20 0 128.2g 42624 31104 S 0.0 0.0 0:20.18 reactor_0 00:43:54.957 11:24:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:43:54.957 11:24:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:43:54.957 11:24:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:43:54.957 11:24:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:43:54.957 11:24:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:43:54.957 11:24:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:43:54.957 11:24:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:43:54.957 11:24:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:43:54.957 11:24:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:43:54.957 11:24:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 2227873 1 00:43:54.957 11:24:13 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2227873 1 idle 00:43:54.957 11:24:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2227873 00:43:54.957 11:24:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:43:54.957 11:24:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:43:54.957 11:24:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:43:54.957 11:24:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:43:54.957 11:24:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:43:54.957 11:24:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:43:54.957 11:24:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:43:54.957 11:24:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:43:54.957 11:24:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:43:54.957 11:24:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2227873 -w 256 00:43:54.957 11:24:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:43:54.957 11:24:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2227896 root 20 0 128.2g 42624 31104 S 0.0 0.0 0:09.98 reactor_1' 00:43:54.957 11:24:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:43:54.957 11:24:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2227896 root 20 0 128.2g 42624 31104 S 0.0 0.0 0:09.98 reactor_1 00:43:54.957 11:24:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:43:54.957 11:24:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:43:54.957 11:24:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:43:54.957 11:24:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:43:54.957 11:24:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:43:54.957 11:24:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:43:54.957 11:24:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:43:54.957 11:24:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:43:54.957 11:24:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:43:54.957 11:24:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1198 -- # local i=0 00:43:54.957 11:24:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:43:54.957 11:24:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:43:54.957 11:24:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1205 -- # sleep 2 00:43:56.871 11:24:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:43:56.871 11:24:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:43:56.871 11:24:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:43:56.871 11:24:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:43:56.871 11:24:16 
nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:43:56.871 11:24:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # return 0 00:43:56.871 11:24:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:43:56.871 11:24:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 2227873 0 00:43:56.871 11:24:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2227873 0 idle 00:43:56.871 11:24:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2227873 00:43:56.871 11:24:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:43:56.871 11:24:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:43:56.871 11:24:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:43:56.871 11:24:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:43:56.871 11:24:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:43:56.871 11:24:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:43:56.871 11:24:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:43:56.871 11:24:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:43:56.871 11:24:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:43:56.871 11:24:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2227873 -w 256 00:43:56.871 11:24:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:43:56.871 11:24:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2227873 root 20 0 128.2g 77184 31104 S 0.0 0.1 0:20.40 reactor_0' 00:43:56.871 11:24:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2227873 root 20 0 128.2g 77184 31104 S 0.0 0.1 0:20.40 reactor_0 00:43:56.871 11:24:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:43:56.871 11:24:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:43:56.871 11:24:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:43:56.871 11:24:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:43:56.871 11:24:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:43:56.871 11:24:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:43:56.871 11:24:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:43:56.871 11:24:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:43:56.871 11:24:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:43:56.871 11:24:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 2227873 1 00:43:56.871 11:24:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2227873 1 idle 00:43:56.871 11:24:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2227873 00:43:56.871 11:24:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:43:56.871 11:24:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:43:56.871 11:24:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:43:56.871 11:24:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:43:56.871 11:24:16 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:43:56.871 11:24:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:43:56.871 11:24:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:43:56.871 11:24:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:43:56.871 11:24:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:43:56.871 11:24:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2227873 -w 256 00:43:56.871 11:24:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:43:57.132 11:24:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2227896 root 20 0 128.2g 77184 31104 S 0.0 0.1 0:10.10 reactor_1' 00:43:57.132 11:24:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2227896 root 20 0 128.2g 77184 31104 S 0.0 0.1 0:10.10 reactor_1 00:43:57.132 11:24:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:43:57.132 11:24:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:43:57.132 11:24:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:43:57.132 11:24:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:43:57.132 11:24:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:43:57.132 11:24:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:43:57.132 11:24:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:43:57.132 11:24:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:43:57.132 11:24:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:43:57.132 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:43:57.132 11:24:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:43:57.132 11:24:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1219 -- # local i=0 00:43:57.132 11:24:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:43:57.132 11:24:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:43:57.132 11:24:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:43:57.132 11:24:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:43:57.132 11:24:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # return 0 00:43:57.132 11:24:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:43:57.132 11:24:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:43:57.132 11:24:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@514 -- # nvmfcleanup 00:43:57.132 11:24:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:43:57.132 11:24:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:43:57.132 11:24:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:43:57.132 11:24:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:43:57.132 11:24:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:43:57.132 rmmod nvme_tcp 00:43:57.132 rmmod nvme_fabrics 00:43:57.132 rmmod nvme_keyring 00:43:57.132 11:24:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe 
-v -r nvme-fabrics 00:43:57.132 11:24:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:43:57.132 11:24:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:43:57.132 11:24:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@515 -- # '[' -n 2227873 ']' 00:43:57.132 11:24:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # killprocess 2227873 00:43:57.132 11:24:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@950 -- # '[' -z 2227873 ']' 00:43:57.132 11:24:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # kill -0 2227873 00:43:57.394 11:24:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@955 -- # uname 00:43:57.394 11:24:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:43:57.394 11:24:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2227873 00:43:57.394 11:24:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:43:57.394 11:24:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:43:57.394 11:24:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2227873' 00:43:57.394 killing process with pid 2227873 00:43:57.394 11:24:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@969 -- # kill 2227873 00:43:57.394 11:24:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@974 -- # wait 2227873 00:43:57.394 11:24:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:43:57.394 11:24:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:43:57.394 11:24:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:43:57.394 11:24:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:43:57.394 11:24:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@789 -- # iptables-save 00:43:57.394 11:24:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:43:57.394 11:24:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@789 -- # iptables-restore 00:43:57.394 11:24:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:43:57.394 11:24:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:43:57.394 11:24:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:57.394 11:24:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:43:57.394 11:24:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:59.941 11:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:43:59.941 00:43:59.941 real 0m25.283s 00:43:59.941 user 0m40.105s 00:43:59.941 sys 0m9.566s 00:43:59.941 11:24:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:43:59.941 11:24:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:43:59.941 ************************************ 00:43:59.941 END TEST nvmf_interrupt 00:43:59.941 ************************************ 00:43:59.941 00:43:59.941 real 38m17.716s 00:43:59.941 user 92m13.671s 00:43:59.941 sys 11m10.576s 00:43:59.941 11:24:19 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:43:59.941 11:24:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:43:59.941 ************************************ 00:43:59.941 END TEST nvmf_tcp 00:43:59.941 ************************************ 00:43:59.941 
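Teardown mirrors the setup: disconnect the kernel initiator, unload the test's NVMe modules, kill the target (whose comm is reactor_0, hence the killprocess check), strip only the SPDK_NVMF-tagged iptables rules, and dismantle the namespace. A sketch of that sequence under the same names; the trace's _remove_spdk_ns internals are not shown, so the netns removal below is an assumption:

    # Approximate teardown reconstructed from the trace; the ip netns delete
    # line is an assumption standing in for _remove_spdk_ns.
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    modprobe -v -r nvme-tcp nvme-fabrics
    kill "$nvmfpid"                                        # pid captured at start-up
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only tagged rules
    ip netns delete cvl_0_0_ns_spdk                        # port returns to root ns
    ip -4 addr flush cvl_0_1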
11:24:19 -- spdk/autotest.sh@281 -- # [[ 0 -eq 0 ]] 00:43:59.941 11:24:19 -- spdk/autotest.sh@282 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:43:59.941 11:24:19 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:43:59.941 11:24:19 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:43:59.941 11:24:19 -- common/autotest_common.sh@10 -- # set +x 00:43:59.941 ************************************ 00:43:59.941 START TEST spdkcli_nvmf_tcp 00:43:59.941 ************************************ 00:43:59.941 11:24:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:43:59.941 * Looking for test storage... 00:43:59.941 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:43:59.941 11:24:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:43:59.941 11:24:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:43:59.941 11:24:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:43:59.941 11:24:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:43:59.941 11:24:19 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:59.941 11:24:19 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:59.941 11:24:19 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:59.941 11:24:19 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:43:59.941 11:24:19 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:43:59.941 11:24:19 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:43:59.941 11:24:19 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:43:59.941 11:24:19 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:43:59.941 11:24:19 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:43:59.941 11:24:19 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:43:59.941 11:24:19 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:59.941 11:24:19 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:43:59.941 11:24:19 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:43:59.941 11:24:19 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:59.941 11:24:19 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:59.941 11:24:19 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:43:59.941 11:24:19 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:43:59.941 11:24:19 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:59.941 11:24:19 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:43:59.941 11:24:19 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:43:59.941 11:24:19 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:43:59.941 11:24:19 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:43:59.941 11:24:19 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:59.941 11:24:19 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:43:59.941 11:24:19 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:43:59.941 11:24:19 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:59.941 11:24:19 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:59.941 11:24:19 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:43:59.941 11:24:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:59.941 11:24:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:43:59.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:59.941 --rc genhtml_branch_coverage=1 00:43:59.941 --rc genhtml_function_coverage=1 00:43:59.941 --rc genhtml_legend=1 00:43:59.941 --rc geninfo_all_blocks=1 00:43:59.941 --rc geninfo_unexecuted_blocks=1 00:43:59.941 00:43:59.941 ' 00:43:59.941 11:24:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:43:59.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:59.941 --rc genhtml_branch_coverage=1 00:43:59.941 --rc genhtml_function_coverage=1 00:43:59.941 --rc genhtml_legend=1 00:43:59.941 --rc geninfo_all_blocks=1 00:43:59.941 --rc geninfo_unexecuted_blocks=1 00:43:59.941 00:43:59.941 ' 00:43:59.941 11:24:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:43:59.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:59.941 --rc genhtml_branch_coverage=1 00:43:59.941 --rc genhtml_function_coverage=1 00:43:59.941 --rc genhtml_legend=1 00:43:59.941 --rc geninfo_all_blocks=1 00:43:59.941 --rc geninfo_unexecuted_blocks=1 00:43:59.941 00:43:59.941 ' 00:43:59.941 11:24:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:43:59.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:59.941 --rc genhtml_branch_coverage=1 00:43:59.941 --rc genhtml_function_coverage=1 00:43:59.941 --rc genhtml_legend=1 00:43:59.941 --rc geninfo_all_blocks=1 00:43:59.941 --rc geninfo_unexecuted_blocks=1 00:43:59.941 00:43:59.942 ' 00:43:59.942 11:24:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:43:59.942 11:24:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:43:59.942 11:24:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:43:59.942 11:24:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:59.942 11:24:19 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:43:59.942 
11:24:19 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:59.942 11:24:19 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:59.942 11:24:19 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:59.942 11:24:19 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:59.942 11:24:19 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:59.942 11:24:19 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:59.942 11:24:19 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:59.942 11:24:19 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:59.942 11:24:19 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:59.942 11:24:19 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:59.942 11:24:19 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:43:59.942 11:24:19 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:43:59.942 11:24:19 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:59.942 11:24:19 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:59.942 11:24:19 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:59.942 11:24:19 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:59.942 11:24:19 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:59.942 11:24:19 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:43:59.942 11:24:19 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:59.942 11:24:19 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:59.942 11:24:19 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:59.942 11:24:19 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:59.942 11:24:19 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:59.942 11:24:19 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:59.942 11:24:19 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:43:59.942 11:24:19 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:59.942 11:24:19 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:43:59.942 11:24:19 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:59.942 11:24:19 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:59.942 11:24:19 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:59.942 11:24:19 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:59.942 11:24:19 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:59.942 11:24:19 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:43:59.942 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:43:59.942 11:24:19 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:59.942 11:24:19 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:59.942 11:24:19 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:59.942 11:24:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:43:59.942 11:24:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:43:59.942 11:24:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:43:59.942 11:24:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:43:59.942 11:24:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:43:59.942 11:24:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:43:59.942 11:24:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:43:59.942 11:24:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2231872 00:43:59.942 11:24:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 2231872 00:43:59.942 11:24:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # '[' -z 2231872 ']' 00:43:59.942 11:24:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:59.942 11:24:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:43:59.942 11:24:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:43:59.942 11:24:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:59.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:59.942 11:24:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:43:59.942 11:24:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:43:59.942 [2024-10-09 11:24:19.818140] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 
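The "common.sh: line 33: [: : integer expression expected" complaint above is benign but worth decoding: in the interrupt suite that same line evaluated '[' 1 -eq 1 ']', while in the spdkcli run the gating variable is empty, and POSIX test cannot compare an empty string as an integer, so the branch falls through and --interrupt-mode is simply not appended. A two-line bash illustration, using a hypothetical variable name to stand in for the unnamed one in the trace:

    # Why '[' warns here: -eq requires integers on both sides.
    unset GATE                        # GATE is a hypothetical stand-in name
    [ "$GATE" -eq 1 ]                 # -> "[: : integer expression expected", exit 2
    [ "${GATE:-0}" -eq 1 ]            # defensive form: empty/unset defaults to 0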
00:43:59.942 [2024-10-09 11:24:19.818199] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2231872 ] 00:44:00.203 [2024-10-09 11:24:19.949059] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:44:00.203 [2024-10-09 11:24:19.981804] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:44:00.203 [2024-10-09 11:24:20.001224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:44:00.203 [2024-10-09 11:24:20.001228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:00.776 11:24:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:44:00.776 11:24:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # return 0 00:44:00.776 11:24:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:44:00.776 11:24:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:44:00.776 11:24:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:00.776 11:24:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:44:00.776 11:24:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:44:00.776 11:24:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:44:00.776 11:24:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:44:00.776 11:24:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:00.776 11:24:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:44:00.776 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:44:00.776 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:44:00.776 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:44:00.776 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:44:00.776 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:44:00.776 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:44:00.776 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:44:00.776 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:44:00.776 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:44:00.776 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:44:00.776 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:44:00.776 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:44:00.776 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:44:00.776 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:44:00.776 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:44:00.776 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:44:00.776 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:44:00.776 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:44:00.776 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:44:00.776 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:44:00.776 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:44:00.776 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:44:00.776 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:44:00.776 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:44:00.776 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:44:00.776 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:44:00.776 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:44:00.776 ' 00:44:04.081 [2024-10-09 11:24:23.338535] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:05.024 [2024-10-09 11:24:24.703812] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:44:07.567 [2024-10-09 11:24:27.233289] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:44:09.480 [2024-10-09 11:24:29.438832] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:44:11.391 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:44:11.391 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:44:11.391 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:44:11.391 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:44:11.391 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:44:11.391 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:44:11.391 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:44:11.391 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:44:11.391 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:44:11.391 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:44:11.391 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:44:11.391 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 
00:44:11.391 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:44:11.391 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:44:11.391 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:44:11.391 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:44:11.391 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:44:11.391 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:44:11.391 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:44:11.391 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:44:11.391 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:44:11.391 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:44:11.391 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:44:11.391 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:44:11.391 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:44:11.391 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:44:11.391 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:44:11.391 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:44:11.391 11:24:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:44:11.391 11:24:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:44:11.391 11:24:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:11.391 11:24:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:44:11.391 11:24:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:44:11.391 11:24:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:11.391 11:24:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:44:11.391 11:24:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:44:11.650 11:24:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:44:11.910 11:24:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:44:11.910 11:24:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:44:11.910 11:24:31 
spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:44:11.910 11:24:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:11.910 11:24:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:44:11.910 11:24:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:44:11.910 11:24:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:11.910 11:24:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:44:11.910 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:44:11.910 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:44:11.910 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:44:11.910 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:44:11.910 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:44:11.910 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:44:11.910 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:44:11.910 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:44:11.910 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:44:11.910 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:44:11.910 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:44:11.910 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:44:11.910 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:44:11.910 ' 00:44:17.211 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:44:17.211 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:44:17.211 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:44:17.212 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:44:17.212 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:44:17.212 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:44:17.212 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:44:17.212 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:44:17.212 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:44:17.212 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:44:17.212 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:44:17.212 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:44:17.212 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:44:17.212 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:44:17.212 11:24:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:44:17.212 11:24:36 
spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:44:17.212 11:24:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:17.212 11:24:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 2231872 00:44:17.212 11:24:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 2231872 ']' 00:44:17.212 11:24:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 2231872 00:44:17.212 11:24:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # uname 00:44:17.212 11:24:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:44:17.212 11:24:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2231872 00:44:17.212 11:24:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:44:17.212 11:24:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:44:17.212 11:24:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2231872' 00:44:17.212 killing process with pid 2231872 00:44:17.212 11:24:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@969 -- # kill 2231872 00:44:17.212 11:24:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@974 -- # wait 2231872 00:44:17.212 11:24:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:44:17.212 11:24:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:44:17.212 11:24:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 2231872 ']' 00:44:17.212 11:24:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 2231872 00:44:17.212 11:24:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 2231872 ']' 00:44:17.212 11:24:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 2231872 00:44:17.212 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2231872) - No such process 00:44:17.212 11:24:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@977 -- # echo 'Process with pid 2231872 is not found' 00:44:17.212 Process with pid 2231872 is not found 00:44:17.212 11:24:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:44:17.212 11:24:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:44:17.212 11:24:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:44:17.212 00:44:17.212 real 0m17.434s 00:44:17.212 user 0m37.795s 00:44:17.212 sys 0m0.760s 00:44:17.212 11:24:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:44:17.212 11:24:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:17.212 ************************************ 00:44:17.212 END TEST spdkcli_nvmf_tcp 00:44:17.212 ************************************ 00:44:17.212 11:24:37 -- spdk/autotest.sh@283 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:44:17.212 11:24:37 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:44:17.212 11:24:37 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:44:17.212 11:24:37 -- common/autotest_common.sh@10 -- # set +x 00:44:17.212 ************************************ 00:44:17.212 START TEST nvmf_identify_passthru 00:44:17.212 ************************************ 00:44:17.212 11:24:37 nvmf_identify_passthru -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:44:17.212 * Looking for test storage... 00:44:17.212 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:44:17.212 11:24:37 nvmf_identify_passthru -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:44:17.212 11:24:37 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lcov --version 00:44:17.212 11:24:37 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:44:17.473 11:24:37 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:44:17.473 11:24:37 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:17.473 11:24:37 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:17.473 11:24:37 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:17.473 11:24:37 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:44:17.473 11:24:37 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:44:17.473 11:24:37 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:44:17.473 11:24:37 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:44:17.473 11:24:37 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:44:17.473 11:24:37 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:44:17.473 11:24:37 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:44:17.473 11:24:37 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:44:17.473 11:24:37 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:44:17.473 11:24:37 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:44:17.473 11:24:37 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:17.473 11:24:37 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:44:17.473 11:24:37 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:44:17.473 11:24:37 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:44:17.473 11:24:37 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:17.473 11:24:37 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:44:17.473 11:24:37 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:44:17.473 11:24:37 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:44:17.473 11:24:37 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:44:17.473 11:24:37 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:17.473 11:24:37 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:44:17.473 11:24:37 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:44:17.473 11:24:37 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:17.474 11:24:37 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:17.474 11:24:37 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:44:17.474 11:24:37 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:17.474 11:24:37 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:44:17.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:17.474 --rc genhtml_branch_coverage=1 00:44:17.474 --rc genhtml_function_coverage=1 00:44:17.474 --rc genhtml_legend=1 00:44:17.474 --rc geninfo_all_blocks=1 00:44:17.474 --rc geninfo_unexecuted_blocks=1 00:44:17.474 00:44:17.474 ' 00:44:17.474 11:24:37 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:44:17.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:17.474 --rc genhtml_branch_coverage=1 00:44:17.474 --rc genhtml_function_coverage=1 00:44:17.474 --rc genhtml_legend=1 00:44:17.474 --rc geninfo_all_blocks=1 00:44:17.474 --rc geninfo_unexecuted_blocks=1 00:44:17.474 00:44:17.474 ' 00:44:17.474 11:24:37 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:44:17.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:17.474 --rc genhtml_branch_coverage=1 00:44:17.474 --rc genhtml_function_coverage=1 00:44:17.474 --rc genhtml_legend=1 00:44:17.474 --rc geninfo_all_blocks=1 00:44:17.474 --rc geninfo_unexecuted_blocks=1 00:44:17.474 00:44:17.474 ' 00:44:17.474 11:24:37 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:44:17.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:17.474 --rc genhtml_branch_coverage=1 00:44:17.474 --rc genhtml_function_coverage=1 00:44:17.474 --rc genhtml_legend=1 00:44:17.474 --rc geninfo_all_blocks=1 00:44:17.474 --rc geninfo_unexecuted_blocks=1 00:44:17.474 00:44:17.474 ' 00:44:17.474 11:24:37 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:44:17.474 11:24:37 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:44:17.474 11:24:37 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:17.474 11:24:37 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:17.474 11:24:37 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:17.474 11:24:37 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:44:17.474 11:24:37 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:44:17.474 11:24:37 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:17.474 11:24:37 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:17.474 11:24:37 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:17.474 11:24:37 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:17.474 11:24:37 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:44:17.474 11:24:37 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:44:17.474 11:24:37 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:44:17.474 11:24:37 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:17.474 11:24:37 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:17.474 11:24:37 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:44:17.474 11:24:37 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:17.474 11:24:37 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:17.474 11:24:37 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:44:17.474 11:24:37 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:17.474 11:24:37 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:17.474 11:24:37 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:17.474 11:24:37 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:17.474 11:24:37 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:17.474 11:24:37 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:17.474 11:24:37 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:44:17.474 11:24:37 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:17.474 11:24:37 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:44:17.474 11:24:37 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:44:17.474 11:24:37 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:44:17.474 11:24:37 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:17.474 11:24:37 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:17.474 11:24:37 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:17.474 11:24:37 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:44:17.474 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:44:17.474 11:24:37 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:44:17.474 11:24:37 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:44:17.474 11:24:37 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:44:17.474 11:24:37 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:17.474 11:24:37 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:44:17.474 11:24:37 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:17.474 11:24:37 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:17.474 11:24:37 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:17.474 11:24:37 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:17.474 11:24:37 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:17.474 11:24:37 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:17.474 11:24:37 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:44:17.474 11:24:37 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:17.474 11:24:37 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:44:17.474 11:24:37 nvmf_identify_passthru -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:44:17.474 11:24:37 nvmf_identify_passthru -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:44:17.474 11:24:37 nvmf_identify_passthru -- nvmf/common.sh@474 -- # prepare_net_devs 00:44:17.474 11:24:37 nvmf_identify_passthru -- nvmf/common.sh@436 -- # local -g is_hw=no 00:44:17.474 11:24:37 nvmf_identify_passthru -- nvmf/common.sh@438 -- # remove_spdk_ns 00:44:17.474 11:24:37 nvmf_identify_passthru -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:17.474 11:24:37 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:44:17.474 11:24:37 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:17.474 11:24:37 nvmf_identify_passthru -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:44:17.474 11:24:37 nvmf_identify_passthru -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:44:17.474 11:24:37 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:44:17.474 11:24:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:25.614 11:24:44 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:44:25.614 11:24:44 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:44:25.614 11:24:44 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:44:25.614 11:24:44 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:44:25.614 11:24:44 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:44:25.614 11:24:44 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:44:25.614 11:24:44 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:44:25.614 11:24:44 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:44:25.614 11:24:44 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:44:25.614 11:24:44 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:44:25.614 11:24:44 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:44:25.614 11:24:44 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:44:25.614 11:24:44 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:44:25.614 11:24:44 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:44:25.614 11:24:44 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:44:25.614 11:24:44 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:44:25.614 11:24:44 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:44:25.614 11:24:44 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:44:25.614 11:24:44 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:44:25.614 11:24:44 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:44:25.614 11:24:44 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:44:25.614 11:24:44 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:44:25.614 11:24:44 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:44:25.614 11:24:44 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:44:25.614 11:24:44 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:44:25.614 11:24:44 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:44:25.614 11:24:44 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:44:25.614 11:24:44 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:44:25.614 11:24:44 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:44:25.614 11:24:44 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:44:25.614 11:24:44 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:44:25.614 11:24:44 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:44:25.614 11:24:44 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:44:25.614 11:24:44 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:44:25.614 11:24:44 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:44:25.614 Found 0000:31:00.0 (0x8086 - 0x159b) 00:44:25.614 11:24:44 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:44:25.614 11:24:44 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:44:25.614 11:24:44 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:25.614 11:24:44 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:25.614 11:24:44 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:44:25.614 11:24:44 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:44:25.614 11:24:44 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:44:25.614 Found 0000:31:00.1 (0x8086 - 0x159b) 00:44:25.614 11:24:44 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:44:25.614 11:24:44 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:44:25.614 11:24:44 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:25.614 11:24:44 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:25.614 11:24:44 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:44:25.614 11:24:44 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:44:25.614 11:24:44 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:44:25.614 11:24:44 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:44:25.614 11:24:44 nvmf_identify_passthru -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:44:25.614 11:24:44 nvmf_identify_passthru -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:25.614 11:24:44 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:44:25.614 11:24:44 nvmf_identify_passthru -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:25.614 11:24:44 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ up == up ]] 00:44:25.614 11:24:44 nvmf_identify_passthru -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:44:25.614 11:24:44 nvmf_identify_passthru -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:25.614 11:24:44 nvmf_identify_passthru -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:44:25.614 Found net devices under 0000:31:00.0: cvl_0_0 00:44:25.614 11:24:44 nvmf_identify_passthru -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:44:25.614 11:24:44 nvmf_identify_passthru -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:44:25.614 11:24:44 nvmf_identify_passthru -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:25.614 11:24:44 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:44:25.614 11:24:44 nvmf_identify_passthru -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:25.614 11:24:44 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ up == up ]] 00:44:25.614 11:24:44 nvmf_identify_passthru -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:44:25.614 11:24:44 nvmf_identify_passthru -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:25.614 11:24:44 nvmf_identify_passthru -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:44:25.614 Found net devices under 0000:31:00.1: cvl_0_1 00:44:25.614 11:24:44 nvmf_identify_passthru -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:44:25.614 11:24:44 nvmf_identify_passthru -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:44:25.614 11:24:44 nvmf_identify_passthru -- nvmf/common.sh@440 -- # is_hw=yes 00:44:25.614 11:24:44 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:44:25.614 11:24:44 nvmf_identify_passthru -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:44:25.614 11:24:44 nvmf_identify_passthru -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:44:25.614 11:24:44 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:44:25.614 11:24:44 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:44:25.614 11:24:44 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:44:25.614 11:24:44 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:44:25.614 11:24:44 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:44:25.614 11:24:44 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:44:25.614 11:24:44 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:44:25.614 11:24:44 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:44:25.614 11:24:44 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:44:25.614 11:24:44 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:44:25.614 11:24:44 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:44:25.614 11:24:44 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:44:25.614 11:24:44 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:44:25.614 11:24:44 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:44:25.614 11:24:44 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:44:25.614 11:24:44 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:44:25.614 11:24:44 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:44:25.614 11:24:44 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:44:25.614 11:24:44 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:44:25.614 11:24:44 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:44:25.614 11:24:44 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:44:25.614 11:24:44 nvmf_identify_passthru -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:44:25.614 11:24:44 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:44:25.614 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:44:25.614 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.501 ms 00:44:25.614 00:44:25.614 --- 10.0.0.2 ping statistics --- 00:44:25.614 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:25.614 rtt min/avg/max/mdev = 0.501/0.501/0.501/0.000 ms 00:44:25.614 11:24:44 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:44:25.614 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:44:25.614 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.269 ms 00:44:25.614 00:44:25.614 --- 10.0.0.1 ping statistics --- 00:44:25.614 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:25.614 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:44:25.614 11:24:44 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:44:25.614 11:24:44 nvmf_identify_passthru -- nvmf/common.sh@448 -- # return 0 00:44:25.614 11:24:44 nvmf_identify_passthru -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:44:25.614 11:24:44 nvmf_identify_passthru -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:44:25.614 11:24:44 nvmf_identify_passthru -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:44:25.614 11:24:44 nvmf_identify_passthru -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:44:25.614 11:24:44 nvmf_identify_passthru -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:44:25.614 11:24:44 nvmf_identify_passthru -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:44:25.614 11:24:44 nvmf_identify_passthru -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:44:25.615 11:24:44 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:44:25.615 11:24:44 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:44:25.615 11:24:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:25.615 11:24:44 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:44:25.615 11:24:44 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # bdfs=() 00:44:25.615 11:24:44 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # local bdfs 00:44:25.615 11:24:44 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:44:25.615 11:24:44 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:44:25.615 11:24:44 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # bdfs=() 00:44:25.615 11:24:44 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # local bdfs 00:44:25.615 11:24:44 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:44:25.615 11:24:44 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:44:25.615 11:24:44 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:44:25.615 11:24:44 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:44:25.615 11:24:44 nvmf_identify_passthru -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:44:25.615 11:24:44 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # echo 0000:65:00.0 00:44:25.615 11:24:44 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:44:25.615 11:24:44 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:44:25.615 11:24:44 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:44:25.615 11:24:44 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:44:25.615 11:24:44 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:44:25.615 11:24:45 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=S64GNE0R605494 00:44:25.615 11:24:45 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:44:25.615 11:24:45 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:44:25.615 11:24:45 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:44:25.875 11:24:45 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:44:25.875 11:24:45 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:44:25.875 11:24:45 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:44:25.875 11:24:45 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:25.875 11:24:45 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:44:25.875 11:24:45 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:44:25.875 11:24:45 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:25.875 11:24:45 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=2239029 00:44:25.875 11:24:45 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:44:25.875 11:24:45 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 2239029 00:44:25.875 11:24:45 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # '[' -z 2239029 ']' 00:44:25.875 11:24:45 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:25.875 11:24:45 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # local max_retries=100 00:44:25.875 11:24:45 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:25.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:25.875 11:24:45 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # xtrace_disable 00:44:25.875 11:24:45 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:25.875 11:24:45 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:44:26.187 [2024-10-09 11:24:45.932411] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:44:26.187 [2024-10-09 11:24:45.932475] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:44:26.187 [2024-10-09 11:24:46.070136] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:44:26.187 [2024-10-09 11:24:46.100944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:44:26.187 [2024-10-09 11:24:46.119391] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:44:26.187 [2024-10-09 11:24:46.119421] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:44:26.188 [2024-10-09 11:24:46.119429] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:44:26.188 [2024-10-09 11:24:46.119436] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:44:26.188 [2024-10-09 11:24:46.119442] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:44:26.188 [2024-10-09 11:24:46.121079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:44:26.188 [2024-10-09 11:24:46.121197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:44:26.188 [2024-10-09 11:24:46.121353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:26.188 [2024-10-09 11:24:46.121354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:44:26.812 11:24:46 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:44:26.812 11:24:46 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # return 0 00:44:26.812 11:24:46 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:44:26.812 11:24:46 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:26.812 11:24:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:26.812 INFO: Log level set to 20 00:44:26.812 INFO: Requests: 00:44:26.812 { 00:44:26.812 "jsonrpc": "2.0", 00:44:26.812 "method": "nvmf_set_config", 00:44:26.812 "id": 1, 00:44:26.812 "params": { 00:44:26.812 "admin_cmd_passthru": { 00:44:26.812 "identify_ctrlr": true 00:44:26.812 } 00:44:26.812 } 00:44:26.812 } 00:44:26.812 00:44:26.812 INFO: response: 00:44:26.812 { 00:44:26.812 "jsonrpc": "2.0", 00:44:26.812 "id": 1, 00:44:26.812 "result": true 00:44:26.812 } 00:44:26.812 00:44:26.812 11:24:46 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:26.812 11:24:46 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:44:26.812 11:24:46 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:26.812 11:24:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:26.812 INFO: Setting log level to 20 00:44:26.812 INFO: Setting log level to 20 00:44:26.812 INFO: Log level set to 20 00:44:26.812 INFO: Log level set to 20 00:44:26.812 INFO: Requests: 00:44:26.812 { 00:44:26.812 "jsonrpc": "2.0", 00:44:26.812 "method": "framework_start_init", 00:44:26.812 "id": 1 00:44:26.812 } 00:44:26.812 00:44:26.812 INFO: Requests: 00:44:26.812 { 00:44:26.812 "jsonrpc": "2.0", 00:44:26.812 "method": "framework_start_init", 00:44:26.812 "id": 1 00:44:26.812 } 00:44:26.812 00:44:26.812 [2024-10-09 11:24:46.793161] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:44:26.812 INFO: response: 00:44:26.812 { 00:44:26.812 "jsonrpc": "2.0", 00:44:26.812 "id": 1, 00:44:26.812 "result": true 00:44:26.812 } 00:44:26.812 00:44:26.812 INFO: response: 00:44:26.812 { 00:44:26.812 "jsonrpc": "2.0", 00:44:26.812 "id": 1, 00:44:26.812 "result": true 00:44:26.812 } 00:44:26.812 00:44:26.812 11:24:46 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:26.812 11:24:46 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:44:26.812 11:24:46 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:26.812 11:24:46 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:44:26.812 INFO: Setting log level to 40 00:44:26.812 INFO: Setting log level to 40 00:44:26.812 INFO: Setting log level to 40 00:44:27.097 [2024-10-09 11:24:46.806470] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:27.097 11:24:46 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:27.097 11:24:46 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:44:27.097 11:24:46 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:44:27.097 11:24:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:27.097 11:24:46 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:44:27.097 11:24:46 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:27.097 11:24:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:27.369 Nvme0n1 00:44:27.369 11:24:47 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:27.369 11:24:47 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:44:27.369 11:24:47 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:27.369 11:24:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:27.369 11:24:47 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:27.369 11:24:47 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:44:27.369 11:24:47 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:27.369 11:24:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:27.369 11:24:47 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:27.369 11:24:47 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:44:27.369 11:24:47 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:27.369 11:24:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:27.369 [2024-10-09 11:24:47.191476] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:27.369 11:24:47 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:27.369 11:24:47 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:44:27.369 11:24:47 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:27.369 11:24:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:27.369 [ 00:44:27.369 { 00:44:27.369 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:44:27.369 "subtype": "Discovery", 00:44:27.369 "listen_addresses": [], 00:44:27.369 "allow_any_host": true, 00:44:27.369 "hosts": [] 00:44:27.369 }, 00:44:27.369 { 00:44:27.369 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:44:27.369 "subtype": "NVMe", 00:44:27.369 "listen_addresses": [ 00:44:27.369 { 00:44:27.369 "trtype": "TCP", 00:44:27.369 "adrfam": "IPv4", 00:44:27.369 "traddr": "10.0.0.2", 00:44:27.369 "trsvcid": "4420" 00:44:27.369 } 00:44:27.369 ], 00:44:27.369 "allow_any_host": true, 00:44:27.369 "hosts": [], 00:44:27.369 "serial_number": 
"SPDK00000000000001", 00:44:27.369 "model_number": "SPDK bdev Controller", 00:44:27.369 "max_namespaces": 1, 00:44:27.369 "min_cntlid": 1, 00:44:27.369 "max_cntlid": 65519, 00:44:27.369 "namespaces": [ 00:44:27.369 { 00:44:27.369 "nsid": 1, 00:44:27.369 "bdev_name": "Nvme0n1", 00:44:27.369 "name": "Nvme0n1", 00:44:27.369 "nguid": "3634473052605494002538450000002B", 00:44:27.369 "uuid": "36344730-5260-5494-0025-38450000002b" 00:44:27.369 } 00:44:27.369 ] 00:44:27.369 } 00:44:27.369 ] 00:44:27.369 11:24:47 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:27.369 11:24:47 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:44:27.369 11:24:47 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:44:27.369 11:24:47 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:44:27.630 11:24:47 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605494 00:44:27.630 11:24:47 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:44:27.630 11:24:47 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:44:27.630 11:24:47 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:44:27.891 11:24:47 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:44:27.891 11:24:47 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605494 '!=' S64GNE0R605494 ']' 00:44:27.891 11:24:47 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:44:27.891 11:24:47 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:44:27.891 11:24:47 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:27.891 11:24:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:27.891 11:24:47 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:27.891 11:24:47 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:44:27.891 11:24:47 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:44:27.891 11:24:47 nvmf_identify_passthru -- nvmf/common.sh@514 -- # nvmfcleanup 00:44:27.891 11:24:47 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:44:27.891 11:24:47 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:44:27.891 11:24:47 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:44:27.891 11:24:47 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:44:27.891 11:24:47 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:44:27.891 rmmod nvme_tcp 00:44:27.891 rmmod nvme_fabrics 00:44:27.891 rmmod nvme_keyring 00:44:27.891 11:24:47 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:44:27.891 11:24:47 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:44:27.891 11:24:47 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:44:27.891 11:24:47 nvmf_identify_passthru -- nvmf/common.sh@515 -- # '[' -n 
2239029 ']' 00:44:27.891 11:24:47 nvmf_identify_passthru -- nvmf/common.sh@516 -- # killprocess 2239029 00:44:27.891 11:24:47 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # '[' -z 2239029 ']' 00:44:27.891 11:24:47 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # kill -0 2239029 00:44:27.891 11:24:47 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # uname 00:44:27.891 11:24:47 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:44:27.891 11:24:47 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2239029 00:44:28.152 11:24:47 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:44:28.152 11:24:47 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:44:28.152 11:24:47 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2239029' 00:44:28.152 killing process with pid 2239029 00:44:28.152 11:24:47 nvmf_identify_passthru -- common/autotest_common.sh@969 -- # kill 2239029 00:44:28.152 11:24:47 nvmf_identify_passthru -- common/autotest_common.sh@974 -- # wait 2239029 00:44:28.412 11:24:48 nvmf_identify_passthru -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:44:28.412 11:24:48 nvmf_identify_passthru -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:44:28.412 11:24:48 nvmf_identify_passthru -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:44:28.412 11:24:48 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:44:28.412 11:24:48 nvmf_identify_passthru -- nvmf/common.sh@789 -- # iptables-save 00:44:28.412 11:24:48 nvmf_identify_passthru -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:44:28.412 11:24:48 nvmf_identify_passthru -- nvmf/common.sh@789 -- # iptables-restore 00:44:28.412 11:24:48 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:44:28.412 11:24:48 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:44:28.412 11:24:48 nvmf_identify_passthru -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:28.412 11:24:48 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:44:28.412 11:24:48 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:30.326 11:24:50 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:44:30.326 00:44:30.326 real 0m13.227s 00:44:30.326 user 0m10.366s 00:44:30.326 sys 0m6.583s 00:44:30.326 11:24:50 nvmf_identify_passthru -- common/autotest_common.sh@1126 -- # xtrace_disable 00:44:30.326 11:24:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:30.326 ************************************ 00:44:30.326 END TEST nvmf_identify_passthru 00:44:30.326 ************************************ 00:44:30.326 11:24:50 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:44:30.326 11:24:50 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:44:30.326 11:24:50 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:44:30.326 11:24:50 -- common/autotest_common.sh@10 -- # set +x 00:44:30.588 ************************************ 00:44:30.588 START TEST nvmf_dif 00:44:30.588 ************************************ 00:44:30.588 11:24:50 nvmf_dif -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:44:30.588 * Looking for test storage... 
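The teardown traced above is nvmftestfini's fixed sequence: kill the nvmf_tgt reactor and wait for it to exit, unload the initiator-side kernel modules, then restore iptables minus the SPDK-tagged rules. A minimal sketch of that sequence, assuming root privileges and the pid variable from the trace:

    kill "$nvmfpid" && wait "$nvmfpid"                     # stop nvmf_tgt (reactor_0) and reap it
    modprobe -v -r nvme-tcp nvme-fabrics                   # rmmod nvme_tcp/nvme_fabrics/nvme_keyring
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # keep every rule except SPDK's tagged ACCEPTs

The same cleanup pattern closes every test in this log; the nvmf_dif suite that starts next performs the mirror-image setup.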
00:44:30.588 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:44:30.588 11:24:50 nvmf_dif -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:44:30.588 11:24:50 nvmf_dif -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:44:30.588 11:24:50 nvmf_dif -- common/autotest_common.sh@1691 -- # lcov --version 00:44:30.588 11:24:50 nvmf_dif -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:44:30.588 11:24:50 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:30.588 11:24:50 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:30.588 11:24:50 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:30.588 11:24:50 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:44:30.588 11:24:50 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:44:30.588 11:24:50 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:44:30.588 11:24:50 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:44:30.588 11:24:50 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:44:30.588 11:24:50 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:44:30.588 11:24:50 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:44:30.588 11:24:50 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:44:30.588 11:24:50 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:44:30.588 11:24:50 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:44:30.588 11:24:50 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:30.588 11:24:50 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:44:30.588 11:24:50 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:44:30.588 11:24:50 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:44:30.588 11:24:50 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:30.588 11:24:50 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:44:30.588 11:24:50 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:44:30.588 11:24:50 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:44:30.588 11:24:50 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:44:30.588 11:24:50 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:30.588 11:24:50 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:44:30.588 11:24:50 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:44:30.588 11:24:50 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:30.588 11:24:50 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:30.588 11:24:50 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:44:30.588 11:24:50 nvmf_dif -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:30.588 11:24:50 nvmf_dif -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:44:30.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:30.588 --rc genhtml_branch_coverage=1 00:44:30.588 --rc genhtml_function_coverage=1 00:44:30.588 --rc genhtml_legend=1 00:44:30.588 --rc geninfo_all_blocks=1 00:44:30.588 --rc geninfo_unexecuted_blocks=1 00:44:30.588 00:44:30.588 ' 00:44:30.588 11:24:50 nvmf_dif -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:44:30.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:30.588 --rc genhtml_branch_coverage=1 00:44:30.588 --rc genhtml_function_coverage=1 00:44:30.588 --rc genhtml_legend=1 00:44:30.588 --rc geninfo_all_blocks=1 00:44:30.588 --rc geninfo_unexecuted_blocks=1 00:44:30.588 00:44:30.588 ' 00:44:30.588 11:24:50 nvmf_dif -- common/autotest_common.sh@1705 -- # 
export 'LCOV=lcov 00:44:30.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:30.588 --rc genhtml_branch_coverage=1 00:44:30.588 --rc genhtml_function_coverage=1 00:44:30.588 --rc genhtml_legend=1 00:44:30.588 --rc geninfo_all_blocks=1 00:44:30.588 --rc geninfo_unexecuted_blocks=1 00:44:30.588 00:44:30.588 ' 00:44:30.588 11:24:50 nvmf_dif -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:44:30.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:30.588 --rc genhtml_branch_coverage=1 00:44:30.588 --rc genhtml_function_coverage=1 00:44:30.588 --rc genhtml_legend=1 00:44:30.588 --rc geninfo_all_blocks=1 00:44:30.588 --rc geninfo_unexecuted_blocks=1 00:44:30.588 00:44:30.588 ' 00:44:30.588 11:24:50 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:44:30.588 11:24:50 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:44:30.588 11:24:50 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:30.588 11:24:50 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:30.588 11:24:50 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:30.588 11:24:50 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:30.589 11:24:50 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:44:30.589 11:24:50 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:30.589 11:24:50 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:30.589 11:24:50 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:30.589 11:24:50 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:30.589 11:24:50 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:44:30.589 11:24:50 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:44:30.589 11:24:50 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:44:30.589 11:24:50 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:30.589 11:24:50 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:30.589 11:24:50 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:44:30.589 11:24:50 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:30.589 11:24:50 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:30.589 11:24:50 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:44:30.589 11:24:50 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:30.589 11:24:50 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:30.589 11:24:50 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:30.589 11:24:50 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:30.589 11:24:50 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:30.589 11:24:50 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:30.589 11:24:50 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:44:30.589 11:24:50 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:30.589 11:24:50 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:44:30.589 11:24:50 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:44:30.589 11:24:50 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:44:30.589 11:24:50 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:30.589 11:24:50 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:30.589 11:24:50 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:30.589 11:24:50 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:44:30.589 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:44:30.589 11:24:50 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:44:30.589 11:24:50 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:44:30.589 11:24:50 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:44:30.589 11:24:50 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:44:30.589 11:24:50 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:44:30.589 11:24:50 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:44:30.589 11:24:50 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:44:30.589 11:24:50 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:44:30.589 11:24:50 nvmf_dif -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:44:30.589 11:24:50 nvmf_dif -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:44:30.589 11:24:50 nvmf_dif -- nvmf/common.sh@474 -- # prepare_net_devs 00:44:30.589 11:24:50 nvmf_dif -- nvmf/common.sh@436 -- # local -g is_hw=no 00:44:30.589 11:24:50 nvmf_dif -- nvmf/common.sh@438 -- # remove_spdk_ns 00:44:30.589 11:24:50 nvmf_dif -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:30.589 11:24:50 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:44:30.589 11:24:50 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:30.851 11:24:50 nvmf_dif -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:44:30.851 11:24:50 nvmf_dif -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:44:30.851 11:24:50 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:44:30.851 11:24:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:44:38.991 11:24:57 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:44:38.991 11:24:57 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:44:38.991 11:24:57 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:44:38.991 11:24:57 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:44:38.991 11:24:57 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:44:38.991 11:24:57 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:44:38.991 11:24:57 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:44:38.991 11:24:57 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:44:38.991 11:24:57 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:44:38.991 11:24:57 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:44:38.991 11:24:57 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:44:38.991 11:24:57 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:44:38.991 11:24:57 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:44:38.991 11:24:57 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:44:38.991 11:24:57 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:44:38.991 11:24:57 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:44:38.991 11:24:57 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:44:38.991 11:24:57 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:44:38.991 11:24:57 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:44:38.991 11:24:57 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:44:38.991 11:24:57 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:44:38.991 11:24:57 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:44:38.991 11:24:57 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:44:38.991 11:24:57 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:44:38.991 11:24:57 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:44:38.991 11:24:57 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:44:38.991 11:24:57 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:44:38.991 11:24:57 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:44:38.991 11:24:57 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:44:38.991 11:24:57 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:44:38.991 11:24:57 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:44:38.991 11:24:57 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:44:38.991 11:24:57 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:44:38.991 11:24:57 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:44:38.991 11:24:57 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:44:38.991 Found 0000:31:00.0 (0x8086 - 0x159b) 00:44:38.991 11:24:57 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:44:38.991 11:24:57 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:44:38.991 11:24:57 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:38.991 11:24:57 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:38.991 11:24:57 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:44:38.991 
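Here gather_supported_nvmf_pci_devs builds allow-lists of NIC PCI IDs (Intel E810 0x1592/0x159b, X722 0x37d2, plus a set of Mellanox ConnectX IDs) and walks the PCI bus for matches; the two E810 ports it finds (0000:31:00.0 and 0000:31:00.1, vendor 0x8086 device 0x159b, driven by ice) are then resolved to netdev names through sysfs in the lines that follow. A rough standalone equivalent, assuming pciutils is installed and using the Intel IDs from the trace:

    for id in 8086:1592 8086:159b 8086:37d2; do
        for pci in $(lspci -D -d "$id" | awk '{print $1}'); do              # PCI slots matching vendor:device
            echo "$pci -> $(ls /sys/bus/pci/devices/$pci/net/ 2>/dev/null)"  # kernel netdev name(s), if bound
        done
    done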
11:24:57 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:44:38.991 11:24:57 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:44:38.991 Found 0000:31:00.1 (0x8086 - 0x159b) 00:44:38.991 11:24:57 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:44:38.991 11:24:57 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:44:38.991 11:24:57 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:38.991 11:24:57 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:38.991 11:24:57 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:44:38.991 11:24:57 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:44:38.991 11:24:57 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:44:38.991 11:24:57 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:44:38.991 11:24:57 nvmf_dif -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:44:38.991 11:24:57 nvmf_dif -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:38.991 11:24:57 nvmf_dif -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:44:38.991 11:24:57 nvmf_dif -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:38.991 11:24:57 nvmf_dif -- nvmf/common.sh@416 -- # [[ up == up ]] 00:44:38.991 11:24:57 nvmf_dif -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:44:38.991 11:24:57 nvmf_dif -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:38.991 11:24:57 nvmf_dif -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:44:38.991 Found net devices under 0000:31:00.0: cvl_0_0 00:44:38.991 11:24:57 nvmf_dif -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:44:38.991 11:24:57 nvmf_dif -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:44:38.991 11:24:57 nvmf_dif -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:38.991 11:24:57 nvmf_dif -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:44:38.991 11:24:57 nvmf_dif -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:38.991 11:24:57 nvmf_dif -- nvmf/common.sh@416 -- # [[ up == up ]] 00:44:38.991 11:24:57 nvmf_dif -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:44:38.991 11:24:57 nvmf_dif -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:38.991 11:24:57 nvmf_dif -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:44:38.991 Found net devices under 0000:31:00.1: cvl_0_1 00:44:38.991 11:24:57 nvmf_dif -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:44:38.991 11:24:57 nvmf_dif -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:44:38.991 11:24:57 nvmf_dif -- nvmf/common.sh@440 -- # is_hw=yes 00:44:38.991 11:24:57 nvmf_dif -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:44:38.991 11:24:57 nvmf_dif -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:44:38.991 11:24:57 nvmf_dif -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:44:38.991 11:24:57 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:44:38.991 11:24:57 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:44:38.991 11:24:57 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:44:38.992 11:24:57 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:44:38.992 11:24:57 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:44:38.992 11:24:57 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:44:38.992 11:24:57 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:44:38.992 11:24:57 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:44:38.992 11:24:57 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:44:38.992 11:24:57 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:44:38.992 11:24:57 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:44:38.992 11:24:57 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:44:38.992 11:24:57 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:44:38.992 11:24:57 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:44:38.992 11:24:57 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:44:38.992 11:24:57 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:44:38.992 11:24:57 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:44:38.992 11:24:57 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:44:38.992 11:24:57 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:44:38.992 11:24:57 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:44:38.992 11:24:57 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:44:38.992 11:24:57 nvmf_dif -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:44:38.992 11:24:57 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:44:38.992 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:44:38.992 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.557 ms 00:44:38.992 00:44:38.992 --- 10.0.0.2 ping statistics --- 00:44:38.992 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:38.992 rtt min/avg/max/mdev = 0.557/0.557/0.557/0.000 ms 00:44:38.992 11:24:57 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:44:38.992 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:44:38.992 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:44:38.992 00:44:38.992 --- 10.0.0.1 ping statistics --- 00:44:38.992 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:38.992 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:44:38.992 11:24:57 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:44:38.992 11:24:57 nvmf_dif -- nvmf/common.sh@448 -- # return 0 00:44:38.992 11:24:57 nvmf_dif -- nvmf/common.sh@476 -- # '[' iso == iso ']' 00:44:38.992 11:24:57 nvmf_dif -- nvmf/common.sh@477 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:44:40.906 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:44:40.906 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:44:40.906 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:44:40.906 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:44:40.906 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:44:40.906 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:44:40.906 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:44:40.906 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:44:40.906 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:44:40.906 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:44:40.906 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:44:40.906 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:44:40.906 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:44:40.906 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:44:40.906 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:44:40.906 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:44:40.906 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:44:41.167 11:25:01 nvmf_dif -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:44:41.167 11:25:01 nvmf_dif -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:44:41.167 11:25:01 nvmf_dif -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:44:41.167 11:25:01 nvmf_dif -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:44:41.167 11:25:01 nvmf_dif -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:44:41.167 11:25:01 nvmf_dif -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:44:41.167 11:25:01 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:44:41.167 11:25:01 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:44:41.167 11:25:01 nvmf_dif -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:44:41.167 11:25:01 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:44:41.167 11:25:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:44:41.167 11:25:01 nvmf_dif -- nvmf/common.sh@507 -- # nvmfpid=2245071 00:44:41.167 11:25:01 nvmf_dif -- nvmf/common.sh@508 -- # waitforlisten 2245071 00:44:41.167 11:25:01 nvmf_dif -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:44:41.167 11:25:01 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 2245071 ']' 00:44:41.167 11:25:01 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:41.167 11:25:01 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:44:41.167 11:25:01 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:44:41.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:41.167 11:25:01 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:44:41.167 11:25:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:44:41.167 [2024-10-09 11:25:01.120806] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:44:41.167 [2024-10-09 11:25:01.120855] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:44:41.428 [2024-10-09 11:25:01.256313] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:44:41.428 [2024-10-09 11:25:01.287487] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:41.428 [2024-10-09 11:25:01.304388] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:44:41.428 [2024-10-09 11:25:01.304415] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:44:41.428 [2024-10-09 11:25:01.304422] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:44:41.428 [2024-10-09 11:25:01.304429] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:44:41.428 [2024-10-09 11:25:01.304435] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:44:41.428 [2024-10-09 11:25:01.305019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:42.000 11:25:01 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:44:42.000 11:25:01 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:44:42.000 11:25:01 nvmf_dif -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:44:42.000 11:25:01 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:44:42.000 11:25:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:44:42.000 11:25:01 nvmf_dif -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:44:42.000 11:25:01 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:44:42.000 11:25:01 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:44:42.000 11:25:01 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:42.000 11:25:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:44:42.000 [2024-10-09 11:25:01.932981] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:42.000 11:25:01 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:42.000 11:25:01 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:44:42.000 11:25:01 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:44:42.000 11:25:01 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:44:42.000 11:25:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:44:42.000 ************************************ 00:44:42.000 START TEST fio_dif_1_default 00:44:42.000 ************************************ 00:44:42.000 11:25:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:44:42.000 11:25:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:44:42.000 11:25:01 
nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:44:42.000 11:25:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:44:42.000 11:25:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:44:42.000 11:25:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:44:42.000 11:25:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:44:42.000 11:25:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:42.000 11:25:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:44:42.000 bdev_null0 00:44:42.000 11:25:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:42.000 11:25:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:44:42.000 11:25:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:42.000 11:25:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:44:42.260 11:25:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:42.260 11:25:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:44:42.260 11:25:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:42.260 11:25:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:44:42.260 11:25:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:42.260 11:25:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:44:42.260 11:25:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:42.260 11:25:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:44:42.260 [2024-10-09 11:25:02.021132] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:42.260 11:25:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:42.260 11:25:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:44:42.260 11:25:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:44:42.260 11:25:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:44:42.260 11:25:02 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # config=() 00:44:42.260 11:25:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:42.260 11:25:02 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # local subsystem config 00:44:42.260 11:25:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:42.260 11:25:02 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:44:42.260 11:25:02 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:44:42.260 { 00:44:42.260 "params": { 00:44:42.260 "name": "Nvme$subsystem", 00:44:42.260 "trtype": "$TEST_TRANSPORT", 00:44:42.260 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:42.260 "adrfam": "ipv4", 00:44:42.260 
"trsvcid": "$NVMF_PORT", 00:44:42.260 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:42.260 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:42.260 "hdgst": ${hdgst:-false}, 00:44:42.260 "ddgst": ${ddgst:-false} 00:44:42.260 }, 00:44:42.260 "method": "bdev_nvme_attach_controller" 00:44:42.260 } 00:44:42.260 EOF 00:44:42.260 )") 00:44:42.261 11:25:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:44:42.261 11:25:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:44:42.261 11:25:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:44:42.261 11:25:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:44:42.261 11:25:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:44:42.261 11:25:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:44:42.261 11:25:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:42.261 11:25:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:44:42.261 11:25:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:44:42.261 11:25:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:44:42.261 11:25:02 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@580 -- # cat 00:44:42.261 11:25:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:42.261 11:25:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:44:42.261 11:25:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:44:42.261 11:25:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:44:42.261 11:25:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:44:42.261 11:25:02 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # jq . 
00:44:42.261 11:25:02 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@583 -- # IFS=, 00:44:42.261 11:25:02 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:44:42.261 "params": { 00:44:42.261 "name": "Nvme0", 00:44:42.261 "trtype": "tcp", 00:44:42.261 "traddr": "10.0.0.2", 00:44:42.261 "adrfam": "ipv4", 00:44:42.261 "trsvcid": "4420", 00:44:42.261 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:42.261 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:42.261 "hdgst": false, 00:44:42.261 "ddgst": false 00:44:42.261 }, 00:44:42.261 "method": "bdev_nvme_attach_controller" 00:44:42.261 }' 00:44:42.261 11:25:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:44:42.261 11:25:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:44:42.261 11:25:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:44:42.261 11:25:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:42.261 11:25:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:44:42.261 11:25:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:44:42.261 11:25:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:44:42.261 11:25:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:44:42.261 11:25:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:44:42.261 11:25:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:42.521 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:44:42.521 fio-3.35 00:44:42.521 Starting 1 thread 00:44:54.748 00:44:54.748 filename0: (groupid=0, jobs=1): err= 0: pid=2245609: Wed Oct 9 11:25:13 2024 00:44:54.748 read: IOPS=97, BW=391KiB/s (400kB/s)(3920KiB/10032msec) 00:44:54.748 slat (nsec): min=5366, max=31641, avg=6200.27, stdev=1571.12 00:44:54.748 clat (usec): min=914, max=44570, avg=40928.41, stdev=2592.79 00:44:54.748 lat (usec): min=920, max=44602, avg=40934.61, stdev=2592.91 00:44:54.748 clat percentiles (usec): 00:44:54.748 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:44:54.748 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:44:54.748 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:44:54.748 | 99.00th=[42730], 99.50th=[42730], 99.90th=[44827], 99.95th=[44827], 00:44:54.748 | 99.99th=[44827] 00:44:54.748 bw ( KiB/s): min= 384, max= 416, per=99.81%, avg=390.40, stdev=13.13, samples=20 00:44:54.748 iops : min= 96, max= 104, avg=97.60, stdev= 3.28, samples=20 00:44:54.748 lat (usec) : 1000=0.41% 00:44:54.748 lat (msec) : 50=99.59% 00:44:54.748 cpu : usr=94.08%, sys=5.71%, ctx=16, majf=0, minf=223 00:44:54.748 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:54.748 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:54.748 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:54.748 issued rwts: total=980,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:54.748 latency : target=0, window=0, percentile=100.00%, depth=4 00:44:54.748 00:44:54.748 Run 
status group 0 (all jobs): 00:44:54.748 READ: bw=391KiB/s (400kB/s), 391KiB/s-391KiB/s (400kB/s-400kB/s), io=3920KiB (4014kB), run=10032-10032msec 00:44:54.748 11:25:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:44:54.748 11:25:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:44:54.748 11:25:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:44:54.748 11:25:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:44:54.748 11:25:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:44:54.748 11:25:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:44:54.748 11:25:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:54.748 11:25:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:44:54.748 11:25:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:54.748 11:25:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:44:54.748 11:25:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:54.748 11:25:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:44:54.748 11:25:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:54.748 00:44:54.748 real 0m11.197s 00:44:54.748 user 0m22.418s 00:44:54.748 sys 0m0.889s 00:44:54.748 11:25:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:44:54.748 11:25:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:44:54.748 ************************************ 00:44:54.748 END TEST fio_dif_1_default 00:44:54.748 ************************************ 00:44:54.748 11:25:13 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:44:54.748 11:25:13 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:44:54.748 11:25:13 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:44:54.748 11:25:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:44:54.748 ************************************ 00:44:54.748 START TEST fio_dif_1_multi_subsystems 00:44:54.748 ************************************ 00:44:54.748 11:25:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:44:54.748 11:25:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:44:54.748 11:25:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:44:54.748 11:25:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:44:54.748 11:25:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:44:54.748 11:25:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:44:54.748 11:25:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:44:54.748 11:25:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:44:54.748 11:25:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:54.748 11:25:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:44:54.748 bdev_null0 00:44:54.748 11:25:13 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:54.748 11:25:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:44:54.748 11:25:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:54.748 11:25:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:44:54.748 11:25:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:54.748 11:25:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:44:54.748 11:25:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:54.748 11:25:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:44:54.748 11:25:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:54.748 11:25:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:44:54.748 11:25:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:54.748 11:25:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:44:54.748 [2024-10-09 11:25:13.296338] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:54.748 11:25:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:54.748 11:25:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:44:54.748 11:25:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:44:54.748 11:25:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:44:54.748 11:25:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:44:54.748 11:25:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:54.748 11:25:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:44:54.748 bdev_null1 00:44:54.748 11:25:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:54.748 11:25:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:44:54.748 11:25:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:54.748 11:25:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:44:54.748 11:25:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:54.748 11:25:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:44:54.748 11:25:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:54.748 11:25:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:44:54.748 11:25:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:54.748 11:25:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:44:54.748 11:25:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:54.748 11:25:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:44:54.748 11:25:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:54.748 11:25:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:44:54.748 11:25:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:44:54.748 11:25:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:44:54.748 11:25:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # config=() 00:44:54.748 11:25:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:54.748 11:25:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # local subsystem config 00:44:54.748 11:25:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:44:54.748 11:25:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:54.748 11:25:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:44:54.748 { 00:44:54.748 "params": { 00:44:54.748 "name": "Nvme$subsystem", 00:44:54.748 "trtype": "$TEST_TRANSPORT", 00:44:54.748 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:54.748 "adrfam": "ipv4", 00:44:54.748 "trsvcid": "$NVMF_PORT", 00:44:54.748 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:54.748 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:54.749 "hdgst": ${hdgst:-false}, 00:44:54.749 "ddgst": ${ddgst:-false} 00:44:54.749 }, 00:44:54.749 "method": "bdev_nvme_attach_controller" 00:44:54.749 } 00:44:54.749 EOF 00:44:54.749 )") 00:44:54.749 11:25:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:44:54.749 11:25:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:44:54.749 11:25:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:44:54.749 11:25:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:44:54.749 11:25:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:44:54.749 11:25:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:44:54.749 11:25:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:54.749 11:25:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:44:54.749 11:25:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:44:54.749 11:25:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:44:54.749 11:25:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # cat 00:44:54.749 11:25:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:54.749 11:25:13 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:44:54.749 11:25:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:44:54.749 11:25:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:44:54.749 11:25:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:44:54.749 11:25:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:44:54.749 11:25:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:44:54.749 11:25:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:44:54.749 { 00:44:54.749 "params": { 00:44:54.749 "name": "Nvme$subsystem", 00:44:54.749 "trtype": "$TEST_TRANSPORT", 00:44:54.749 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:54.749 "adrfam": "ipv4", 00:44:54.749 "trsvcid": "$NVMF_PORT", 00:44:54.749 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:54.749 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:54.749 "hdgst": ${hdgst:-false}, 00:44:54.749 "ddgst": ${ddgst:-false} 00:44:54.749 }, 00:44:54.749 "method": "bdev_nvme_attach_controller" 00:44:54.749 } 00:44:54.749 EOF 00:44:54.749 )") 00:44:54.749 11:25:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:44:54.749 11:25:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # cat 00:44:54.749 11:25:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:44:54.749 11:25:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # jq . 00:44:54.749 11:25:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@583 -- # IFS=, 00:44:54.749 11:25:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:44:54.749 "params": { 00:44:54.749 "name": "Nvme0", 00:44:54.749 "trtype": "tcp", 00:44:54.749 "traddr": "10.0.0.2", 00:44:54.749 "adrfam": "ipv4", 00:44:54.749 "trsvcid": "4420", 00:44:54.749 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:54.749 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:54.749 "hdgst": false, 00:44:54.749 "ddgst": false 00:44:54.749 }, 00:44:54.749 "method": "bdev_nvme_attach_controller" 00:44:54.749 },{ 00:44:54.749 "params": { 00:44:54.749 "name": "Nvme1", 00:44:54.749 "trtype": "tcp", 00:44:54.749 "traddr": "10.0.0.2", 00:44:54.749 "adrfam": "ipv4", 00:44:54.749 "trsvcid": "4420", 00:44:54.749 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:44:54.749 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:44:54.749 "hdgst": false, 00:44:54.749 "ddgst": false 00:44:54.749 }, 00:44:54.749 "method": "bdev_nvme_attach_controller" 00:44:54.749 }' 00:44:54.749 11:25:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:44:54.749 11:25:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:44:54.749 11:25:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:44:54.749 11:25:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:54.749 11:25:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:44:54.749 11:25:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:44:54.749 11:25:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # 
asan_lib= 00:44:54.749 11:25:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:44:54.749 11:25:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:44:54.749 11:25:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:54.749 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:44:54.749 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:44:54.749 fio-3.35 00:44:54.749 Starting 2 threads 00:45:04.746 00:45:04.746 filename0: (groupid=0, jobs=1): err= 0: pid=2248029: Wed Oct 9 11:25:24 2024 00:45:04.746 read: IOPS=190, BW=762KiB/s (780kB/s)(7632KiB/10014msec) 00:45:04.746 slat (nsec): min=5412, max=28135, avg=6454.55, stdev=1721.28 00:45:04.746 clat (usec): min=562, max=42958, avg=20975.03, stdev=20130.94 00:45:04.746 lat (usec): min=570, max=42966, avg=20981.49, stdev=20130.78 00:45:04.746 clat percentiles (usec): 00:45:04.746 | 1.00th=[ 644], 5.00th=[ 717], 10.00th=[ 848], 20.00th=[ 906], 00:45:04.746 | 30.00th=[ 922], 40.00th=[ 947], 50.00th=[ 2245], 60.00th=[41157], 00:45:04.746 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:45:04.746 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:45:04.746 | 99.99th=[42730] 00:45:04.746 bw ( KiB/s): min= 704, max= 768, per=66.17%, avg=761.60, stdev=19.70, samples=20 00:45:04.746 iops : min= 176, max= 192, avg=190.40, stdev= 4.92, samples=20 00:45:04.746 lat (usec) : 750=6.50%, 1000=42.61% 00:45:04.746 lat (msec) : 2=0.79%, 4=0.21%, 50=49.90% 00:45:04.746 cpu : usr=95.53%, sys=4.24%, ctx=11, majf=0, minf=130 00:45:04.746 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:04.746 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:04.746 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:04.746 issued rwts: total=1908,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:04.746 latency : target=0, window=0, percentile=100.00%, depth=4 00:45:04.746 filename1: (groupid=0, jobs=1): err= 0: pid=2248030: Wed Oct 9 11:25:24 2024 00:45:04.746 read: IOPS=97, BW=389KiB/s (399kB/s)(3904KiB/10031msec) 00:45:04.746 slat (nsec): min=5381, max=40206, avg=6541.33, stdev=2078.76 00:45:04.746 clat (usec): min=912, max=43601, avg=41090.60, stdev=2639.02 00:45:04.746 lat (usec): min=918, max=43627, avg=41097.14, stdev=2639.10 00:45:04.746 clat percentiles (usec): 00:45:04.746 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:45:04.746 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:45:04.746 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42730], 00:45:04.746 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43779], 99.95th=[43779], 00:45:04.746 | 99.99th=[43779] 00:45:04.746 bw ( KiB/s): min= 384, max= 416, per=33.74%, avg=388.80, stdev=11.72, samples=20 00:45:04.746 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:45:04.746 lat (usec) : 1000=0.41% 00:45:04.746 lat (msec) : 50=99.59% 00:45:04.746 cpu : usr=95.25%, sys=4.51%, ctx=13, majf=0, minf=195 00:45:04.746 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:04.746 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:45:04.746 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:04.746 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:04.746 latency : target=0, window=0, percentile=100.00%, depth=4 00:45:04.746 00:45:04.746 Run status group 0 (all jobs): 00:45:04.746 READ: bw=1150KiB/s (1178kB/s), 389KiB/s-762KiB/s (399kB/s-780kB/s), io=11.3MiB (11.8MB), run=10014-10031msec 00:45:05.008 11:25:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:45:05.008 11:25:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:45:05.008 11:25:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:45:05.008 11:25:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:45:05.008 11:25:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:45:05.008 11:25:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:45:05.008 11:25:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:05.008 11:25:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:05.008 11:25:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:05.008 11:25:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:45:05.008 11:25:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:05.008 11:25:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:05.008 11:25:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:05.008 11:25:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:45:05.008 11:25:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:45:05.008 11:25:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:45:05.008 11:25:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:45:05.008 11:25:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:05.008 11:25:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:05.008 11:25:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:05.008 11:25:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:45:05.008 11:25:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:05.008 11:25:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:05.008 11:25:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:05.008 00:45:05.008 real 0m11.645s 00:45:05.008 user 0m32.838s 00:45:05.008 sys 0m1.204s 00:45:05.008 11:25:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:45:05.008 11:25:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:05.008 ************************************ 00:45:05.008 END TEST fio_dif_1_multi_subsystems 00:45:05.008 ************************************ 00:45:05.008 11:25:24 nvmf_dif -- target/dif.sh@143 -- 
# run_test fio_dif_rand_params fio_dif_rand_params 00:45:05.008 11:25:24 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:45:05.008 11:25:24 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:45:05.008 11:25:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:45:05.008 ************************************ 00:45:05.008 START TEST fio_dif_rand_params 00:45:05.008 ************************************ 00:45:05.008 11:25:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:45:05.008 11:25:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:45:05.008 11:25:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:45:05.008 11:25:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:45:05.008 11:25:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:45:05.008 11:25:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:45:05.008 11:25:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:45:05.008 11:25:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:45:05.008 11:25:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:45:05.008 11:25:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:45:05.008 11:25:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:45:05.008 11:25:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:45:05.008 11:25:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:45:05.008 11:25:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:45:05.008 11:25:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:05.008 11:25:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:05.008 bdev_null0 00:45:05.008 11:25:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:05.008 11:25:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:45:05.008 11:25:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:05.008 11:25:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:05.008 11:25:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:05.008 11:25:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:45:05.008 11:25:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:05.008 11:25:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:05.269 11:25:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:05.269 11:25:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:45:05.269 11:25:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:05.269 11:25:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:05.269 [2024-10-09 11:25:25.021680] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
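[Annotation] Condensed from the create_subsystem trace above: the whole per-subsystem setup is four RPCs. A minimal standalone sketch, using scripts/rpc.py directly (the harness's rpc_cmd is effectively a wrapper around it; sizes, NQNs, and the address are copied from this run):

# Null bdev: 64 MB, 512-byte blocks with 16 bytes of metadata, protection (DIF) type 3
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
# NVMe-oF subsystem with open host access, backed by that bdev
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
# Listen on NVMe/TCP 10.0.0.2:4420 (matches the *NOTICE* line just above)
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

destroy_subsystems later undoes this in reverse with nvmf_delete_subsystem and bdev_null_delete, as seen at the end of the previous test.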
00:45:05.269 11:25:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:05.269 11:25:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:45:05.269 11:25:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:45:05.269 11:25:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:45:05.269 11:25:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:45:05.269 11:25:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:05.269 11:25:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:45:05.269 11:25:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:05.269 11:25:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:45:05.269 11:25:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:45:05.269 { 00:45:05.269 "params": { 00:45:05.269 "name": "Nvme$subsystem", 00:45:05.269 "trtype": "$TEST_TRANSPORT", 00:45:05.269 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:05.269 "adrfam": "ipv4", 00:45:05.269 "trsvcid": "$NVMF_PORT", 00:45:05.269 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:05.269 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:05.269 "hdgst": ${hdgst:-false}, 00:45:05.269 "ddgst": ${ddgst:-false} 00:45:05.269 }, 00:45:05.269 "method": "bdev_nvme_attach_controller" 00:45:05.269 } 00:45:05.269 EOF 00:45:05.269 )") 00:45:05.269 11:25:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:45:05.269 11:25:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:45:05.269 11:25:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:45:05.269 11:25:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:45:05.269 11:25:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:45:05.269 11:25:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:45:05.269 11:25:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:05.269 11:25:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:45:05.269 11:25:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:45:05.269 11:25:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:45:05.269 11:25:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:45:05.269 11:25:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:05.269 11:25:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:45:05.269 11:25:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:45:05.269 11:25:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:45:05.269 11:25:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:45:05.269 11:25:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # 
jq . 00:45:05.269 11:25:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:45:05.269 11:25:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:45:05.269 "params": { 00:45:05.269 "name": "Nvme0", 00:45:05.269 "trtype": "tcp", 00:45:05.269 "traddr": "10.0.0.2", 00:45:05.269 "adrfam": "ipv4", 00:45:05.269 "trsvcid": "4420", 00:45:05.269 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:05.269 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:05.269 "hdgst": false, 00:45:05.269 "ddgst": false 00:45:05.269 }, 00:45:05.269 "method": "bdev_nvme_attach_controller" 00:45:05.269 }' 00:45:05.269 11:25:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:45:05.269 11:25:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:45:05.269 11:25:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:45:05.269 11:25:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:05.269 11:25:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:45:05.269 11:25:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:45:05.269 11:25:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:45:05.269 11:25:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:45:05.269 11:25:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:45:05.269 11:25:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:05.530 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:45:05.530 ... 
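[Annotation] The launch sequence visible above (ldd piped through grep libasan and grep libclang_rt.asan, then LD_PRELOAD) is what lets an unmodified external fio load the SPDK bdev ioengine. A sketch of that wrapper logic under the paths used in this run; the two file arguments stand in for the /dev/fd/62 and /dev/fd/61 process-substitution descriptors the harness actually passes:

plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
asan_lib=
# If the plugin links a sanitizer runtime, that runtime must be preloaded ahead of it
for sanitizer in libasan libclang_rt.asan; do
  asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
  [[ -n $asan_lib ]] && break
done
# Preload the (possibly empty) sanitizer runtime plus the SPDK engine, then run fio;
# bdev.json and job.fio are placeholders for the harness's /dev/fd/62 and /dev/fd/61
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio --ioengine=spdk_bdev \
  --spdk_json_conf bdev.json job.fio

In this run no sanitizer library is found ([[ -n '' ]] fails both times), so LD_PRELOAD ends up containing only the plugin path, with a leading space, exactly as logged.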
00:45:05.530 fio-3.35 00:45:05.530 Starting 3 threads 00:45:12.112 00:45:12.112 filename0: (groupid=0, jobs=1): err= 0: pid=2250224: Wed Oct 9 11:25:31 2024 00:45:12.112 read: IOPS=248, BW=31.1MiB/s (32.6MB/s)(156MiB/5011msec) 00:45:12.112 slat (nsec): min=5482, max=36166, avg=7560.55, stdev=1741.72 00:45:12.112 clat (usec): min=5343, max=90620, avg=12054.62, stdev=6756.91 00:45:12.112 lat (usec): min=5352, max=90627, avg=12062.18, stdev=6756.72 00:45:12.112 clat percentiles (usec): 00:45:12.112 | 1.00th=[ 6194], 5.00th=[ 7046], 10.00th=[ 8160], 20.00th=[ 9372], 00:45:12.112 | 30.00th=[10159], 40.00th=[10683], 50.00th=[11207], 60.00th=[11863], 00:45:12.113 | 70.00th=[12518], 80.00th=[13173], 90.00th=[13960], 95.00th=[14746], 00:45:12.113 | 99.00th=[51119], 99.50th=[52691], 99.90th=[89654], 99.95th=[90702], 00:45:12.113 | 99.99th=[90702] 00:45:12.113 bw ( KiB/s): min=25856, max=39424, per=34.25%, avg=31820.80, stdev=4420.97, samples=10 00:45:12.113 iops : min= 202, max= 308, avg=248.60, stdev=34.54, samples=10 00:45:12.113 lat (msec) : 10=28.25%, 20=69.50%, 50=0.80%, 100=1.44% 00:45:12.113 cpu : usr=94.65%, sys=5.09%, ctx=7, majf=0, minf=77 00:45:12.113 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:12.113 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:12.113 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:12.113 issued rwts: total=1246,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:12.113 latency : target=0, window=0, percentile=100.00%, depth=3 00:45:12.113 filename0: (groupid=0, jobs=1): err= 0: pid=2250225: Wed Oct 9 11:25:31 2024 00:45:12.113 read: IOPS=230, BW=28.8MiB/s (30.2MB/s)(145MiB/5046msec) 00:45:12.113 slat (nsec): min=5409, max=31233, avg=6615.75, stdev=1332.50 00:45:12.113 clat (usec): min=5547, max=53702, avg=12971.02, stdev=5566.03 00:45:12.113 lat (usec): min=5556, max=53710, avg=12977.64, stdev=5566.34 00:45:12.113 clat percentiles (usec): 00:45:12.113 | 1.00th=[ 7046], 5.00th=[ 7635], 10.00th=[ 8848], 20.00th=[10290], 00:45:12.113 | 30.00th=[11207], 40.00th=[11994], 50.00th=[12649], 60.00th=[13173], 00:45:12.113 | 70.00th=[13829], 80.00th=[14353], 90.00th=[15270], 95.00th=[16057], 00:45:12.113 | 99.00th=[51119], 99.50th=[51643], 99.90th=[53740], 99.95th=[53740], 00:45:12.113 | 99.99th=[53740] 00:45:12.113 bw ( KiB/s): min=20736, max=34560, per=31.97%, avg=29702.20, stdev=3689.08, samples=10 00:45:12.113 iops : min= 162, max= 270, avg=232.00, stdev=28.80, samples=10 00:45:12.113 lat (msec) : 10=17.80%, 20=80.48%, 50=0.34%, 100=1.38% 00:45:12.113 cpu : usr=94.11%, sys=5.65%, ctx=13, majf=0, minf=120 00:45:12.113 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:12.113 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:12.113 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:12.113 issued rwts: total=1163,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:12.113 latency : target=0, window=0, percentile=100.00%, depth=3 00:45:12.113 filename0: (groupid=0, jobs=1): err= 0: pid=2250226: Wed Oct 9 11:25:31 2024 00:45:12.113 read: IOPS=248, BW=31.1MiB/s (32.6MB/s)(157MiB/5045msec) 00:45:12.113 slat (nsec): min=5404, max=63617, avg=7350.53, stdev=2285.94 00:45:12.113 clat (usec): min=5573, max=52208, avg=12024.87, stdev=8074.91 00:45:12.113 lat (usec): min=5582, max=52215, avg=12032.22, stdev=8075.15 00:45:12.113 clat percentiles (usec): 00:45:12.113 | 1.00th=[ 6849], 5.00th=[ 7963], 10.00th=[ 8586], 20.00th=[ 
9110], 00:45:12.113 | 30.00th=[ 9503], 40.00th=[10028], 50.00th=[10421], 60.00th=[10814], 00:45:12.113 | 70.00th=[11338], 80.00th=[11863], 90.00th=[12649], 95.00th=[13960], 00:45:12.113 | 99.00th=[50594], 99.50th=[51119], 99.90th=[52167], 99.95th=[52167], 00:45:12.113 | 99.99th=[52167] 00:45:12.113 bw ( KiB/s): min=23040, max=37632, per=34.49%, avg=32051.20, stdev=3960.06, samples=10 00:45:12.113 iops : min= 180, max= 294, avg=250.40, stdev=30.94, samples=10 00:45:12.113 lat (msec) : 10=39.15%, 20=56.62%, 50=2.39%, 100=1.83% 00:45:12.113 cpu : usr=95.70%, sys=4.04%, ctx=7, majf=0, minf=162 00:45:12.113 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:12.113 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:12.113 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:12.113 issued rwts: total=1254,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:12.113 latency : target=0, window=0, percentile=100.00%, depth=3 00:45:12.113 00:45:12.113 Run status group 0 (all jobs): 00:45:12.113 READ: bw=90.7MiB/s (95.1MB/s), 28.8MiB/s-31.1MiB/s (30.2MB/s-32.6MB/s), io=458MiB (480MB), run=5011-5046msec 00:45:12.113 11:25:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:45:12.113 11:25:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:45:12.113 11:25:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:45:12.113 11:25:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:45:12.113 11:25:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:45:12.113 11:25:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:45:12.113 11:25:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:12.113 11:25:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:12.113 11:25:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:12.113 11:25:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:45:12.113 11:25:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:12.113 11:25:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:12.113 11:25:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:12.113 11:25:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:45:12.113 11:25:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:45:12.113 11:25:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:45:12.113 11:25:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:45:12.113 11:25:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:45:12.113 11:25:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:45:12.113 11:25:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:45:12.113 11:25:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:45:12.113 11:25:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:45:12.113 11:25:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:45:12.113 11:25:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:45:12.113 11:25:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd 
bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:45:12.113 11:25:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:12.113 11:25:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:12.113 bdev_null0 00:45:12.113 11:25:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:12.113 11:25:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:45:12.113 11:25:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:12.113 11:25:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:12.113 11:25:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:12.113 11:25:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:45:12.113 11:25:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:12.113 11:25:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:12.113 11:25:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:12.113 11:25:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:45:12.113 11:25:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:12.113 11:25:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:12.113 [2024-10-09 11:25:31.337330] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:12.113 11:25:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:12.113 11:25:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:45:12.113 11:25:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:45:12.113 11:25:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:45:12.113 11:25:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:45:12.113 11:25:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:12.113 11:25:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:12.113 bdev_null1 00:45:12.113 11:25:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:12.113 11:25:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:45:12.113 11:25:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:12.113 11:25:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:12.113 11:25:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:12.113 11:25:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:45:12.113 11:25:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:12.113 11:25:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:12.113 11:25:31 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:12.113 11:25:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:45:12.113 11:25:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:12.113 11:25:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:12.113 11:25:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:12.113 11:25:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:45:12.113 11:25:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:45:12.113 11:25:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:45:12.113 11:25:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:45:12.113 11:25:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:12.113 11:25:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:12.113 bdev_null2 00:45:12.113 11:25:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:12.113 11:25:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:45:12.113 11:25:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:12.114 11:25:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:12.114 11:25:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:12.114 11:25:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:45:12.114 11:25:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:12.114 11:25:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:12.114 11:25:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:12.114 11:25:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:45:12.114 11:25:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:12.114 11:25:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:12.114 11:25:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:12.114 11:25:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:45:12.114 11:25:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:45:12.114 11:25:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:45:12.114 11:25:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:12.114 11:25:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:45:12.114 11:25:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:45:12.114 11:25:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:12.114 11:25:31 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:45:12.114 11:25:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:45:12.114 11:25:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:45:12.114 11:25:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:45:12.114 { 00:45:12.114 "params": { 00:45:12.114 "name": "Nvme$subsystem", 00:45:12.114 "trtype": "$TEST_TRANSPORT", 00:45:12.114 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:12.114 "adrfam": "ipv4", 00:45:12.114 "trsvcid": "$NVMF_PORT", 00:45:12.114 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:12.114 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:12.114 "hdgst": ${hdgst:-false}, 00:45:12.114 "ddgst": ${ddgst:-false} 00:45:12.114 }, 00:45:12.114 "method": "bdev_nvme_attach_controller" 00:45:12.114 } 00:45:12.114 EOF 00:45:12.114 )") 00:45:12.114 11:25:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:45:12.114 11:25:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:45:12.114 11:25:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:45:12.114 11:25:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:45:12.114 11:25:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:12.114 11:25:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:45:12.114 11:25:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:45:12.114 11:25:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:45:12.114 11:25:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:45:12.114 11:25:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:12.114 11:25:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:45:12.114 11:25:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:45:12.114 11:25:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:45:12.114 11:25:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:45:12.114 11:25:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:45:12.114 11:25:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:45:12.114 11:25:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:45:12.114 { 00:45:12.114 "params": { 00:45:12.114 "name": "Nvme$subsystem", 00:45:12.114 "trtype": "$TEST_TRANSPORT", 00:45:12.114 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:12.114 "adrfam": "ipv4", 00:45:12.114 "trsvcid": "$NVMF_PORT", 00:45:12.114 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:12.114 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:12.114 "hdgst": ${hdgst:-false}, 00:45:12.114 "ddgst": ${ddgst:-false} 00:45:12.114 }, 00:45:12.114 "method": "bdev_nvme_attach_controller" 00:45:12.114 } 00:45:12.114 EOF 00:45:12.114 )") 00:45:12.114 11:25:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:45:12.114 11:25:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:45:12.114 11:25:31 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:45:12.114 11:25:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:45:12.114 11:25:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:45:12.114 11:25:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:45:12.114 11:25:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:45:12.114 11:25:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:45:12.114 { 00:45:12.114 "params": { 00:45:12.114 "name": "Nvme$subsystem", 00:45:12.114 "trtype": "$TEST_TRANSPORT", 00:45:12.114 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:12.114 "adrfam": "ipv4", 00:45:12.114 "trsvcid": "$NVMF_PORT", 00:45:12.114 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:12.114 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:12.114 "hdgst": ${hdgst:-false}, 00:45:12.114 "ddgst": ${ddgst:-false} 00:45:12.114 }, 00:45:12.114 "method": "bdev_nvme_attach_controller" 00:45:12.114 } 00:45:12.114 EOF 00:45:12.114 )") 00:45:12.114 11:25:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:45:12.114 11:25:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 00:45:12.114 11:25:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:45:12.114 11:25:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:45:12.114 "params": { 00:45:12.114 "name": "Nvme0", 00:45:12.114 "trtype": "tcp", 00:45:12.114 "traddr": "10.0.0.2", 00:45:12.114 "adrfam": "ipv4", 00:45:12.114 "trsvcid": "4420", 00:45:12.114 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:12.114 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:12.114 "hdgst": false, 00:45:12.114 "ddgst": false 00:45:12.114 }, 00:45:12.114 "method": "bdev_nvme_attach_controller" 00:45:12.114 },{ 00:45:12.114 "params": { 00:45:12.114 "name": "Nvme1", 00:45:12.114 "trtype": "tcp", 00:45:12.114 "traddr": "10.0.0.2", 00:45:12.114 "adrfam": "ipv4", 00:45:12.114 "trsvcid": "4420", 00:45:12.114 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:45:12.114 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:45:12.114 "hdgst": false, 00:45:12.114 "ddgst": false 00:45:12.114 }, 00:45:12.114 "method": "bdev_nvme_attach_controller" 00:45:12.114 },{ 00:45:12.114 "params": { 00:45:12.114 "name": "Nvme2", 00:45:12.114 "trtype": "tcp", 00:45:12.114 "traddr": "10.0.0.2", 00:45:12.114 "adrfam": "ipv4", 00:45:12.114 "trsvcid": "4420", 00:45:12.114 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:45:12.114 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:45:12.114 "hdgst": false, 00:45:12.114 "ddgst": false 00:45:12.114 }, 00:45:12.114 "method": "bdev_nvme_attach_controller" 00:45:12.114 }' 00:45:12.114 11:25:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:45:12.114 11:25:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:45:12.114 11:25:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:45:12.114 11:25:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:12.114 11:25:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:45:12.114 11:25:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:45:12.114 11:25:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:45:12.114 
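[Annotation] The JSON handed to fio above is assembled one stanza per subsystem by gen_nvmf_target_json. A condensed sketch of the shell pattern visible in the trace (the config array plus the IFS comma join from nvmf/common.sh); literal values replace the $TEST_TRANSPORT / $NVMF_FIRST_TARGET_IP / $NVMF_PORT variables, and hdgst/ddgst are written as the false the original defaults to via ${hdgst:-false}:

config=()
for sub in 0 1 2; do
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$sub",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$sub",
    "hostnqn": "nqn.2016-06.io.spdk:host$sub",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
  )")
done
# Comma-join the stanzas; the harness embeds the result in its full bdev JSON
# config and validates it with 'jq .' before feeding it to fio on /dev/fd/62
IFS=,
printf '%s\n' "${config[*]}"

The comma-joined output of that printf is exactly the '{...},{...},{...}' block logged above, one bdev_nvme_attach_controller entry per cnode0/cnode1/cnode2 subsystem.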
11:25:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:45:12.114 11:25:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:45:12.114 11:25:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:12.114 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:45:12.114 ... 00:45:12.114 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:45:12.114 ... 00:45:12.114 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:45:12.114 ... 00:45:12.114 fio-3.35 00:45:12.114 Starting 24 threads 00:45:24.349 00:45:24.349 filename0: (groupid=0, jobs=1): err= 0: pid=2251730: Wed Oct 9 11:25:43 2024 00:45:24.349 read: IOPS=537, BW=2149KiB/s (2201kB/s)(21.0MiB/10006msec) 00:45:24.349 slat (nsec): min=5386, max=80519, avg=7797.62, stdev=4699.41 00:45:24.349 clat (usec): min=1248, max=56482, avg=29711.98, stdev=5839.79 00:45:24.349 lat (usec): min=1266, max=56488, avg=29719.78, stdev=5839.13 00:45:24.349 clat percentiles (usec): 00:45:24.349 | 1.00th=[ 5407], 5.00th=[20317], 10.00th=[21627], 20.00th=[23200], 00:45:24.349 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32375], 60.00th=[32375], 00:45:24.349 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33817], 00:45:24.349 | 99.00th=[35390], 99.50th=[42206], 99.90th=[52167], 99.95th=[52167], 00:45:24.349 | 99.99th=[56361] 00:45:24.349 bw ( KiB/s): min= 1920, max= 2688, per=4.51%, avg=2142.32, stdev=221.05, samples=19 00:45:24.349 iops : min= 480, max= 672, avg=535.58, stdev=55.26, samples=19 00:45:24.349 lat (msec) : 2=0.02%, 4=0.87%, 10=0.60%, 20=3.05%, 50=95.31% 00:45:24.349 lat (msec) : 100=0.15% 00:45:24.349 cpu : usr=98.73%, sys=0.97%, ctx=13, majf=0, minf=39 00:45:24.349 IO depths : 1=5.6%, 2=11.5%, 4=24.1%, 8=51.8%, 16=6.9%, 32=0.0%, >=64=0.0% 00:45:24.349 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:24.349 complete : 0=0.0%, 4=93.9%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:24.349 issued rwts: total=5376,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:24.349 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:24.349 filename0: (groupid=0, jobs=1): err= 0: pid=2251731: Wed Oct 9 11:25:43 2024 00:45:24.349 read: IOPS=487, BW=1950KiB/s (1997kB/s)(19.1MiB/10043msec) 00:45:24.349 slat (nsec): min=5408, max=79487, avg=21022.40, stdev=13007.66 00:45:24.349 clat (usec): min=11255, max=54727, avg=32524.87, stdev=1572.15 00:45:24.349 lat (usec): min=11265, max=54735, avg=32545.89, stdev=1571.79 00:45:24.349 clat percentiles (usec): 00:45:24.349 | 1.00th=[25297], 5.00th=[31589], 10.00th=[31851], 20.00th=[32113], 00:45:24.349 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:45:24.349 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[33817], 00:45:24.349 | 99.00th=[35390], 99.50th=[36963], 99.90th=[40633], 99.95th=[51119], 00:45:24.349 | 99.99th=[54789] 00:45:24.349 bw ( KiB/s): min= 1853, max= 2048, per=4.11%, avg=1954.25, stdev=61.52, samples=20 00:45:24.349 iops : min= 463, max= 512, avg=488.55, stdev=15.40, samples=20 00:45:24.349 lat (msec) : 20=0.33%, 50=99.59%, 100=0.08% 00:45:24.349 cpu : usr=98.95%, sys=0.73%, ctx=14, majf=0, 
minf=31 00:45:24.349 IO depths : 1=4.4%, 2=10.7%, 4=25.0%, 8=51.9%, 16=8.1%, 32=0.0%, >=64=0.0% 00:45:24.349 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:24.349 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:24.349 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:24.349 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:24.349 filename0: (groupid=0, jobs=1): err= 0: pid=2251732: Wed Oct 9 11:25:43 2024 00:45:24.349 read: IOPS=494, BW=1979KiB/s (2026kB/s)(19.3MiB/10007msec) 00:45:24.349 slat (nsec): min=5384, max=75874, avg=12622.63, stdev=9837.52 00:45:24.349 clat (usec): min=11001, max=54052, avg=32271.54, stdev=4392.37 00:45:24.349 lat (usec): min=11007, max=54072, avg=32284.17, stdev=4392.48 00:45:24.349 clat percentiles (usec): 00:45:24.349 | 1.00th=[18482], 5.00th=[25297], 10.00th=[27132], 20.00th=[31851], 00:45:24.349 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:45:24.349 | 70.00th=[32900], 80.00th=[33424], 90.00th=[34341], 95.00th=[40109], 00:45:24.349 | 99.00th=[47973], 99.50th=[50070], 99.90th=[54264], 99.95th=[54264], 00:45:24.349 | 99.99th=[54264] 00:45:24.349 bw ( KiB/s): min= 1795, max= 2080, per=4.15%, avg=1969.00, stdev=59.01, samples=19 00:45:24.349 iops : min= 448, max= 520, avg=492.21, stdev=14.88, samples=19 00:45:24.349 lat (msec) : 20=1.33%, 50=98.10%, 100=0.57% 00:45:24.349 cpu : usr=98.95%, sys=0.73%, ctx=14, majf=0, minf=27 00:45:24.349 IO depths : 1=1.1%, 2=2.2%, 4=6.2%, 8=75.7%, 16=14.8%, 32=0.0%, >=64=0.0% 00:45:24.349 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:24.349 complete : 0=0.0%, 4=89.9%, 8=7.8%, 16=2.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:24.349 issued rwts: total=4950,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:24.349 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:24.349 filename0: (groupid=0, jobs=1): err= 0: pid=2251733: Wed Oct 9 11:25:43 2024 00:45:24.349 read: IOPS=492, BW=1969KiB/s (2017kB/s)(19.3MiB/10013msec) 00:45:24.349 slat (nsec): min=5386, max=80276, avg=14135.88, stdev=11536.46 00:45:24.349 clat (usec): min=15840, max=47366, avg=32381.38, stdev=2310.90 00:45:24.349 lat (usec): min=15846, max=47372, avg=32395.52, stdev=2311.09 00:45:24.349 clat percentiles (usec): 00:45:24.349 | 1.00th=[21890], 5.00th=[31589], 10.00th=[31851], 20.00th=[32113], 00:45:24.349 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:45:24.349 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:45:24.349 | 99.00th=[37487], 99.50th=[43779], 99.90th=[46924], 99.95th=[47449], 00:45:24.349 | 99.99th=[47449] 00:45:24.349 bw ( KiB/s): min= 1792, max= 2192, per=4.13%, avg=1961.26, stdev=88.17, samples=19 00:45:24.349 iops : min= 448, max= 548, avg=490.32, stdev=22.04, samples=19 00:45:24.349 lat (msec) : 20=0.65%, 50=99.35% 00:45:24.349 cpu : usr=98.89%, sys=0.79%, ctx=16, majf=0, minf=31 00:45:24.349 IO depths : 1=5.9%, 2=11.9%, 4=24.1%, 8=51.5%, 16=6.6%, 32=0.0%, >=64=0.0% 00:45:24.349 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:24.349 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:24.349 issued rwts: total=4930,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:24.349 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:24.349 filename0: (groupid=0, jobs=1): err= 0: pid=2251734: Wed Oct 9 11:25:43 2024 00:45:24.349 read: IOPS=494, BW=1976KiB/s (2024kB/s)(19.3MiB/10006msec) 00:45:24.349 slat 
(usec): min=5, max=130, avg= 9.80, stdev= 7.73 00:45:24.349 clat (usec): min=13262, max=38112, avg=32296.84, stdev=2357.92 00:45:24.349 lat (usec): min=13291, max=38121, avg=32306.64, stdev=2356.06 00:45:24.349 clat percentiles (usec): 00:45:24.349 | 1.00th=[18220], 5.00th=[31589], 10.00th=[32113], 20.00th=[32113], 00:45:24.349 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:45:24.349 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:45:24.349 | 99.00th=[34341], 99.50th=[35390], 99.90th=[38011], 99.95th=[38011], 00:45:24.349 | 99.99th=[38011] 00:45:24.349 bw ( KiB/s): min= 1920, max= 2176, per=4.15%, avg=1973.89, stdev=77.69, samples=19 00:45:24.349 iops : min= 480, max= 544, avg=493.47, stdev=19.42, samples=19 00:45:24.349 lat (msec) : 20=1.62%, 50=98.38% 00:45:24.349 cpu : usr=99.00%, sys=0.67%, ctx=16, majf=0, minf=25 00:45:24.349 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:45:24.349 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:24.349 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:24.349 issued rwts: total=4944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:24.349 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:24.349 filename0: (groupid=0, jobs=1): err= 0: pid=2251735: Wed Oct 9 11:25:43 2024 00:45:24.349 read: IOPS=496, BW=1985KiB/s (2033kB/s)(19.5MiB/10047msec) 00:45:24.349 slat (nsec): min=5384, max=85100, avg=13120.74, stdev=11305.21 00:45:24.349 clat (usec): min=10286, max=69262, avg=32140.58, stdev=4594.88 00:45:24.349 lat (usec): min=10292, max=69282, avg=32153.70, stdev=4594.88 00:45:24.349 clat percentiles (usec): 00:45:24.349 | 1.00th=[21365], 5.00th=[24773], 10.00th=[26346], 20.00th=[28705], 00:45:24.349 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32900], 00:45:24.349 | 70.00th=[33162], 80.00th=[33817], 90.00th=[36963], 95.00th=[39060], 00:45:24.349 | 99.00th=[42206], 99.50th=[47449], 99.90th=[69731], 99.95th=[69731], 00:45:24.349 | 99.99th=[69731] 00:45:24.349 bw ( KiB/s): min= 1788, max= 2112, per=4.19%, avg=1989.40, stdev=72.75, samples=20 00:45:24.349 iops : min= 447, max= 528, avg=497.35, stdev=18.19, samples=20 00:45:24.349 lat (msec) : 20=0.52%, 50=99.00%, 100=0.48% 00:45:24.349 cpu : usr=98.86%, sys=0.83%, ctx=14, majf=0, minf=25 00:45:24.349 IO depths : 1=0.7%, 2=1.4%, 4=4.8%, 8=77.8%, 16=15.4%, 32=0.0%, >=64=0.0% 00:45:24.350 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:24.350 complete : 0=0.0%, 4=89.5%, 8=8.3%, 16=2.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:24.350 issued rwts: total=4986,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:24.350 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:24.350 filename0: (groupid=0, jobs=1): err= 0: pid=2251736: Wed Oct 9 11:25:43 2024 00:45:24.350 read: IOPS=494, BW=1976KiB/s (2023kB/s)(19.3MiB/10012msec) 00:45:24.350 slat (nsec): min=5379, max=71792, avg=10034.55, stdev=6897.30 00:45:24.350 clat (usec): min=11222, max=54205, avg=32307.62, stdev=3005.27 00:45:24.350 lat (usec): min=11235, max=54213, avg=32317.66, stdev=3005.47 00:45:24.350 clat percentiles (usec): 00:45:24.350 | 1.00th=[18744], 5.00th=[29492], 10.00th=[31851], 20.00th=[32113], 00:45:24.350 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:45:24.350 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[33817], 00:45:24.350 | 99.00th=[44303], 99.50th=[45351], 99.90th=[53740], 99.95th=[54264], 00:45:24.350 | 
99.99th=[54264] 00:45:24.350 bw ( KiB/s): min= 1920, max= 2144, per=4.16%, avg=1974.74, stdev=76.79, samples=19 00:45:24.350 iops : min= 480, max= 536, avg=493.68, stdev=19.20, samples=19 00:45:24.350 lat (msec) : 20=1.31%, 50=98.56%, 100=0.12% 00:45:24.350 cpu : usr=98.94%, sys=0.75%, ctx=14, majf=0, minf=37 00:45:24.350 IO depths : 1=5.5%, 2=11.0%, 4=23.1%, 8=53.4%, 16=7.1%, 32=0.0%, >=64=0.0% 00:45:24.350 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:24.350 complete : 0=0.0%, 4=93.6%, 8=0.6%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:24.350 issued rwts: total=4946,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:24.350 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:24.350 filename0: (groupid=0, jobs=1): err= 0: pid=2251737: Wed Oct 9 11:25:43 2024 00:45:24.350 read: IOPS=495, BW=1980KiB/s (2028kB/s)(19.4MiB/10008msec) 00:45:24.350 slat (nsec): min=5389, max=94087, avg=20594.63, stdev=16078.76 00:45:24.350 clat (usec): min=14322, max=47751, avg=32155.97, stdev=3039.37 00:45:24.350 lat (usec): min=14330, max=47777, avg=32176.56, stdev=3040.41 00:45:24.350 clat percentiles (usec): 00:45:24.350 | 1.00th=[20579], 5.00th=[25822], 10.00th=[29754], 20.00th=[31851], 00:45:24.350 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:45:24.350 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[34866], 00:45:24.350 | 99.00th=[41681], 99.50th=[42206], 99.90th=[47973], 99.95th=[47973], 00:45:24.350 | 99.99th=[47973] 00:45:24.350 bw ( KiB/s): min= 1848, max= 2160, per=4.16%, avg=1978.26, stdev=82.76, samples=19 00:45:24.350 iops : min= 462, max= 540, avg=494.53, stdev=20.72, samples=19 00:45:24.350 lat (msec) : 20=0.77%, 50=99.23% 00:45:24.350 cpu : usr=98.85%, sys=0.82%, ctx=20, majf=0, minf=27 00:45:24.350 IO depths : 1=4.3%, 2=8.7%, 4=18.4%, 8=59.3%, 16=9.2%, 32=0.0%, >=64=0.0% 00:45:24.350 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:24.350 complete : 0=0.0%, 4=92.5%, 8=2.7%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:24.350 issued rwts: total=4954,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:24.350 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:24.350 filename1: (groupid=0, jobs=1): err= 0: pid=2251738: Wed Oct 9 11:25:43 2024 00:45:24.350 read: IOPS=489, BW=1958KiB/s (2005kB/s)(19.1MiB/10006msec) 00:45:24.350 slat (usec): min=5, max=112, avg=23.88, stdev=17.21 00:45:24.350 clat (usec): min=8954, max=63172, avg=32443.73, stdev=2759.49 00:45:24.350 lat (usec): min=8963, max=63204, avg=32467.61, stdev=2759.51 00:45:24.350 clat percentiles (usec): 00:45:24.350 | 1.00th=[24249], 5.00th=[31589], 10.00th=[31851], 20.00th=[32113], 00:45:24.350 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32375], 60.00th=[32637], 00:45:24.350 | 70.00th=[32637], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:45:24.350 | 99.00th=[36963], 99.50th=[47973], 99.90th=[63177], 99.95th=[63177], 00:45:24.350 | 99.99th=[63177] 00:45:24.350 bw ( KiB/s): min= 1792, max= 2048, per=4.10%, avg=1947.79, stdev=66.15, samples=19 00:45:24.350 iops : min= 448, max= 512, avg=486.95, stdev=16.54, samples=19 00:45:24.350 lat (msec) : 10=0.33%, 20=0.33%, 50=99.02%, 100=0.33% 00:45:24.350 cpu : usr=98.60%, sys=0.87%, ctx=91, majf=0, minf=30 00:45:24.350 IO depths : 1=6.0%, 2=12.0%, 4=24.3%, 8=51.1%, 16=6.6%, 32=0.0%, >=64=0.0% 00:45:24.350 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:24.350 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:24.350 issued rwts: 
total=4898,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:24.350 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:24.350 filename1: (groupid=0, jobs=1): err= 0: pid=2251739: Wed Oct 9 11:25:43 2024 00:45:24.350 read: IOPS=553, BW=2213KiB/s (2266kB/s)(21.6MiB/10017msec) 00:45:24.350 slat (nsec): min=5393, max=89225, avg=10674.55, stdev=8477.48 00:45:24.350 clat (usec): min=2906, max=54897, avg=28847.67, stdev=6544.26 00:45:24.350 lat (usec): min=2924, max=54906, avg=28858.34, stdev=6545.30 00:45:24.350 clat percentiles (usec): 00:45:24.350 | 1.00th=[ 4948], 5.00th=[18482], 10.00th=[20841], 20.00th=[22938], 00:45:24.350 | 30.00th=[24249], 40.00th=[31851], 50.00th=[32113], 60.00th=[32375], 00:45:24.350 | 70.00th=[32375], 80.00th=[32637], 90.00th=[33424], 95.00th=[33817], 00:45:24.350 | 99.00th=[43779], 99.50th=[46924], 99.90th=[54789], 99.95th=[54789], 00:45:24.350 | 99.99th=[54789] 00:45:24.350 bw ( KiB/s): min= 1920, max= 2912, per=4.65%, avg=2210.40, stdev=313.76, samples=20 00:45:24.350 iops : min= 480, max= 728, avg=552.60, stdev=78.44, samples=20 00:45:24.350 lat (msec) : 4=0.87%, 10=0.69%, 20=7.07%, 50=91.23%, 100=0.14% 00:45:24.350 cpu : usr=99.30%, sys=0.40%, ctx=18, majf=0, minf=42 00:45:24.350 IO depths : 1=2.2%, 2=4.9%, 4=14.3%, 8=68.2%, 16=10.4%, 32=0.0%, >=64=0.0% 00:45:24.350 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:24.350 complete : 0=0.0%, 4=91.1%, 8=3.3%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:24.350 issued rwts: total=5542,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:24.350 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:24.350 filename1: (groupid=0, jobs=1): err= 0: pid=2251740: Wed Oct 9 11:25:43 2024 00:45:24.350 read: IOPS=514, BW=2056KiB/s (2106kB/s)(20.1MiB/10021msec) 00:45:24.350 slat (nsec): min=5405, max=96289, avg=10987.73, stdev=9851.84 00:45:24.350 clat (usec): min=13956, max=35655, avg=31026.75, stdev=3994.88 00:45:24.350 lat (usec): min=13969, max=35664, avg=31037.73, stdev=3995.42 00:45:24.350 clat percentiles (usec): 00:45:24.350 | 1.00th=[16581], 5.00th=[21365], 10.00th=[22938], 20.00th=[31851], 00:45:24.350 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32375], 60.00th=[32375], 00:45:24.350 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33424], 95.00th=[33817], 00:45:24.350 | 99.00th=[34341], 99.50th=[34866], 99.90th=[35390], 99.95th=[35390], 00:45:24.350 | 99.99th=[35914] 00:45:24.350 bw ( KiB/s): min= 1920, max= 2304, per=4.32%, avg=2054.40, stdev=120.90, samples=20 00:45:24.350 iops : min= 480, max= 576, avg=513.60, stdev=30.22, samples=20 00:45:24.350 lat (msec) : 20=2.43%, 50=97.57% 00:45:24.350 cpu : usr=98.89%, sys=0.75%, ctx=25, majf=0, minf=99 00:45:24.350 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:45:24.350 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:24.350 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:24.350 issued rwts: total=5152,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:24.350 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:24.350 filename1: (groupid=0, jobs=1): err= 0: pid=2251741: Wed Oct 9 11:25:43 2024 00:45:24.350 read: IOPS=489, BW=1956KiB/s (2003kB/s)(19.1MiB/10012msec) 00:45:24.350 slat (nsec): min=5616, max=81471, avg=16599.59, stdev=11851.15 00:45:24.350 clat (usec): min=20412, max=45971, avg=32569.06, stdev=1535.04 00:45:24.350 lat (usec): min=20420, max=45993, avg=32585.66, stdev=1534.73 00:45:24.350 clat percentiles (usec): 00:45:24.350 | 
1.00th=[30016], 5.00th=[31851], 10.00th=[31851], 20.00th=[32113], 00:45:24.350 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:45:24.350 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:45:24.350 | 99.00th=[34866], 99.50th=[43779], 99.90th=[45351], 99.95th=[45351], 00:45:24.350 | 99.99th=[45876] 00:45:24.350 bw ( KiB/s): min= 1920, max= 2048, per=4.11%, avg=1953.84, stdev=57.82, samples=19 00:45:24.350 iops : min= 480, max= 512, avg=488.42, stdev=14.48, samples=19 00:45:24.350 lat (msec) : 50=100.00% 00:45:24.350 cpu : usr=98.57%, sys=0.90%, ctx=88, majf=0, minf=30 00:45:24.350 IO depths : 1=6.1%, 2=12.3%, 4=24.7%, 8=50.5%, 16=6.4%, 32=0.0%, >=64=0.0% 00:45:24.350 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:24.350 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:24.350 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:24.350 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:24.350 filename1: (groupid=0, jobs=1): err= 0: pid=2251742: Wed Oct 9 11:25:43 2024 00:45:24.350 read: IOPS=487, BW=1951KiB/s (1998kB/s)(19.1MiB/10004msec) 00:45:24.350 slat (nsec): min=5663, max=92712, avg=21843.42, stdev=15245.90 00:45:24.350 clat (usec): min=9977, max=63625, avg=32594.45, stdev=2587.11 00:45:24.350 lat (usec): min=9983, max=63643, avg=32616.29, stdev=2587.05 00:45:24.350 clat percentiles (usec): 00:45:24.350 | 1.00th=[21890], 5.00th=[31851], 10.00th=[31851], 20.00th=[32113], 00:45:24.350 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:45:24.350 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[33817], 00:45:24.350 | 99.00th=[42730], 99.50th=[44827], 99.90th=[63701], 99.95th=[63701], 00:45:24.350 | 99.99th=[63701] 00:45:24.350 bw ( KiB/s): min= 1795, max= 2048, per=4.08%, avg=1940.37, stdev=63.80, samples=19 00:45:24.350 iops : min= 448, max= 512, avg=485.05, stdev=16.05, samples=19 00:45:24.350 lat (msec) : 10=0.08%, 20=0.57%, 50=99.02%, 100=0.33% 00:45:24.350 cpu : usr=99.11%, sys=0.54%, ctx=53, majf=0, minf=26 00:45:24.350 IO depths : 1=6.1%, 2=12.3%, 4=24.8%, 8=50.4%, 16=6.4%, 32=0.0%, >=64=0.0% 00:45:24.350 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:24.350 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:24.350 issued rwts: total=4880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:24.350 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:24.350 filename1: (groupid=0, jobs=1): err= 0: pid=2251743: Wed Oct 9 11:25:43 2024 00:45:24.350 read: IOPS=489, BW=1956KiB/s (2003kB/s)(19.1MiB/10012msec) 00:45:24.350 slat (nsec): min=5568, max=93710, avg=22016.06, stdev=15549.27 00:45:24.350 clat (usec): min=15060, max=50079, avg=32509.46, stdev=1507.55 00:45:24.350 lat (usec): min=15069, max=50088, avg=32531.47, stdev=1507.12 00:45:24.350 clat percentiles (usec): 00:45:24.350 | 1.00th=[29230], 5.00th=[31851], 10.00th=[31851], 20.00th=[32113], 00:45:24.350 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:45:24.350 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:45:24.350 | 99.00th=[34866], 99.50th=[35390], 99.90th=[47973], 99.95th=[47973], 00:45:24.350 | 99.99th=[50070] 00:45:24.350 bw ( KiB/s): min= 1920, max= 2048, per=4.11%, avg=1953.68, stdev=57.91, samples=19 00:45:24.350 iops : min= 480, max= 512, avg=488.42, stdev=14.48, samples=19 00:45:24.350 lat (msec) : 20=0.33%, 50=99.63%, 100=0.04% 00:45:24.350 cpu 
: usr=98.64%, sys=0.89%, ctx=66, majf=0, minf=30 00:45:24.350 IO depths : 1=6.1%, 2=12.3%, 4=24.8%, 8=50.4%, 16=6.4%, 32=0.0%, >=64=0.0% 00:45:24.351 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:24.351 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:24.351 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:24.351 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:24.351 filename1: (groupid=0, jobs=1): err= 0: pid=2251744: Wed Oct 9 11:25:43 2024 00:45:24.351 read: IOPS=489, BW=1957KiB/s (2004kB/s)(19.1MiB/10006msec) 00:45:24.351 slat (usec): min=5, max=106, avg=22.91, stdev=15.12 00:45:24.351 clat (usec): min=6413, max=63290, avg=32477.47, stdev=2746.25 00:45:24.351 lat (usec): min=6419, max=63309, avg=32500.38, stdev=2746.79 00:45:24.351 clat percentiles (usec): 00:45:24.351 | 1.00th=[31065], 5.00th=[31851], 10.00th=[31851], 20.00th=[32113], 00:45:24.351 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32375], 60.00th=[32637], 00:45:24.351 | 70.00th=[32637], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:45:24.351 | 99.00th=[35390], 99.50th=[37487], 99.90th=[63177], 99.95th=[63177], 00:45:24.351 | 99.99th=[63177] 00:45:24.351 bw ( KiB/s): min= 1792, max= 2048, per=4.10%, avg=1946.95, stdev=68.52, samples=19 00:45:24.351 iops : min= 448, max= 512, avg=486.74, stdev=17.13, samples=19 00:45:24.351 lat (msec) : 10=0.37%, 20=0.57%, 50=98.73%, 100=0.33% 00:45:24.351 cpu : usr=99.02%, sys=0.66%, ctx=28, majf=0, minf=26 00:45:24.351 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:45:24.351 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:24.351 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:24.351 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:24.351 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:24.351 filename1: (groupid=0, jobs=1): err= 0: pid=2251745: Wed Oct 9 11:25:43 2024 00:45:24.351 read: IOPS=490, BW=1964KiB/s (2011kB/s)(19.2MiB/10006msec) 00:45:24.351 slat (nsec): min=5484, max=97211, avg=24052.92, stdev=15591.44 00:45:24.351 clat (usec): min=13143, max=35590, avg=32375.11, stdev=1743.39 00:45:24.351 lat (usec): min=13160, max=35598, avg=32399.17, stdev=1742.19 00:45:24.351 clat percentiles (usec): 00:45:24.351 | 1.00th=[25035], 5.00th=[31589], 10.00th=[31851], 20.00th=[32113], 00:45:24.351 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32375], 60.00th=[32637], 00:45:24.351 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:45:24.351 | 99.00th=[34341], 99.50th=[35390], 99.90th=[35390], 99.95th=[35390], 00:45:24.351 | 99.99th=[35390] 00:45:24.351 bw ( KiB/s): min= 1920, max= 2048, per=4.13%, avg=1960.42, stdev=61.13, samples=19 00:45:24.351 iops : min= 480, max= 512, avg=490.11, stdev=15.28, samples=19 00:45:24.351 lat (msec) : 20=0.75%, 50=99.25% 00:45:24.351 cpu : usr=99.01%, sys=0.60%, ctx=63, majf=0, minf=29 00:45:24.351 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:45:24.351 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:24.351 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:24.351 issued rwts: total=4912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:24.351 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:24.351 filename2: (groupid=0, jobs=1): err= 0: pid=2251746: Wed Oct 9 11:25:43 2024 00:45:24.351 read: IOPS=491, BW=1965KiB/s 
(2012kB/s)(19.2MiB/10012msec) 00:45:24.351 slat (nsec): min=5403, max=87564, avg=17596.61, stdev=13138.85 00:45:24.351 clat (usec): min=17068, max=52416, avg=32415.19, stdev=1907.02 00:45:24.351 lat (usec): min=17076, max=52468, avg=32432.78, stdev=1907.53 00:45:24.351 clat percentiles (usec): 00:45:24.351 | 1.00th=[22414], 5.00th=[31589], 10.00th=[31851], 20.00th=[32113], 00:45:24.351 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:45:24.351 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:45:24.351 | 99.00th=[34866], 99.50th=[34866], 99.90th=[52167], 99.95th=[52167], 00:45:24.351 | 99.99th=[52167] 00:45:24.351 bw ( KiB/s): min= 1920, max= 2096, per=4.13%, avg=1962.95, stdev=65.77, samples=19 00:45:24.351 iops : min= 480, max= 524, avg=490.74, stdev=16.44, samples=19 00:45:24.351 lat (msec) : 20=0.94%, 50=98.90%, 100=0.16% 00:45:24.351 cpu : usr=98.85%, sys=0.75%, ctx=25, majf=0, minf=27 00:45:24.351 IO depths : 1=6.1%, 2=12.2%, 4=24.7%, 8=50.5%, 16=6.4%, 32=0.0%, >=64=0.0% 00:45:24.351 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:24.351 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:24.351 issued rwts: total=4918,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:24.351 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:24.351 filename2: (groupid=0, jobs=1): err= 0: pid=2251747: Wed Oct 9 11:25:43 2024 00:45:24.351 read: IOPS=492, BW=1970KiB/s (2017kB/s)(19.2MiB/10006msec) 00:45:24.351 slat (nsec): min=5556, max=84108, avg=23588.56, stdev=15738.64 00:45:24.351 clat (usec): min=7315, max=35482, avg=32292.32, stdev=2324.57 00:45:24.351 lat (usec): min=7339, max=35491, avg=32315.91, stdev=2324.23 00:45:24.351 clat percentiles (usec): 00:45:24.351 | 1.00th=[16450], 5.00th=[31589], 10.00th=[31851], 20.00th=[32113], 00:45:24.351 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:45:24.351 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:45:24.351 | 99.00th=[34866], 99.50th=[35390], 99.90th=[35390], 99.95th=[35390], 00:45:24.351 | 99.99th=[35390] 00:45:24.351 bw ( KiB/s): min= 1920, max= 2176, per=4.14%, avg=1967.16, stdev=76.45, samples=19 00:45:24.351 iops : min= 480, max= 544, avg=491.79, stdev=19.11, samples=19 00:45:24.351 lat (msec) : 10=0.20%, 20=1.10%, 50=98.70% 00:45:24.351 cpu : usr=98.46%, sys=0.94%, ctx=73, majf=0, minf=27 00:45:24.351 IO depths : 1=6.1%, 2=12.3%, 4=24.7%, 8=50.5%, 16=6.4%, 32=0.0%, >=64=0.0% 00:45:24.351 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:24.351 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:24.351 issued rwts: total=4928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:24.351 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:24.351 filename2: (groupid=0, jobs=1): err= 0: pid=2251748: Wed Oct 9 11:25:43 2024 00:45:24.351 read: IOPS=488, BW=1955KiB/s (2002kB/s)(19.1MiB/10019msec) 00:45:24.351 slat (nsec): min=5564, max=77707, avg=21843.86, stdev=12789.48 00:45:24.351 clat (usec): min=19459, max=44471, avg=32537.77, stdev=1272.15 00:45:24.351 lat (usec): min=19467, max=44503, avg=32559.62, stdev=1272.31 00:45:24.351 clat percentiles (usec): 00:45:24.351 | 1.00th=[31327], 5.00th=[31851], 10.00th=[31851], 20.00th=[32113], 00:45:24.351 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32375], 60.00th=[32637], 00:45:24.351 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:45:24.351 | 99.00th=[34866], 
99.50th=[35390], 99.90th=[44303], 99.95th=[44303], 00:45:24.351 | 99.99th=[44303] 00:45:24.351 bw ( KiB/s): min= 1920, max= 2048, per=4.11%, avg=1953.68, stdev=57.91, samples=19 00:45:24.351 iops : min= 480, max= 512, avg=488.42, stdev=14.48, samples=19 00:45:24.351 lat (msec) : 20=0.33%, 50=99.67% 00:45:24.351 cpu : usr=99.05%, sys=0.63%, ctx=17, majf=0, minf=31 00:45:24.351 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:45:24.351 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:24.351 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:24.351 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:24.351 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:24.351 filename2: (groupid=0, jobs=1): err= 0: pid=2251749: Wed Oct 9 11:25:43 2024 00:45:24.351 read: IOPS=490, BW=1964KiB/s (2011kB/s)(19.2MiB/10006msec) 00:45:24.351 slat (usec): min=5, max=134, avg=11.72, stdev= 9.25 00:45:24.351 clat (usec): min=13183, max=35567, avg=32496.03, stdev=1753.10 00:45:24.351 lat (usec): min=13196, max=35573, avg=32507.75, stdev=1750.96 00:45:24.351 clat percentiles (usec): 00:45:24.351 | 1.00th=[25297], 5.00th=[31851], 10.00th=[32113], 20.00th=[32113], 00:45:24.351 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:45:24.351 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[33817], 00:45:24.351 | 99.00th=[34341], 99.50th=[35390], 99.90th=[35390], 99.95th=[35390], 00:45:24.351 | 99.99th=[35390] 00:45:24.351 bw ( KiB/s): min= 1920, max= 2048, per=4.13%, avg=1960.42, stdev=61.13, samples=19 00:45:24.351 iops : min= 480, max= 512, avg=490.11, stdev=15.28, samples=19 00:45:24.351 lat (msec) : 20=0.98%, 50=99.02% 00:45:24.351 cpu : usr=98.93%, sys=0.75%, ctx=16, majf=0, minf=24 00:45:24.351 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:45:24.351 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:24.351 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:24.351 issued rwts: total=4912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:24.351 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:24.351 filename2: (groupid=0, jobs=1): err= 0: pid=2251750: Wed Oct 9 11:25:43 2024 00:45:24.351 read: IOPS=488, BW=1954KiB/s (2001kB/s)(19.1MiB/10008msec) 00:45:24.351 slat (nsec): min=5385, max=74207, avg=15816.19, stdev=11321.39 00:45:24.351 clat (usec): min=8932, max=59007, avg=32641.85, stdev=3262.50 00:45:24.351 lat (usec): min=8940, max=59015, avg=32657.67, stdev=3262.27 00:45:24.351 clat percentiles (usec): 00:45:24.351 | 1.00th=[21890], 5.00th=[31065], 10.00th=[31851], 20.00th=[32113], 00:45:24.351 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:45:24.351 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[34341], 00:45:24.351 | 99.00th=[44827], 99.50th=[47449], 99.90th=[58983], 99.95th=[58983], 00:45:24.351 | 99.99th=[58983] 00:45:24.351 bw ( KiB/s): min= 1792, max= 2064, per=4.09%, avg=1943.58, stdev=73.78, samples=19 00:45:24.351 iops : min= 448, max= 516, avg=485.89, stdev=18.44, samples=19 00:45:24.351 lat (msec) : 10=0.33%, 20=0.33%, 50=98.90%, 100=0.45% 00:45:24.351 cpu : usr=99.11%, sys=0.56%, ctx=15, majf=0, minf=34 00:45:24.351 IO depths : 1=4.5%, 2=9.2%, 4=20.4%, 8=57.0%, 16=8.9%, 32=0.0%, >=64=0.0% 00:45:24.351 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:24.351 complete : 0=0.0%, 4=93.3%, 8=1.5%, 
16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:24.351 issued rwts: total=4888,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:24.351 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:24.351 filename2: (groupid=0, jobs=1): err= 0: pid=2251751: Wed Oct 9 11:25:43 2024 00:45:24.351 read: IOPS=488, BW=1955KiB/s (2002kB/s)(19.1MiB/10016msec) 00:45:24.351 slat (nsec): min=5402, max=76135, avg=18596.68, stdev=12151.07 00:45:24.351 clat (usec): min=15597, max=63990, avg=32574.92, stdev=1931.82 00:45:24.351 lat (usec): min=15612, max=64013, avg=32593.51, stdev=1931.43 00:45:24.351 clat percentiles (usec): 00:45:24.351 | 1.00th=[24511], 5.00th=[31851], 10.00th=[31851], 20.00th=[32113], 00:45:24.351 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:45:24.351 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:45:24.351 | 99.00th=[35914], 99.50th=[42206], 99.90th=[49021], 99.95th=[63701], 00:45:24.351 | 99.99th=[64226] 00:45:24.351 bw ( KiB/s): min= 1792, max= 2048, per=4.11%, avg=1952.00, stdev=70.42, samples=20 00:45:24.351 iops : min= 448, max= 512, avg=488.00, stdev=17.60, samples=20 00:45:24.351 lat (msec) : 20=0.33%, 50=99.61%, 100=0.06% 00:45:24.351 cpu : usr=98.93%, sys=0.74%, ctx=17, majf=0, minf=25 00:45:24.351 IO depths : 1=5.9%, 2=12.0%, 4=24.4%, 8=51.0%, 16=6.6%, 32=0.0%, >=64=0.0% 00:45:24.351 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:24.352 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:24.352 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:24.352 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:24.352 filename2: (groupid=0, jobs=1): err= 0: pid=2251752: Wed Oct 9 11:25:43 2024 00:45:24.352 read: IOPS=490, BW=1962KiB/s (2009kB/s)(19.2MiB/10014msec) 00:45:24.352 slat (nsec): min=5395, max=78895, avg=11770.68, stdev=11426.58 00:45:24.352 clat (usec): min=15975, max=43962, avg=32521.17, stdev=1600.06 00:45:24.352 lat (usec): min=15983, max=43968, avg=32532.94, stdev=1599.04 00:45:24.352 clat percentiles (usec): 00:45:24.352 | 1.00th=[23200], 5.00th=[31851], 10.00th=[31851], 20.00th=[32113], 00:45:24.352 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:45:24.352 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[33817], 00:45:24.352 | 99.00th=[34866], 99.50th=[34866], 99.90th=[34866], 99.95th=[37487], 00:45:24.352 | 99.99th=[43779] 00:45:24.352 bw ( KiB/s): min= 1920, max= 2052, per=4.12%, avg=1958.60, stdev=58.92, samples=20 00:45:24.352 iops : min= 480, max= 513, avg=489.65, stdev=14.73, samples=20 00:45:24.352 lat (msec) : 20=0.69%, 50=99.31% 00:45:24.352 cpu : usr=99.02%, sys=0.66%, ctx=16, majf=0, minf=27 00:45:24.352 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:45:24.352 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:24.352 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:24.352 issued rwts: total=4912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:24.352 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:24.352 filename2: (groupid=0, jobs=1): err= 0: pid=2251753: Wed Oct 9 11:25:43 2024 00:45:24.352 read: IOPS=489, BW=1958KiB/s (2005kB/s)(19.1MiB/10006msec) 00:45:24.352 slat (nsec): min=5388, max=96754, avg=20751.81, stdev=15736.15 00:45:24.352 clat (usec): min=6390, max=63164, avg=32514.74, stdev=3041.51 00:45:24.352 lat (usec): min=6398, max=63191, avg=32535.49, stdev=3041.75 00:45:24.352 clat 
percentiles (usec): 00:45:24.352 | 1.00th=[22676], 5.00th=[31589], 10.00th=[31851], 20.00th=[32113], 00:45:24.352 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:45:24.352 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[33817], 00:45:24.352 | 99.00th=[41157], 99.50th=[49021], 99.90th=[63177], 99.95th=[63177], 00:45:24.352 | 99.99th=[63177] 00:45:24.352 bw ( KiB/s): min= 1792, max= 2048, per=4.10%, avg=1947.79, stdev=64.19, samples=19 00:45:24.352 iops : min= 448, max= 512, avg=486.95, stdev=16.05, samples=19 00:45:24.352 lat (msec) : 10=0.20%, 20=0.45%, 50=98.98%, 100=0.37% 00:45:24.352 cpu : usr=98.97%, sys=0.71%, ctx=15, majf=0, minf=28 00:45:24.352 IO depths : 1=4.0%, 2=8.1%, 4=16.7%, 8=60.8%, 16=10.4%, 32=0.0%, >=64=0.0% 00:45:24.352 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:24.352 complete : 0=0.0%, 4=92.5%, 8=3.5%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:24.352 issued rwts: total=4898,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:24.352 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:24.352 00:45:24.352 Run status group 0 (all jobs): 00:45:24.352 READ: bw=46.4MiB/s (48.6MB/s), 1950KiB/s-2213KiB/s (1997kB/s-2266kB/s), io=466MiB (489MB), run=10004-10047msec 00:45:24.352 11:25:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:45:24.352 11:25:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:45:24.352 11:25:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:45:24.352 11:25:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:45:24.352 11:25:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:45:24.352 11:25:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:45:24.352 11:25:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:24.352 11:25:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:24.352 11:25:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:24.352 11:25:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:45:24.352 11:25:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:24.352 11:25:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:24.352 11:25:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:24.352 11:25:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:45:24.352 11:25:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:45:24.352 11:25:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:45:24.352 11:25:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:45:24.352 11:25:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:24.352 11:25:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:24.352 11:25:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:24.352 11:25:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:45:24.352 11:25:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:24.352 11:25:43 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:24.352 11:25:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:24.352 11:25:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:45:24.352 11:25:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:45:24.352 11:25:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:45:24.352 11:25:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:45:24.352 11:25:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:24.352 11:25:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:24.352 11:25:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:24.352 11:25:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:45:24.352 11:25:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:24.352 11:25:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:24.352 11:25:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:24.352 11:25:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:45:24.352 11:25:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:45:24.352 11:25:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:45:24.352 11:25:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:45:24.352 11:25:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:45:24.352 11:25:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:45:24.352 11:25:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:45:24.352 11:25:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:45:24.352 11:25:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:45:24.352 11:25:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:45:24.352 11:25:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:45:24.352 11:25:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:45:24.352 11:25:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:24.352 11:25:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:24.352 bdev_null0 00:45:24.352 11:25:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:24.352 11:25:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:45:24.352 11:25:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:24.352 11:25:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:24.352 11:25:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:24.352 11:25:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:45:24.352 11:25:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:24.352 11:25:43 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:45:24.352 11:25:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:24.352 11:25:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:45:24.352 11:25:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:24.352 11:25:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:24.352 [2024-10-09 11:25:43.270659] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:24.352 11:25:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:24.352 11:25:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:45:24.352 11:25:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:45:24.352 11:25:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:45:24.352 11:25:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:45:24.352 11:25:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:24.352 11:25:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:24.352 bdev_null1 00:45:24.352 11:25:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:24.352 11:25:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:45:24.352 11:25:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:24.352 11:25:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:24.352 11:25:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:24.352 11:25:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:45:24.352 11:25:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:24.352 11:25:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:24.352 11:25:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:24.352 11:25:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:45:24.352 11:25:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:24.352 11:25:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:24.352 11:25:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:24.352 11:25:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:45:24.352 11:25:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:45:24.352 11:25:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:45:24.352 11:25:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:45:24.352 11:25:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:24.352 11:25:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:45:24.352 11:25:43 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:24.352 11:25:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:45:24.353 11:25:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:45:24.353 { 00:45:24.353 "params": { 00:45:24.353 "name": "Nvme$subsystem", 00:45:24.353 "trtype": "$TEST_TRANSPORT", 00:45:24.353 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:24.353 "adrfam": "ipv4", 00:45:24.353 "trsvcid": "$NVMF_PORT", 00:45:24.353 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:24.353 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:24.353 "hdgst": ${hdgst:-false}, 00:45:24.353 "ddgst": ${ddgst:-false} 00:45:24.353 }, 00:45:24.353 "method": "bdev_nvme_attach_controller" 00:45:24.353 } 00:45:24.353 EOF 00:45:24.353 )") 00:45:24.353 11:25:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:45:24.353 11:25:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:45:24.353 11:25:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:45:24.353 11:25:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:45:24.353 11:25:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:45:24.353 11:25:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:45:24.353 11:25:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:24.353 11:25:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:45:24.353 11:25:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:45:24.353 11:25:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:45:24.353 11:25:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:45:24.353 11:25:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:24.353 11:25:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:45:24.353 11:25:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:45:24.353 11:25:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:45:24.353 11:25:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:45:24.353 11:25:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:45:24.353 11:25:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:45:24.353 11:25:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:45:24.353 { 00:45:24.353 "params": { 00:45:24.353 "name": "Nvme$subsystem", 00:45:24.353 "trtype": "$TEST_TRANSPORT", 00:45:24.353 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:24.353 "adrfam": "ipv4", 00:45:24.353 "trsvcid": "$NVMF_PORT", 00:45:24.353 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:24.353 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:24.353 "hdgst": ${hdgst:-false}, 00:45:24.353 "ddgst": ${ddgst:-false} 00:45:24.353 }, 00:45:24.353 "method": "bdev_nvme_attach_controller" 00:45:24.353 } 00:45:24.353 EOF 
00:45:24.353 )") 00:45:24.353 11:25:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:45:24.353 11:25:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:45:24.353 11:25:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:45:24.353 11:25:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 00:45:24.353 11:25:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:45:24.353 11:25:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:45:24.353 "params": { 00:45:24.353 "name": "Nvme0", 00:45:24.353 "trtype": "tcp", 00:45:24.353 "traddr": "10.0.0.2", 00:45:24.353 "adrfam": "ipv4", 00:45:24.353 "trsvcid": "4420", 00:45:24.353 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:24.353 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:24.353 "hdgst": false, 00:45:24.353 "ddgst": false 00:45:24.353 }, 00:45:24.353 "method": "bdev_nvme_attach_controller" 00:45:24.353 },{ 00:45:24.353 "params": { 00:45:24.353 "name": "Nvme1", 00:45:24.353 "trtype": "tcp", 00:45:24.353 "traddr": "10.0.0.2", 00:45:24.353 "adrfam": "ipv4", 00:45:24.353 "trsvcid": "4420", 00:45:24.353 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:45:24.353 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:45:24.353 "hdgst": false, 00:45:24.353 "ddgst": false 00:45:24.353 }, 00:45:24.353 "method": "bdev_nvme_attach_controller" 00:45:24.353 }' 00:45:24.353 11:25:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:45:24.353 11:25:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:45:24.353 11:25:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:45:24.353 11:25:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:24.353 11:25:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:45:24.353 11:25:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:45:24.353 11:25:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:45:24.353 11:25:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:45:24.353 11:25:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:45:24.353 11:25:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:24.353 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:45:24.353 ... 00:45:24.353 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:45:24.353 ... 
00:45:24.353 fio-3.35 00:45:24.353 Starting 4 threads 00:45:29.639 00:45:29.639 filename0: (groupid=0, jobs=1): err= 0: pid=2254021: Wed Oct 9 11:25:49 2024 00:45:29.639 read: IOPS=2029, BW=15.9MiB/s (16.6MB/s)(79.4MiB/5005msec) 00:45:29.639 slat (nsec): min=5389, max=38789, avg=7942.17, stdev=2461.25 00:45:29.639 clat (usec): min=783, max=6336, avg=3923.68, stdev=404.50 00:45:29.639 lat (usec): min=802, max=6342, avg=3931.63, stdev=404.33 00:45:29.639 clat percentiles (usec): 00:45:29.639 | 1.00th=[ 3097], 5.00th=[ 3458], 10.00th=[ 3556], 20.00th=[ 3654], 00:45:29.639 | 30.00th=[ 3785], 40.00th=[ 3818], 50.00th=[ 3818], 60.00th=[ 3884], 00:45:29.639 | 70.00th=[ 4047], 80.00th=[ 4146], 90.00th=[ 4359], 95.00th=[ 4555], 00:45:29.639 | 99.00th=[ 5800], 99.50th=[ 5997], 99.90th=[ 6063], 99.95th=[ 6128], 00:45:29.639 | 99.99th=[ 6325] 00:45:29.639 bw ( KiB/s): min=15520, max=16672, per=24.21%, avg=16243.20, stdev=288.28, samples=10 00:45:29.639 iops : min= 1940, max= 2084, avg=2030.40, stdev=36.03, samples=10 00:45:29.639 lat (usec) : 1000=0.01% 00:45:29.639 lat (msec) : 2=0.02%, 4=67.52%, 10=32.45% 00:45:29.639 cpu : usr=96.70%, sys=3.08%, ctx=7, majf=0, minf=0 00:45:29.639 IO depths : 1=0.1%, 2=0.1%, 4=64.7%, 8=35.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:29.639 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:29.639 complete : 0=0.0%, 4=98.4%, 8=1.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:29.639 issued rwts: total=10157,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:29.639 latency : target=0, window=0, percentile=100.00%, depth=8 00:45:29.639 filename0: (groupid=0, jobs=1): err= 0: pid=2254022: Wed Oct 9 11:25:49 2024 00:45:29.639 read: IOPS=1924, BW=15.0MiB/s (15.8MB/s)(75.2MiB/5002msec) 00:45:29.639 slat (nsec): min=5386, max=26017, avg=6220.12, stdev=1855.05 00:45:29.639 clat (usec): min=956, max=6573, avg=4139.36, stdev=710.47 00:45:29.639 lat (usec): min=962, max=6578, avg=4145.58, stdev=710.34 00:45:29.639 clat percentiles (usec): 00:45:29.639 | 1.00th=[ 3228], 5.00th=[ 3523], 10.00th=[ 3654], 20.00th=[ 3720], 00:45:29.639 | 30.00th=[ 3785], 40.00th=[ 3818], 50.00th=[ 3851], 60.00th=[ 4015], 00:45:29.639 | 70.00th=[ 4146], 80.00th=[ 4359], 90.00th=[ 5538], 95.00th=[ 5997], 00:45:29.639 | 99.00th=[ 6128], 99.50th=[ 6259], 99.90th=[ 6325], 99.95th=[ 6456], 00:45:29.639 | 99.99th=[ 6587] 00:45:29.639 bw ( KiB/s): min=15184, max=15568, per=22.89%, avg=15360.00, stdev=140.17, samples=9 00:45:29.639 iops : min= 1898, max= 1946, avg=1920.00, stdev=17.52, samples=9 00:45:29.639 lat (usec) : 1000=0.02% 00:45:29.639 lat (msec) : 2=0.15%, 4=59.04%, 10=40.80% 00:45:29.639 cpu : usr=94.72%, sys=3.82%, ctx=195, majf=0, minf=0 00:45:29.639 IO depths : 1=0.1%, 2=0.1%, 4=73.3%, 8=26.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:29.639 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:29.639 complete : 0=0.0%, 4=92.1%, 8=7.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:29.639 issued rwts: total=9626,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:29.639 latency : target=0, window=0, percentile=100.00%, depth=8 00:45:29.639 filename1: (groupid=0, jobs=1): err= 0: pid=2254023: Wed Oct 9 11:25:49 2024 00:45:29.639 read: IOPS=2020, BW=15.8MiB/s (16.5MB/s)(78.9MiB/5002msec) 00:45:29.639 slat (nsec): min=5384, max=34337, avg=8129.25, stdev=2382.94 00:45:29.639 clat (usec): min=1585, max=7809, avg=3936.97, stdev=424.40 00:45:29.639 lat (usec): min=1590, max=7838, avg=3945.10, stdev=424.26 00:45:29.639 clat percentiles (usec): 00:45:29.639 | 1.00th=[ 3228], 5.00th=[ 
3458], 10.00th=[ 3556], 20.00th=[ 3654], 00:45:29.639 | 30.00th=[ 3785], 40.00th=[ 3785], 50.00th=[ 3818], 60.00th=[ 3916], 00:45:29.639 | 70.00th=[ 4080], 80.00th=[ 4146], 90.00th=[ 4359], 95.00th=[ 4555], 00:45:29.639 | 99.00th=[ 5800], 99.50th=[ 5997], 99.90th=[ 6390], 99.95th=[ 7767], 00:45:29.639 | 99.99th=[ 7767] 00:45:29.639 bw ( KiB/s): min=15744, max=16352, per=24.11%, avg=16177.78, stdev=181.10, samples=9 00:45:29.639 iops : min= 1968, max= 2044, avg=2022.22, stdev=22.64, samples=9 00:45:29.639 lat (msec) : 2=0.06%, 4=66.01%, 10=33.93% 00:45:29.639 cpu : usr=97.26%, sys=2.52%, ctx=6, majf=0, minf=2 00:45:29.639 IO depths : 1=0.1%, 2=0.1%, 4=73.7%, 8=26.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:29.639 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:29.639 complete : 0=0.0%, 4=91.2%, 8=8.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:29.639 issued rwts: total=10105,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:29.639 latency : target=0, window=0, percentile=100.00%, depth=8 00:45:29.639 filename1: (groupid=0, jobs=1): err= 0: pid=2254024: Wed Oct 9 11:25:49 2024 00:45:29.639 read: IOPS=2416, BW=18.9MiB/s (19.8MB/s)(94.4MiB/5003msec) 00:45:29.639 slat (nsec): min=5379, max=35890, avg=8053.74, stdev=2115.95 00:45:29.639 clat (usec): min=1702, max=5633, avg=3287.68, stdev=493.33 00:45:29.639 lat (usec): min=1710, max=5641, avg=3295.74, stdev=493.40 00:45:29.639 clat percentiles (usec): 00:45:29.639 | 1.00th=[ 2245], 5.00th=[ 2638], 10.00th=[ 2769], 20.00th=[ 2868], 00:45:29.639 | 30.00th=[ 2999], 40.00th=[ 3097], 50.00th=[ 3195], 60.00th=[ 3359], 00:45:29.639 | 70.00th=[ 3556], 80.00th=[ 3752], 90.00th=[ 3818], 95.00th=[ 3949], 00:45:29.639 | 99.00th=[ 4883], 99.50th=[ 5080], 99.90th=[ 5211], 99.95th=[ 5407], 00:45:29.639 | 99.99th=[ 5604] 00:45:29.639 bw ( KiB/s): min=18384, max=19616, per=28.71%, avg=19260.44, stdev=391.90, samples=9 00:45:29.639 iops : min= 2298, max= 2452, avg=2407.56, stdev=48.99, samples=9 00:45:29.639 lat (msec) : 2=0.39%, 4=94.90%, 10=4.71% 00:45:29.639 cpu : usr=97.10%, sys=2.64%, ctx=10, majf=0, minf=9 00:45:29.639 IO depths : 1=0.2%, 2=6.7%, 4=62.4%, 8=30.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:29.639 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:29.639 complete : 0=0.0%, 4=94.9%, 8=5.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:29.639 issued rwts: total=12089,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:29.639 latency : target=0, window=0, percentile=100.00%, depth=8 00:45:29.639 00:45:29.639 Run status group 0 (all jobs): 00:45:29.639 READ: bw=65.5MiB/s (68.7MB/s), 15.0MiB/s-18.9MiB/s (15.8MB/s-19.8MB/s), io=328MiB (344MB), run=5002-5005msec 00:45:29.900 11:25:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:45:29.900 11:25:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:45:29.900 11:25:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:45:29.900 11:25:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:45:29.900 11:25:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:45:29.900 11:25:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:45:29.900 11:25:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:29.900 11:25:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:29.900 11:25:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:45:29.900 11:25:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:45:29.900 11:25:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:29.900 11:25:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:29.900 11:25:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:29.900 11:25:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:45:29.900 11:25:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:45:29.900 11:25:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:45:29.900 11:25:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:45:29.900 11:25:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:29.900 11:25:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:29.900 11:25:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:29.900 11:25:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:45:29.900 11:25:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:29.900 11:25:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:29.900 11:25:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:29.900 00:45:29.900 real 0m24.802s 00:45:29.900 user 5m11.962s 00:45:29.900 sys 0m4.236s 00:45:29.900 11:25:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:45:29.900 11:25:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:29.900 ************************************ 00:45:29.900 END TEST fio_dif_rand_params 00:45:29.900 ************************************ 00:45:29.900 11:25:49 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:45:29.900 11:25:49 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:45:29.900 11:25:49 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:45:29.900 11:25:49 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:45:29.900 ************************************ 00:45:29.900 START TEST fio_dif_digest 00:45:29.900 ************************************ 00:45:29.900 11:25:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:45:29.900 11:25:49 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:45:29.900 11:25:49 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:45:29.900 11:25:49 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:45:29.900 11:25:49 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:45:29.900 11:25:49 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:45:29.900 11:25:49 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:45:29.900 11:25:49 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:45:29.900 11:25:49 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:45:29.900 11:25:49 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:45:29.900 11:25:49 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:45:29.900 11:25:49 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:45:29.900 
11:25:49 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:45:29.900 11:25:49 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:45:29.900 11:25:49 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:45:29.900 11:25:49 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:45:29.900 11:25:49 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:45:29.900 11:25:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:29.900 11:25:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:45:29.900 bdev_null0 00:45:29.900 11:25:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:29.900 11:25:49 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:45:29.900 11:25:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:29.900 11:25:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:45:29.900 11:25:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:29.900 11:25:49 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:45:29.900 11:25:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:29.900 11:25:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:45:29.900 11:25:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:29.900 11:25:49 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:45:29.900 11:25:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:29.900 11:25:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:45:30.161 [2024-10-09 11:25:49.905443] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:30.161 11:25:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:30.161 11:25:49 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:45:30.161 11:25:49 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:45:30.161 11:25:49 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:45:30.161 11:25:49 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # config=() 00:45:30.161 11:25:49 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # local subsystem config 00:45:30.161 11:25:49 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:30.161 11:25:49 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:45:30.161 11:25:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:30.161 11:25:49 nvmf_dif.fio_dif_digest -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:45:30.161 { 00:45:30.161 "params": { 00:45:30.161 "name": "Nvme$subsystem", 00:45:30.161 "trtype": "$TEST_TRANSPORT", 00:45:30.161 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:30.161 "adrfam": "ipv4", 00:45:30.161 "trsvcid": "$NVMF_PORT", 00:45:30.161 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:45:30.161 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:30.161 "hdgst": ${hdgst:-false}, 00:45:30.161 "ddgst": ${ddgst:-false} 00:45:30.161 }, 00:45:30.161 "method": "bdev_nvme_attach_controller" 00:45:30.161 } 00:45:30.161 EOF 00:45:30.161 )") 00:45:30.161 11:25:49 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:45:30.161 11:25:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:45:30.161 11:25:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:45:30.161 11:25:49 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:45:30.161 11:25:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:45:30.161 11:25:49 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:45:30.161 11:25:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:30.161 11:25:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:45:30.161 11:25:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:45:30.161 11:25:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:45:30.161 11:25:49 nvmf_dif.fio_dif_digest -- nvmf/common.sh@580 -- # cat 00:45:30.161 11:25:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:30.161 11:25:49 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:45:30.161 11:25:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:45:30.161 11:25:49 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:45:30.161 11:25:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:45:30.161 11:25:49 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # jq . 
00:45:30.161 11:25:49 nvmf_dif.fio_dif_digest -- nvmf/common.sh@583 -- # IFS=, 00:45:30.161 11:25:49 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:45:30.161 "params": { 00:45:30.161 "name": "Nvme0", 00:45:30.161 "trtype": "tcp", 00:45:30.161 "traddr": "10.0.0.2", 00:45:30.161 "adrfam": "ipv4", 00:45:30.161 "trsvcid": "4420", 00:45:30.161 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:30.161 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:30.161 "hdgst": true, 00:45:30.161 "ddgst": true 00:45:30.161 }, 00:45:30.161 "method": "bdev_nvme_attach_controller" 00:45:30.161 }' 00:45:30.161 11:25:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:45:30.161 11:25:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:45:30.161 11:25:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:45:30.161 11:25:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:30.161 11:25:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:45:30.161 11:25:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:45:30.161 11:25:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:45:30.161 11:25:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:45:30.161 11:25:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:45:30.161 11:25:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:30.421 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:45:30.421 ... 
00:45:30.421 fio-3.35 00:45:30.421 Starting 3 threads 00:45:42.738 00:45:42.738 filename0: (groupid=0, jobs=1): err= 0: pid=2255450: Wed Oct 9 11:26:00 2024 00:45:42.738 read: IOPS=229, BW=28.7MiB/s (30.1MB/s)(288MiB/10049msec) 00:45:42.738 slat (nsec): min=5686, max=31273, avg=8783.73, stdev=1346.99 00:45:42.738 clat (usec): min=6631, max=57135, avg=13053.41, stdev=2975.87 00:45:42.738 lat (usec): min=6640, max=57144, avg=13062.19, stdev=2975.92 00:45:42.738 clat percentiles (usec): 00:45:42.738 | 1.00th=[ 8356], 5.00th=[ 9372], 10.00th=[ 9896], 20.00th=[11600], 00:45:42.738 | 30.00th=[12518], 40.00th=[13042], 50.00th=[13304], 60.00th=[13566], 00:45:42.738 | 70.00th=[13960], 80.00th=[14353], 90.00th=[14746], 95.00th=[15139], 00:45:42.738 | 99.00th=[15926], 99.50th=[16319], 99.90th=[55837], 99.95th=[56886], 00:45:42.738 | 99.99th=[56886] 00:45:42.738 bw ( KiB/s): min=27136, max=32256, per=35.17%, avg=29465.60, stdev=1497.11, samples=20 00:45:42.738 iops : min= 212, max= 252, avg=230.20, stdev=11.70, samples=20 00:45:42.738 lat (msec) : 10=11.20%, 20=88.45%, 50=0.04%, 100=0.30% 00:45:42.738 cpu : usr=95.21%, sys=4.54%, ctx=34, majf=0, minf=138 00:45:42.738 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:42.738 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:42.738 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:42.738 issued rwts: total=2304,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:42.738 latency : target=0, window=0, percentile=100.00%, depth=3 00:45:42.738 filename0: (groupid=0, jobs=1): err= 0: pid=2255451: Wed Oct 9 11:26:00 2024 00:45:42.738 read: IOPS=222, BW=27.9MiB/s (29.2MB/s)(280MiB/10047msec) 00:45:42.738 slat (nsec): min=5585, max=32931, avg=6553.58, stdev=844.12 00:45:42.738 clat (usec): min=7507, max=56121, avg=13433.35, stdev=3669.41 00:45:42.738 lat (usec): min=7514, max=56154, avg=13439.90, stdev=3669.64 00:45:42.738 clat percentiles (usec): 00:45:42.738 | 1.00th=[ 8717], 5.00th=[ 9634], 10.00th=[10159], 20.00th=[11863], 00:45:42.738 | 30.00th=[12780], 40.00th=[13173], 50.00th=[13566], 60.00th=[13829], 00:45:42.738 | 70.00th=[14222], 80.00th=[14615], 90.00th=[15139], 95.00th=[15533], 00:45:42.738 | 99.00th=[16712], 99.50th=[52167], 99.90th=[55837], 99.95th=[56361], 00:45:42.738 | 99.99th=[56361] 00:45:42.738 bw ( KiB/s): min=24576, max=30976, per=34.17%, avg=28633.60, stdev=1812.62, samples=20 00:45:42.738 iops : min= 192, max= 242, avg=223.70, stdev=14.16, samples=20 00:45:42.738 lat (msec) : 10=8.53%, 20=90.84%, 50=0.04%, 100=0.58% 00:45:42.738 cpu : usr=95.06%, sys=4.71%, ctx=26, majf=0, minf=120 00:45:42.738 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:42.738 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:42.738 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:42.738 issued rwts: total=2239,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:42.738 latency : target=0, window=0, percentile=100.00%, depth=3 00:45:42.738 filename0: (groupid=0, jobs=1): err= 0: pid=2255452: Wed Oct 9 11:26:00 2024 00:45:42.738 read: IOPS=202, BW=25.3MiB/s (26.6MB/s)(254MiB/10045msec) 00:45:42.738 slat (nsec): min=5607, max=31915, avg=6531.79, stdev=987.41 00:45:42.738 clat (usec): min=7121, max=96203, avg=14778.69, stdev=9445.30 00:45:42.738 lat (usec): min=7127, max=96210, avg=14785.22, stdev=9445.27 00:45:42.738 clat percentiles (usec): 00:45:42.738 | 1.00th=[ 8848], 5.00th=[10814], 10.00th=[11469], 
20.00th=[11994], 00:45:42.738 | 30.00th=[12387], 40.00th=[12649], 50.00th=[12911], 60.00th=[13173], 00:45:42.738 | 70.00th=[13566], 80.00th=[13960], 90.00th=[14484], 95.00th=[15664], 00:45:42.738 | 99.00th=[54789], 99.50th=[55313], 99.90th=[93848], 99.95th=[94897], 00:45:42.738 | 99.99th=[95945] 00:45:42.738 bw ( KiB/s): min=19456, max=32000, per=31.06%, avg=26022.40, stdev=3425.83, samples=20 00:45:42.738 iops : min= 152, max= 250, avg=203.30, stdev=26.76, samples=20 00:45:42.738 lat (msec) : 10=2.75%, 20=92.78%, 50=0.10%, 100=4.37% 00:45:42.738 cpu : usr=95.32%, sys=4.45%, ctx=13, majf=0, minf=150 00:45:42.738 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:42.738 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:42.738 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:42.738 issued rwts: total=2035,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:42.738 latency : target=0, window=0, percentile=100.00%, depth=3 00:45:42.738 00:45:42.738 Run status group 0 (all jobs): 00:45:42.738 READ: bw=81.8MiB/s (85.8MB/s), 25.3MiB/s-28.7MiB/s (26.6MB/s-30.1MB/s), io=822MiB (862MB), run=10045-10049msec 00:45:42.738 11:26:00 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:45:42.738 11:26:00 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:45:42.738 11:26:00 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:45:42.738 11:26:00 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:45:42.738 11:26:00 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:45:42.738 11:26:00 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:45:42.738 11:26:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:42.738 11:26:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:45:42.738 11:26:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:42.738 11:26:01 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:45:42.738 11:26:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:42.738 11:26:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:45:42.738 11:26:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:42.738 00:45:42.738 real 0m11.156s 00:45:42.738 user 0m40.385s 00:45:42.738 sys 0m1.682s 00:45:42.738 11:26:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:45:42.738 11:26:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:45:42.738 ************************************ 00:45:42.738 END TEST fio_dif_digest 00:45:42.738 ************************************ 00:45:42.738 11:26:01 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:45:42.738 11:26:01 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:45:42.738 11:26:01 nvmf_dif -- nvmf/common.sh@514 -- # nvmfcleanup 00:45:42.738 11:26:01 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:45:42.738 11:26:01 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:45:42.739 11:26:01 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:45:42.739 11:26:01 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:45:42.739 11:26:01 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:45:42.739 rmmod nvme_tcp 00:45:42.739 rmmod nvme_fabrics 00:45:42.739 rmmod nvme_keyring 00:45:42.739 
11:26:01 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:45:42.739 11:26:01 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:45:42.739 11:26:01 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:45:42.739 11:26:01 nvmf_dif -- nvmf/common.sh@515 -- # '[' -n 2245071 ']' 00:45:42.739 11:26:01 nvmf_dif -- nvmf/common.sh@516 -- # killprocess 2245071 00:45:42.739 11:26:01 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 2245071 ']' 00:45:42.739 11:26:01 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 2245071 00:45:42.739 11:26:01 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:45:42.739 11:26:01 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:45:42.739 11:26:01 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2245071 00:45:42.739 11:26:01 nvmf_dif -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:45:42.739 11:26:01 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:45:42.739 11:26:01 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2245071' 00:45:42.739 killing process with pid 2245071 00:45:42.739 11:26:01 nvmf_dif -- common/autotest_common.sh@969 -- # kill 2245071 00:45:42.739 11:26:01 nvmf_dif -- common/autotest_common.sh@974 -- # wait 2245071 00:45:42.739 11:26:01 nvmf_dif -- nvmf/common.sh@518 -- # '[' iso == iso ']' 00:45:42.739 11:26:01 nvmf_dif -- nvmf/common.sh@519 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:45:44.651 Waiting for block devices as requested 00:45:44.651 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:45:44.651 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:45:44.651 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:45:44.651 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:45:44.911 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:45:44.911 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:45:44.911 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:45:45.171 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:45:45.171 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:45:45.171 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:45:45.432 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:45:45.432 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:45:45.432 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:45:45.701 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:45:45.701 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:45:45.701 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:45:45.701 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:45:45.961 11:26:05 nvmf_dif -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:45:45.961 11:26:05 nvmf_dif -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:45:45.961 11:26:05 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:45:45.961 11:26:05 nvmf_dif -- nvmf/common.sh@789 -- # iptables-save 00:45:45.961 11:26:05 nvmf_dif -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:45:45.961 11:26:05 nvmf_dif -- nvmf/common.sh@789 -- # iptables-restore 00:45:45.961 11:26:05 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:45:45.961 11:26:05 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:45:45.961 11:26:05 nvmf_dif -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:45.962 11:26:05 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:45:45.962 11:26:05 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:48.507 11:26:08 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush 
cvl_0_1 00:45:48.507 00:45:48.507 real 1m17.676s 00:45:48.507 user 7m49.804s 00:45:48.507 sys 0m20.603s 00:45:48.507 11:26:08 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:45:48.507 11:26:08 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:45:48.507 ************************************ 00:45:48.507 END TEST nvmf_dif 00:45:48.507 ************************************ 00:45:48.507 11:26:08 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:45:48.507 11:26:08 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:45:48.507 11:26:08 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:45:48.507 11:26:08 -- common/autotest_common.sh@10 -- # set +x 00:45:48.507 ************************************ 00:45:48.507 START TEST nvmf_abort_qd_sizes 00:45:48.507 ************************************ 00:45:48.507 11:26:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:45:48.507 * Looking for test storage... 00:45:48.507 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:45:48.507 11:26:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:45:48.507 11:26:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lcov --version 00:45:48.507 11:26:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:45:48.507 11:26:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:45:48.507 11:26:08 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:45:48.507 11:26:08 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:45:48.507 11:26:08 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:45:48.507 11:26:08 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:45:48.507 11:26:08 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:45:48.507 11:26:08 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:45:48.507 11:26:08 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:45:48.507 11:26:08 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:45:48.507 11:26:08 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:45:48.507 11:26:08 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:45:48.507 11:26:08 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:45:48.507 11:26:08 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:45:48.507 11:26:08 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:45:48.507 11:26:08 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:45:48.507 11:26:08 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:45:48.507 11:26:08 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:45:48.507 11:26:08 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:45:48.507 11:26:08 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:45:48.507 11:26:08 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:45:48.507 11:26:08 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:45:48.507 11:26:08 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:45:48.507 11:26:08 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:45:48.507 11:26:08 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:45:48.507 11:26:08 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:45:48.507 11:26:08 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:45:48.507 11:26:08 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:45:48.507 11:26:08 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:45:48.508 11:26:08 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:45:48.508 11:26:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:45:48.508 11:26:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:45:48.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:48.508 --rc genhtml_branch_coverage=1 00:45:48.508 --rc genhtml_function_coverage=1 00:45:48.508 --rc genhtml_legend=1 00:45:48.508 --rc geninfo_all_blocks=1 00:45:48.508 --rc geninfo_unexecuted_blocks=1 00:45:48.508 00:45:48.508 ' 00:45:48.508 11:26:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:45:48.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:48.508 --rc genhtml_branch_coverage=1 00:45:48.508 --rc genhtml_function_coverage=1 00:45:48.508 --rc genhtml_legend=1 00:45:48.508 --rc geninfo_all_blocks=1 00:45:48.508 --rc geninfo_unexecuted_blocks=1 00:45:48.508 00:45:48.508 ' 00:45:48.508 11:26:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:45:48.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:48.508 --rc genhtml_branch_coverage=1 00:45:48.508 --rc genhtml_function_coverage=1 00:45:48.508 --rc genhtml_legend=1 00:45:48.508 --rc geninfo_all_blocks=1 00:45:48.508 --rc geninfo_unexecuted_blocks=1 00:45:48.508 00:45:48.508 ' 00:45:48.508 11:26:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:45:48.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:48.508 --rc genhtml_branch_coverage=1 00:45:48.508 --rc genhtml_function_coverage=1 00:45:48.508 --rc genhtml_legend=1 00:45:48.508 --rc geninfo_all_blocks=1 00:45:48.508 --rc geninfo_unexecuted_blocks=1 00:45:48.508 00:45:48.508 ' 00:45:48.508 11:26:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:45:48.508 11:26:08 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:45:48.508 11:26:08 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:45:48.508 11:26:08 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:45:48.508 11:26:08 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:45:48.508 11:26:08 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:45:48.508 11:26:08 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:45:48.508 11:26:08 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:45:48.508 11:26:08 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:45:48.508 11:26:08 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:45:48.508 11:26:08 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:45:48.508 11:26:08 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:45:48.508 11:26:08 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:45:48.508 11:26:08 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:45:48.508 11:26:08 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:45:48.508 11:26:08 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:45:48.508 11:26:08 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:45:48.508 11:26:08 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:45:48.508 11:26:08 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:45:48.508 11:26:08 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:45:48.508 11:26:08 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:48.508 11:26:08 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:48.508 11:26:08 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:48.508 11:26:08 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:48.508 11:26:08 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:48.508 11:26:08 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:48.508 11:26:08 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:45:48.508 11:26:08 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:48.508 11:26:08 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:45:48.508 11:26:08 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:45:48.508 11:26:08 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:45:48.508 11:26:08 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:45:48.508 11:26:08 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:45:48.508 11:26:08 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:45:48.508 11:26:08 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:45:48.508 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:45:48.508 11:26:08 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:45:48.508 11:26:08 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:45:48.508 11:26:08 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:45:48.508 11:26:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:45:48.508 11:26:08 nvmf_abort_qd_sizes -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:45:48.508 11:26:08 nvmf_abort_qd_sizes -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:45:48.508 11:26:08 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # prepare_net_devs 00:45:48.508 11:26:08 nvmf_abort_qd_sizes -- nvmf/common.sh@436 -- # local -g is_hw=no 00:45:48.508 11:26:08 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # remove_spdk_ns 00:45:48.508 11:26:08 nvmf_abort_qd_sizes -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:48.508 11:26:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:45:48.508 11:26:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:48.508 11:26:08 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:45:48.508 11:26:08 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:45:48.508 11:26:08 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:45:48.508 11:26:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:45:56.649 11:26:15 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:45:56.649 11:26:15 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:45:56.649 11:26:15 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:45:56.649 11:26:15 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:45:56.649 11:26:15 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:45:56.649 11:26:15 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:45:56.649 11:26:15 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:45:56.649 11:26:15 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:45:56.649 11:26:15 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:45:56.649 11:26:15 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:45:56.649 11:26:15 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:45:56.649 11:26:15 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:45:56.649 11:26:15 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:45:56.649 11:26:15 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:45:56.649 11:26:15 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:45:56.649 11:26:15 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:45:56.649 11:26:15 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:45:56.649 11:26:15 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:45:56.649 11:26:15 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:45:56.649 11:26:15 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:45:56.649 11:26:15 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:45:56.649 11:26:15 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:45:56.649 11:26:15 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:45:56.649 11:26:15 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:45:56.649 11:26:15 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:45:56.649 11:26:15 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:45:56.649 11:26:15 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:45:56.649 11:26:15 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:45:56.649 11:26:15 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:45:56.649 11:26:15 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:45:56.649 11:26:15 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:45:56.649 11:26:15 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:45:56.649 11:26:15 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:45:56.649 11:26:15 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:45:56.649 11:26:15 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:45:56.649 Found 0000:31:00.0 (0x8086 - 0x159b) 00:45:56.649 11:26:15 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:45:56.649 11:26:15 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:45:56.649 11:26:15 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:45:56.650 11:26:15 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:45:56.650 11:26:15 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:45:56.650 11:26:15 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:45:56.650 11:26:15 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:45:56.650 Found 0000:31:00.1 (0x8086 - 0x159b) 00:45:56.650 11:26:15 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:45:56.650 11:26:15 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:45:56.650 11:26:15 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:45:56.650 11:26:15 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:45:56.650 11:26:15 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:45:56.650 11:26:15 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:45:56.650 11:26:15 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:45:56.650 11:26:15 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:45:56.650 11:26:15 nvmf_abort_qd_sizes -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:45:56.650 11:26:15 nvmf_abort_qd_sizes -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:45:56.650 11:26:15 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:45:56.650 11:26:15 nvmf_abort_qd_sizes -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:45:56.650 11:26:15 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ up == up ]] 00:45:56.650 11:26:15 nvmf_abort_qd_sizes -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:45:56.650 11:26:15 nvmf_abort_qd_sizes -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:45:56.650 11:26:15 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:45:56.650 Found net devices under 0000:31:00.0: cvl_0_0 00:45:56.650 11:26:15 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:45:56.650 11:26:15 nvmf_abort_qd_sizes -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:45:56.650 11:26:15 nvmf_abort_qd_sizes -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:45:56.650 11:26:15 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:45:56.650 11:26:15 nvmf_abort_qd_sizes -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:45:56.650 11:26:15 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ up == up ]] 00:45:56.650 11:26:15 nvmf_abort_qd_sizes -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:45:56.650 11:26:15 nvmf_abort_qd_sizes -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:45:56.650 11:26:15 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:45:56.650 Found net devices under 0000:31:00.1: cvl_0_1 00:45:56.650 11:26:15 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:45:56.650 11:26:15 nvmf_abort_qd_sizes -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:45:56.650 11:26:15 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # is_hw=yes 00:45:56.650 11:26:15 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:45:56.650 11:26:15 nvmf_abort_qd_sizes -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:45:56.650 11:26:15 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:45:56.650 11:26:15 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:45:56.650 11:26:15 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:45:56.650 11:26:15 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:45:56.650 11:26:15 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:45:56.650 11:26:15 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:45:56.650 11:26:15 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:45:56.650 11:26:15 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:45:56.650 11:26:15 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:45:56.650 11:26:15 
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:45:56.650 11:26:15 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:45:56.650 11:26:15 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:45:56.650 11:26:15 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:45:56.650 11:26:15 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:45:56.650 11:26:15 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:45:56.650 11:26:15 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:45:56.650 11:26:15 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:45:56.650 11:26:15 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:45:56.650 11:26:15 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:45:56.650 11:26:15 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:45:56.650 11:26:15 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:45:56.650 11:26:15 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:45:56.650 11:26:15 nvmf_abort_qd_sizes -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:45:56.650 11:26:15 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:45:56.650 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:45:56.650 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.626 ms 00:45:56.650 00:45:56.650 --- 10.0.0.2 ping statistics --- 00:45:56.650 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:56.650 rtt min/avg/max/mdev = 0.626/0.626/0.626/0.000 ms 00:45:56.650 11:26:15 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:45:56.650 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:45:56.650 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.344 ms 00:45:56.650 00:45:56.650 --- 10.0.0.1 ping statistics --- 00:45:56.650 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:56.650 rtt min/avg/max/mdev = 0.344/0.344/0.344/0.000 ms 00:45:56.650 11:26:15 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:45:56.650 11:26:15 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # return 0 00:45:56.650 11:26:15 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # '[' iso == iso ']' 00:45:56.650 11:26:15 nvmf_abort_qd_sizes -- nvmf/common.sh@477 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:45:59.193 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:45:59.193 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:45:59.193 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:45:59.193 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:45:59.193 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:45:59.193 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:45:59.193 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:45:59.193 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:45:59.193 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:45:59.193 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:45:59.193 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:45:59.193 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:45:59.193 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:45:59.193 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:45:59.193 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:45:59.193 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:45:59.193 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:45:59.454 11:26:19 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:45:59.454 11:26:19 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:45:59.454 11:26:19 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:45:59.454 11:26:19 nvmf_abort_qd_sizes -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:45:59.454 11:26:19 nvmf_abort_qd_sizes -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:45:59.454 11:26:19 nvmf_abort_qd_sizes -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:45:59.454 11:26:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:45:59.454 11:26:19 nvmf_abort_qd_sizes -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:45:59.454 11:26:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:45:59.454 11:26:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:45:59.454 11:26:19 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # nvmfpid=2264879 00:45:59.454 11:26:19 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # waitforlisten 2264879 00:45:59.454 11:26:19 nvmf_abort_qd_sizes -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:45:59.454 11:26:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 2264879 ']' 00:45:59.454 11:26:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:59.454 11:26:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100 00:45:59.454 11:26:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:45:59.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:59.454 11:26:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:45:59.454 11:26:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:45:59.715 [2024-10-09 11:26:19.471244] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:45:59.715 [2024-10-09 11:26:19.471309] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:45:59.715 [2024-10-09 11:26:19.613062] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:45:59.715 [2024-10-09 11:26:19.645535] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:45:59.715 [2024-10-09 11:26:19.669835] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:45:59.715 [2024-10-09 11:26:19.669879] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:45:59.715 [2024-10-09 11:26:19.669887] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:45:59.715 [2024-10-09 11:26:19.669895] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:45:59.715 [2024-10-09 11:26:19.669901] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:45:59.715 [2024-10-09 11:26:19.671958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:45:59.715 [2024-10-09 11:26:19.672078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:45:59.715 [2024-10-09 11:26:19.672236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:45:59.715 [2024-10-09 11:26:19.672236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:46:00.285 11:26:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:46:00.285 11:26:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:46:00.285 11:26:20 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:46:00.285 11:26:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:46:00.285 11:26:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:46:00.545 11:26:20 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:46:00.546 11:26:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:46:00.546 11:26:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:46:00.546 11:26:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:46:00.546 11:26:20 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:46:00.546 11:26:20 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:46:00.546 11:26:20 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:65:00.0 ]] 00:46:00.546 11:26:20 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:46:00.546 11:26:20 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:46:00.546 11:26:20 
nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:46:00.546 11:26:20 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:46:00.546 11:26:20 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:46:00.546 11:26:20 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:46:00.546 11:26:20 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:46:00.546 11:26:20 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:65:00.0 00:46:00.546 11:26:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:46:00.546 11:26:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:46:00.546 11:26:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:46:00.546 11:26:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:46:00.546 11:26:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:46:00.546 11:26:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:46:00.546 ************************************ 00:46:00.546 START TEST spdk_target_abort 00:46:00.546 ************************************ 00:46:00.546 11:26:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:46:00.546 11:26:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:46:00.546 11:26:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:46:00.546 11:26:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:00.546 11:26:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:00.806 spdk_targetn1 00:46:00.806 11:26:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:00.806 11:26:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:46:00.806 11:26:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:00.806 11:26:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:00.806 [2024-10-09 11:26:20.671258] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:46:00.806 11:26:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:00.806 11:26:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:46:00.806 11:26:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:00.806 11:26:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:00.806 11:26:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:00.806 11:26:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:46:00.806 11:26:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:00.806 11:26:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:00.806 11:26:20 nvmf_abort_qd_sizes.spdk_target_abort 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:00.806 11:26:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:46:00.806 11:26:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:00.806 11:26:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:00.806 [2024-10-09 11:26:20.711457] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:46:00.806 11:26:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:00.806 11:26:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:46:00.806 11:26:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:46:00.806 11:26:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:46:00.806 11:26:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:46:00.806 11:26:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:46:00.806 11:26:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:46:00.806 11:26:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:46:00.806 11:26:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:46:00.806 11:26:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:46:00.806 11:26:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:00.806 11:26:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:46:00.806 11:26:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:00.806 11:26:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:46:00.807 11:26:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:00.807 11:26:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:46:00.807 11:26:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:00.807 11:26:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:46:00.807 11:26:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:00.807 11:26:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:00.807 11:26:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:46:00.807 11:26:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:01.377 [2024-10-09 11:26:21.112263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:544 len:8 PRP1 0x200004ac0000 PRP2 0x0 00:46:01.377 [2024-10-09 11:26:21.112289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0046 p:1 m:0 dnr:0 00:46:01.377 [2024-10-09 11:26:21.143984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:1640 len:8 PRP1 0x200004abe000 PRP2 0x0 00:46:01.377 [2024-10-09 11:26:21.144002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:00cf p:1 m:0 dnr:0 00:46:01.377 [2024-10-09 11:26:21.183946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:3032 len:8 PRP1 0x200004ac6000 PRP2 0x0 00:46:01.377 [2024-10-09 11:26:21.183963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:46:01.377 [2024-10-09 11:26:21.193626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:3440 len:8 PRP1 0x200004ac4000 PRP2 0x0 00:46:01.377 [2024-10-09 11:26:21.193642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:00af p:0 m:0 dnr:0 00:46:01.377 [2024-10-09 11:26:21.194229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:3464 len:8 PRP1 0x200004abe000 PRP2 0x0 00:46:01.377 [2024-10-09 11:26:21.194240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:00b4 p:0 m:0 dnr:0 00:46:01.377 [2024-10-09 11:26:21.207980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:3872 len:8 PRP1 0x200004ac0000 PRP2 0x0 00:46:01.377 [2024-10-09 11:26:21.207996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:00e6 p:0 m:0 dnr:0 00:46:04.675 Initializing NVMe Controllers 00:46:04.675 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:46:04.675 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:46:04.675 Initialization complete. Launching workers. 
00:46:04.675 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11957, failed: 6 00:46:04.675 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2883, failed to submit 9080 00:46:04.675 success 747, unsuccessful 2136, failed 0 00:46:04.675 11:26:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:46:04.675 11:26:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:04.675 [2024-10-09 11:26:24.336388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:172 nsid:1 lba:216 len:8 PRP1 0x200004e50000 PRP2 0x0 00:46:04.675 [2024-10-09 11:26:24.336427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:172 cdw0:0 sqhd:002c p:1 m:0 dnr:0 00:46:04.675 [2024-10-09 11:26:24.391755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:185 nsid:1 lba:1608 len:8 PRP1 0x200004e50000 PRP2 0x0 00:46:04.675 [2024-10-09 11:26:24.391781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:185 cdw0:0 sqhd:00d2 p:1 m:0 dnr:0 00:46:04.675 [2024-10-09 11:26:24.407443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:169 nsid:1 lba:1952 len:8 PRP1 0x200004e5c000 PRP2 0x0 00:46:04.675 [2024-10-09 11:26:24.407473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:169 cdw0:0 sqhd:0000 p:1 m:0 dnr:0 00:46:04.675 [2024-10-09 11:26:24.415578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:177 nsid:1 lba:2136 len:8 PRP1 0x200004e50000 PRP2 0x0 00:46:04.675 [2024-10-09 11:26:24.415600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:177 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:46:04.675 [2024-10-09 11:26:24.463590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:173 nsid:1 lba:3256 len:8 PRP1 0x200004e54000 PRP2 0x0 00:46:04.675 [2024-10-09 11:26:24.463613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:173 cdw0:0 sqhd:0099 p:0 m:0 dnr:0 00:46:04.936 [2024-10-09 11:26:24.896520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:171 nsid:1 lba:12320 len:8 PRP1 0x200004e50000 PRP2 0x0 00:46:04.936 [2024-10-09 11:26:24.896548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:171 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:46:04.936 [2024-10-09 11:26:24.896855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:184 nsid:1 lba:12328 len:8 PRP1 0x200004e46000 PRP2 0x0 00:46:04.936 [2024-10-09 11:26:24.896864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:184 cdw0:0 sqhd:0015 p:1 m:0 dnr:0 00:46:07.477 [2024-10-09 11:26:27.279645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:178 nsid:1 lba:65840 len:8 PRP1 0x200004e4e000 PRP2 0x0 00:46:07.477 [2024-10-09 11:26:27.279684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:178 cdw0:0 sqhd:0027 p:1 m:0 dnr:0 00:46:07.737 Initializing NVMe Controllers 00:46:07.737 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: 
nqn.2016-06.io.spdk:testnqn 00:46:07.737 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:46:07.737 Initialization complete. Launching workers. 00:46:07.737 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8358, failed: 8 00:46:07.737 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1250, failed to submit 7116 00:46:07.737 success 317, unsuccessful 933, failed 0 00:46:07.737 11:26:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:46:07.737 11:26:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:09.648 [2024-10-09 11:26:29.123285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:175 nsid:1 lba:150600 len:8 PRP1 0x200004b2c000 PRP2 0x0 00:46:09.648 [2024-10-09 11:26:29.123313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:175 cdw0:0 sqhd:0046 p:1 m:0 dnr:0 00:46:11.031 Initializing NVMe Controllers 00:46:11.031 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:46:11.031 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:46:11.031 Initialization complete. Launching workers. 00:46:11.031 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 41690, failed: 1 00:46:11.031 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2687, failed to submit 39004 00:46:11.031 success 592, unsuccessful 2095, failed 0 00:46:11.031 11:26:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:46:11.031 11:26:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:11.031 11:26:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:11.031 11:26:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:11.031 11:26:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:46:11.031 11:26:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:11.031 11:26:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:12.945 11:26:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:12.945 11:26:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 2264879 00:46:12.945 11:26:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 2264879 ']' 00:46:12.945 11:26:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 2264879 00:46:12.945 11:26:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:46:12.945 11:26:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:46:12.945 11:26:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2264879 00:46:12.945 11:26:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # 
process_name=reactor_0 00:46:12.945 11:26:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:46:12.945 11:26:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2264879' 00:46:12.945 killing process with pid 2264879 00:46:12.945 11:26:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 2264879 00:46:12.945 11:26:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 2264879 00:46:12.945 00:46:12.945 real 0m12.448s 00:46:12.945 user 0m50.570s 00:46:12.945 sys 0m1.850s 00:46:12.945 11:26:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:46:12.945 11:26:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:12.945 ************************************ 00:46:12.945 END TEST spdk_target_abort 00:46:12.945 ************************************ 00:46:12.945 11:26:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:46:12.945 11:26:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:46:12.945 11:26:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:46:12.945 11:26:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:46:12.945 ************************************ 00:46:12.945 START TEST kernel_target_abort 00:46:12.945 ************************************ 00:46:12.945 11:26:32 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:46:12.945 11:26:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:46:12.945 11:26:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@767 -- # local ip 00:46:12.945 11:26:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # ip_candidates=() 00:46:12.945 11:26:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # local -A ip_candidates 00:46:12.945 11:26:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:46:12.945 11:26:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:46:12.945 11:26:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:46:12.945 11:26:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:46:12.945 11:26:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:46:12.945 11:26:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:46:12.945 11:26:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:46:12.945 11:26:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:46:12.945 11:26:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:46:12.945 11:26:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:46:12.945 11:26:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:46:12.945 11:26:32 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:46:12.945 11:26:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:46:12.945 11:26:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # local block nvme 00:46:12.945 11:26:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # [[ ! -e /sys/module/nvmet ]] 00:46:12.946 11:26:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # modprobe nvmet 00:46:12.946 11:26:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:46:12.946 11:26:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:46:16.249 Waiting for block devices as requested 00:46:16.509 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:46:16.509 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:46:16.509 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:46:16.769 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:46:16.769 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:46:16.769 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:46:17.030 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:46:17.030 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:46:17.030 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:46:17.290 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:46:17.290 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:46:17.290 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:46:17.551 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:46:17.551 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:46:17.551 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:46:17.551 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:46:17.819 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:46:18.079 11:26:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:46:18.079 11:26:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:46:18.079 11:26:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:46:18.079 11:26:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:46:18.079 11:26:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:46:18.079 11:26:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:46:18.079 11:26:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:46:18.079 11:26:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:46:18.079 11:26:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:46:18.079 No valid GPT data, bailing 00:46:18.079 11:26:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:46:18.079 11:26:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:46:18.079 11:26:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:46:18.079 11:26:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # 
nvme=/dev/nvme0n1 00:46:18.079 11:26:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]] 00:46:18.079 11:26:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:46:18.079 11:26:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:46:18.079 11:26:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:46:18.079 11:26:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:46:18.079 11:26:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo 1 00:46:18.079 11:26:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@694 -- # echo /dev/nvme0n1 00:46:18.079 11:26:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:46:18.079 11:26:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 10.0.0.1 00:46:18.079 11:26:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # echo tcp 00:46:18.079 11:26:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 4420 00:46:18.079 11:26:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo ipv4 00:46:18.079 11:26:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:46:18.079 11:26:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:46:18.340 00:46:18.340 Discovery Log Number of Records 2, Generation counter 2 00:46:18.340 =====Discovery Log Entry 0====== 00:46:18.340 trtype: tcp 00:46:18.340 adrfam: ipv4 00:46:18.340 subtype: current discovery subsystem 00:46:18.340 treq: not specified, sq flow control disable supported 00:46:18.340 portid: 1 00:46:18.340 trsvcid: 4420 00:46:18.340 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:46:18.340 traddr: 10.0.0.1 00:46:18.340 eflags: none 00:46:18.340 sectype: none 00:46:18.340 =====Discovery Log Entry 1====== 00:46:18.340 trtype: tcp 00:46:18.340 adrfam: ipv4 00:46:18.340 subtype: nvme subsystem 00:46:18.340 treq: not specified, sq flow control disable supported 00:46:18.340 portid: 1 00:46:18.340 trsvcid: 4420 00:46:18.340 subnqn: nqn.2016-06.io.spdk:testnqn 00:46:18.340 traddr: 10.0.0.1 00:46:18.340 eflags: none 00:46:18.340 sectype: none 00:46:18.340 11:26:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:46:18.340 11:26:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:46:18.340 11:26:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:46:18.340 11:26:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:46:18.340 11:26:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:46:18.340 11:26:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:46:18.340 11:26:38 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:46:18.340 11:26:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:46:18.340 11:26:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:46:18.340 11:26:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:18.340 11:26:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:46:18.340 11:26:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:18.340 11:26:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:46:18.340 11:26:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:18.340 11:26:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:46:18.340 11:26:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:18.340 11:26:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:46:18.340 11:26:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:18.340 11:26:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:18.340 11:26:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:46:18.340 11:26:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:21.639 Initializing NVMe Controllers 00:46:21.639 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:46:21.639 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:46:21.639 Initialization complete. Launching workers. 00:46:21.639 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 67496, failed: 0 00:46:21.639 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 67496, failed to submit 0 00:46:21.639 success 0, unsuccessful 67496, failed 0 00:46:21.639 11:26:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:46:21.640 11:26:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:24.940 Initializing NVMe Controllers 00:46:24.940 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:46:24.940 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:46:24.940 Initialization complete. Launching workers. 
00:46:24.940 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 108511, failed: 0 00:46:24.940 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 27354, failed to submit 81157 00:46:24.940 success 0, unsuccessful 27354, failed 0 00:46:24.940 11:26:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:46:24.940 11:26:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:28.237 Initializing NVMe Controllers 00:46:28.238 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:46:28.238 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:46:28.238 Initialization complete. Launching workers. 00:46:28.238 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 101610, failed: 0 00:46:28.238 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25398, failed to submit 76212 00:46:28.238 success 0, unsuccessful 25398, failed 0 00:46:28.238 11:26:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:46:28.238 11:26:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:46:28.238 11:26:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # echo 0 00:46:28.238 11:26:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:46:28.238 11:26:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:46:28.238 11:26:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:46:28.238 11:26:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:46:28.238 11:26:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:46:28.238 11:26:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:46:28.238 11:26:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:46:30.779 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:46:30.779 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:46:30.779 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:46:30.779 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:46:30.779 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:46:31.039 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:46:31.039 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:46:31.039 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:46:31.039 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:46:31.039 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:46:31.039 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:46:31.039 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:46:31.039 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:46:31.039 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:46:31.039 0000:00:01.0 (8086 0b00): ioatdma 
-> vfio-pci 00:46:31.039 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:46:32.950 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:46:33.210 00:46:33.210 real 0m20.118s 00:46:33.210 user 0m9.802s 00:46:33.210 sys 0m5.724s 00:46:33.210 11:26:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:46:33.210 11:26:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:33.210 ************************************ 00:46:33.210 END TEST kernel_target_abort 00:46:33.210 ************************************ 00:46:33.210 11:26:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:46:33.210 11:26:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:46:33.210 11:26:53 nvmf_abort_qd_sizes -- nvmf/common.sh@514 -- # nvmfcleanup 00:46:33.210 11:26:53 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:46:33.210 11:26:53 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:46:33.210 11:26:53 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:46:33.210 11:26:53 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:46:33.210 11:26:53 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:46:33.210 rmmod nvme_tcp 00:46:33.210 rmmod nvme_fabrics 00:46:33.210 rmmod nvme_keyring 00:46:33.210 11:26:53 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:46:33.210 11:26:53 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:46:33.210 11:26:53 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:46:33.210 11:26:53 nvmf_abort_qd_sizes -- nvmf/common.sh@515 -- # '[' -n 2264879 ']' 00:46:33.210 11:26:53 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # killprocess 2264879 00:46:33.210 11:26:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 2264879 ']' 00:46:33.210 11:26:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 2264879 00:46:33.210 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2264879) - No such process 00:46:33.210 11:26:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 2264879 is not found' 00:46:33.210 Process with pid 2264879 is not found 00:46:33.210 11:26:53 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # '[' iso == iso ']' 00:46:33.210 11:26:53 nvmf_abort_qd_sizes -- nvmf/common.sh@519 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:46:36.509 Waiting for block devices as requested 00:46:36.509 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:46:36.770 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:46:36.770 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:46:36.770 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:46:36.770 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:46:37.030 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:46:37.030 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:46:37.030 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:46:37.290 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:46:37.290 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:46:37.550 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:46:37.550 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:46:37.550 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:46:37.550 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:46:37.810 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:46:37.810 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:46:37.810 
0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:46:38.070 11:26:58 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:46:38.070 11:26:58 nvmf_abort_qd_sizes -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:46:38.070 11:26:58 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:46:38.070 11:26:58 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # iptables-save 00:46:38.070 11:26:58 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:46:38.070 11:26:58 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # iptables-restore 00:46:38.070 11:26:58 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:46:38.070 11:26:58 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:46:38.070 11:26:58 nvmf_abort_qd_sizes -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:46:38.070 11:26:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:46:38.070 11:26:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:46:40.615 11:27:00 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:46:40.615 00:46:40.615 real 0m51.993s 00:46:40.615 user 1m5.655s 00:46:40.615 sys 0m18.365s 00:46:40.615 11:27:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:46:40.615 11:27:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:46:40.615 ************************************ 00:46:40.615 END TEST nvmf_abort_qd_sizes 00:46:40.615 ************************************ 00:46:40.615 11:27:00 -- spdk/autotest.sh@288 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:46:40.615 11:27:00 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:46:40.615 11:27:00 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:46:40.615 11:27:00 -- common/autotest_common.sh@10 -- # set +x 00:46:40.615 ************************************ 00:46:40.615 START TEST keyring_file 00:46:40.615 ************************************ 00:46:40.615 11:27:00 keyring_file -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:46:40.615 * Looking for test storage... 
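
One detail of the nvmf_tcp_fini teardown just above deserves a note: the iptr helper re-applies the current firewall state minus any rule mentioning SPDK_NVMF, so only the rules the test suite itself tagged are dropped. As traced, it is effectively a single pipeline:

iptables-save | grep -v SPDK_NVMF | iptables-restore
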
00:46:40.615 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:46:40.615 11:27:00 keyring_file -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:46:40.615 11:27:00 keyring_file -- common/autotest_common.sh@1691 -- # lcov --version 00:46:40.615 11:27:00 keyring_file -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:46:40.615 11:27:00 keyring_file -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:46:40.615 11:27:00 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:46:40.615 11:27:00 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:46:40.615 11:27:00 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:46:40.615 11:27:00 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:46:40.615 11:27:00 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:46:40.615 11:27:00 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:46:40.615 11:27:00 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:46:40.615 11:27:00 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:46:40.615 11:27:00 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:46:40.615 11:27:00 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:46:40.615 11:27:00 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:46:40.615 11:27:00 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:46:40.615 11:27:00 keyring_file -- scripts/common.sh@345 -- # : 1 00:46:40.615 11:27:00 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:46:40.616 11:27:00 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:46:40.616 11:27:00 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:46:40.616 11:27:00 keyring_file -- scripts/common.sh@353 -- # local d=1 00:46:40.616 11:27:00 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:46:40.616 11:27:00 keyring_file -- scripts/common.sh@355 -- # echo 1 00:46:40.616 11:27:00 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:46:40.616 11:27:00 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:46:40.616 11:27:00 keyring_file -- scripts/common.sh@353 -- # local d=2 00:46:40.616 11:27:00 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:46:40.616 11:27:00 keyring_file -- scripts/common.sh@355 -- # echo 2 00:46:40.616 11:27:00 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:46:40.616 11:27:00 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:46:40.616 11:27:00 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:46:40.616 11:27:00 keyring_file -- scripts/common.sh@368 -- # return 0 00:46:40.616 11:27:00 keyring_file -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:46:40.616 11:27:00 keyring_file -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:46:40.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:40.616 --rc genhtml_branch_coverage=1 00:46:40.616 --rc genhtml_function_coverage=1 00:46:40.616 --rc genhtml_legend=1 00:46:40.616 --rc geninfo_all_blocks=1 00:46:40.616 --rc geninfo_unexecuted_blocks=1 00:46:40.616 00:46:40.616 ' 00:46:40.616 11:27:00 keyring_file -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:46:40.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:40.616 --rc genhtml_branch_coverage=1 00:46:40.616 --rc genhtml_function_coverage=1 00:46:40.616 --rc genhtml_legend=1 00:46:40.616 --rc geninfo_all_blocks=1 
00:46:40.616 --rc geninfo_unexecuted_blocks=1 00:46:40.616 00:46:40.616 ' 00:46:40.616 11:27:00 keyring_file -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:46:40.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:40.616 --rc genhtml_branch_coverage=1 00:46:40.616 --rc genhtml_function_coverage=1 00:46:40.616 --rc genhtml_legend=1 00:46:40.616 --rc geninfo_all_blocks=1 00:46:40.616 --rc geninfo_unexecuted_blocks=1 00:46:40.616 00:46:40.616 ' 00:46:40.616 11:27:00 keyring_file -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:46:40.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:40.616 --rc genhtml_branch_coverage=1 00:46:40.616 --rc genhtml_function_coverage=1 00:46:40.616 --rc genhtml_legend=1 00:46:40.616 --rc geninfo_all_blocks=1 00:46:40.616 --rc geninfo_unexecuted_blocks=1 00:46:40.616 00:46:40.616 ' 00:46:40.616 11:27:00 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:46:40.616 11:27:00 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:46:40.616 11:27:00 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:46:40.616 11:27:00 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:46:40.616 11:27:00 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:46:40.616 11:27:00 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:46:40.616 11:27:00 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:46:40.616 11:27:00 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:46:40.616 11:27:00 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:46:40.616 11:27:00 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:46:40.616 11:27:00 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:46:40.616 11:27:00 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:46:40.616 11:27:00 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:46:40.616 11:27:00 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:46:40.616 11:27:00 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:46:40.616 11:27:00 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:46:40.616 11:27:00 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:46:40.616 11:27:00 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:46:40.616 11:27:00 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:46:40.616 11:27:00 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:46:40.616 11:27:00 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:46:40.616 11:27:00 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:46:40.616 11:27:00 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:46:40.616 11:27:00 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:46:40.616 11:27:00 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:40.616 11:27:00 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:40.616 11:27:00 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:40.616 11:27:00 keyring_file -- paths/export.sh@5 -- # export PATH 00:46:40.616 11:27:00 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:40.616 11:27:00 keyring_file -- nvmf/common.sh@51 -- # : 0 00:46:40.616 11:27:00 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:46:40.616 11:27:00 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:46:40.616 11:27:00 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:46:40.616 11:27:00 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:46:40.616 11:27:00 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:46:40.616 11:27:00 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:46:40.616 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:46:40.616 11:27:00 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:46:40.616 11:27:00 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:46:40.616 11:27:00 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:46:40.616 11:27:00 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:46:40.616 11:27:00 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:46:40.616 11:27:00 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:46:40.616 11:27:00 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:46:40.616 11:27:00 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:46:40.616 11:27:00 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:46:40.616 11:27:00 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:46:40.616 11:27:00 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
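
The prep_key calls that follow wrap each raw hex key into the NVMe/TCP PSK interchange format before writing it to a mode-0600 temp file. The python - heredoc itself is hidden by the xtrace; the sketch below shows what it computes, assuming the interchange layout SPDK and nvme-cli use elsewhere (base64 of the key bytes plus a trailing little-endian CRC32, framed as prefix:digest:b64:):

format_key() {
  local prefix=$1 key=$2 digest=$3
  python3 - "$prefix" "$key" "$digest" <<'PY'
import base64, sys, zlib
prefix, key, digest = sys.argv[1], bytes.fromhex(sys.argv[2]), int(sys.argv[3])
crc = zlib.crc32(key).to_bytes(4, "little")  # 4-byte CRC32 appended to the key bytes
print(f"{prefix}:{digest:02x}:{base64.b64encode(key + crc).decode()}:")
PY
}
format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0   # prints NVMeTLSkey-1:00:<base64>:
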
00:46:40.616 11:27:00 keyring_file -- keyring/common.sh@17 -- # name=key0 00:46:40.616 11:27:00 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:46:40.616 11:27:00 keyring_file -- keyring/common.sh@17 -- # digest=0 00:46:40.616 11:27:00 keyring_file -- keyring/common.sh@18 -- # mktemp 00:46:40.616 11:27:00 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.FtKiIWJ1gf 00:46:40.616 11:27:00 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:46:40.616 11:27:00 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:46:40.616 11:27:00 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:46:40.616 11:27:00 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:46:40.616 11:27:00 keyring_file -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:46:40.616 11:27:00 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:46:40.616 11:27:00 keyring_file -- nvmf/common.sh@731 -- # python - 00:46:40.616 11:27:00 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.FtKiIWJ1gf 00:46:40.616 11:27:00 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.FtKiIWJ1gf 00:46:40.616 11:27:00 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.FtKiIWJ1gf 00:46:40.616 11:27:00 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:46:40.616 11:27:00 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:46:40.616 11:27:00 keyring_file -- keyring/common.sh@17 -- # name=key1 00:46:40.616 11:27:00 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:46:40.616 11:27:00 keyring_file -- keyring/common.sh@17 -- # digest=0 00:46:40.616 11:27:00 keyring_file -- keyring/common.sh@18 -- # mktemp 00:46:40.616 11:27:00 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.Uj6xVMgMzR 00:46:40.616 11:27:00 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:46:40.616 11:27:00 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:46:40.616 11:27:00 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:46:40.616 11:27:00 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:46:40.616 11:27:00 keyring_file -- nvmf/common.sh@730 -- # key=112233445566778899aabbccddeeff00 00:46:40.616 11:27:00 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:46:40.616 11:27:00 keyring_file -- nvmf/common.sh@731 -- # python - 00:46:40.616 11:27:00 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Uj6xVMgMzR 00:46:40.616 11:27:00 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.Uj6xVMgMzR 00:46:40.616 11:27:00 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.Uj6xVMgMzR 00:46:40.616 11:27:00 keyring_file -- keyring/file.sh@30 -- # tgtpid=2275251 00:46:40.616 11:27:00 keyring_file -- keyring/file.sh@32 -- # waitforlisten 2275251 00:46:40.616 11:27:00 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:46:40.616 11:27:00 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 2275251 ']' 00:46:40.616 11:27:00 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:40.616 11:27:00 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:46:40.616 11:27:00 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:46:40.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:40.616 11:27:00 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:46:40.616 11:27:00 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:46:40.616 [2024-10-09 11:27:00.573615] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:46:40.617 [2024-10-09 11:27:00.573674] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2275251 ] 00:46:40.877 [2024-10-09 11:27:00.703768] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:46:40.877 [2024-10-09 11:27:00.735206] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:40.877 [2024-10-09 11:27:00.753474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:46:41.447 11:27:01 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:46:41.447 11:27:01 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:46:41.447 11:27:01 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:46:41.447 11:27:01 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:41.447 11:27:01 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:46:41.447 [2024-10-09 11:27:01.360485] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:46:41.447 null0 00:46:41.447 [2024-10-09 11:27:01.392454] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:46:41.447 [2024-10-09 11:27:01.392844] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:46:41.447 11:27:01 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:41.447 11:27:01 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:46:41.447 11:27:01 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:46:41.447 11:27:01 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:46:41.447 11:27:01 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:46:41.447 11:27:01 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:46:41.447 11:27:01 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:46:41.447 11:27:01 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:46:41.447 11:27:01 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:46:41.447 11:27:01 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:41.447 11:27:01 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:46:41.447 [2024-10-09 11:27:01.424453] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:46:41.447 request: 00:46:41.447 { 00:46:41.447 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:46:41.447 "secure_channel": false, 00:46:41.447 "listen_address": { 00:46:41.447 "trtype": "tcp", 00:46:41.447 "traddr": "127.0.0.1", 00:46:41.447 "trsvcid": "4420" 00:46:41.447 }, 
00:46:41.447 "method": "nvmf_subsystem_add_listener", 00:46:41.447 "req_id": 1 00:46:41.447 } 00:46:41.447 Got JSON-RPC error response 00:46:41.447 response: 00:46:41.447 { 00:46:41.447 "code": -32602, 00:46:41.447 "message": "Invalid parameters" 00:46:41.447 } 00:46:41.447 11:27:01 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:46:41.447 11:27:01 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:46:41.447 11:27:01 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:46:41.447 11:27:01 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:46:41.447 11:27:01 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:46:41.447 11:27:01 keyring_file -- keyring/file.sh@47 -- # bperfpid=2275391 00:46:41.447 11:27:01 keyring_file -- keyring/file.sh@49 -- # waitforlisten 2275391 /var/tmp/bperf.sock 00:46:41.447 11:27:01 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:46:41.447 11:27:01 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 2275391 ']' 00:46:41.447 11:27:01 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:46:41.447 11:27:01 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:46:41.447 11:27:01 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:46:41.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:46:41.447 11:27:01 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:46:41.447 11:27:01 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:46:41.708 [2024-10-09 11:27:01.483198] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:46:41.708 [2024-10-09 11:27:01.483247] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2275391 ] 00:46:41.708 [2024-10-09 11:27:01.613091] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
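
The duplicate-listener check above is the first use of the suite's NOT wrapper: it inverts a command's exit status, so a step passes exactly when the command fails as expected (here with 'Listener already exists'). A rough reconstruction from the es bookkeeping visible in the trace; the real helper in autotest_common.sh also validates its argument first:

NOT() {
  local es=0
  "$@" || es=$?
  (( es > 128 )) && return "$es"  # killed by a signal: propagate as a real failure
  (( es != 0 ))                   # succeed only if the command failed
}
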
00:46:41.708 [2024-10-09 11:27:01.662743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:41.708 [2024-10-09 11:27:01.680906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:46:42.280 11:27:02 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:46:42.280 11:27:02 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:46:42.280 11:27:02 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.FtKiIWJ1gf 00:46:42.280 11:27:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.FtKiIWJ1gf 00:46:42.539 11:27:02 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.Uj6xVMgMzR 00:46:42.540 11:27:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.Uj6xVMgMzR 00:46:42.800 11:27:02 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:46:42.800 11:27:02 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:46:42.800 11:27:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:42.800 11:27:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:42.800 11:27:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:42.800 11:27:02 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.FtKiIWJ1gf == \/\t\m\p\/\t\m\p\.\F\t\K\i\I\W\J\1\g\f ]] 00:46:42.800 11:27:02 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:46:42.800 11:27:02 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:46:42.800 11:27:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:42.800 11:27:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:46:42.800 11:27:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:43.063 11:27:02 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.Uj6xVMgMzR == \/\t\m\p\/\t\m\p\.\U\j\6\x\V\M\g\M\z\R ]] 00:46:43.063 11:27:02 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:46:43.063 11:27:02 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:46:43.063 11:27:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:43.063 11:27:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:43.063 11:27:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:43.063 11:27:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:43.324 11:27:03 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:46:43.324 11:27:03 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:46:43.324 11:27:03 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:46:43.324 11:27:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:43.324 11:27:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:43.324 11:27:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:46:43.324 11:27:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:43.324 
11:27:03 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:46:43.324 11:27:03 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:43.324 11:27:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:43.621 [2024-10-09 11:27:03.452700] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:46:43.621 nvme0n1 00:46:43.621 11:27:03 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:46:43.621 11:27:03 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:46:43.621 11:27:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:43.621 11:27:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:43.621 11:27:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:43.621 11:27:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:43.942 11:27:03 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:46:43.942 11:27:03 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:46:43.942 11:27:03 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:46:43.942 11:27:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:43.942 11:27:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:43.942 11:27:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:46:43.942 11:27:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:43.942 11:27:03 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:46:43.942 11:27:03 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:46:44.240 Running I/O for 1 seconds... 
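
Note the shape of the test running here: bdevperf was started idle (-z) on its own RPC socket, the PSK file was registered and the TLS-enabled controller attached through that socket, and only then was I/O kicked off via bdevperf.py. Condensed from the trace, with all paths as in this workspace:

build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z &
scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.FtKiIWJ1gf
scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 \
  -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
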
00:46:45.181 16344.00 IOPS, 63.84 MiB/s
00:46:45.181
00:46:45.181 Latency(us)
00:46:45.181 [2024-10-09T09:27:05.183Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:46:45.181 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096)
00:46:45.181 nvme0n1 : 1.01 16354.59 63.89 0.00 0.00 7796.46 3476.06 13302.08
00:46:45.181 [2024-10-09T09:27:05.183Z] ===================================================================================================================
00:46:45.181 [2024-10-09T09:27:05.183Z] Total : 16354.59 63.89 0.00 0.00 7796.46 3476.06 13302.08
00:46:45.181 {
00:46:45.181 "results": [
00:46:45.181 {
00:46:45.181 "job": "nvme0n1",
00:46:45.181 "core_mask": "0x2",
00:46:45.181 "workload": "randrw",
00:46:45.181 "percentage": 50,
00:46:45.181 "status": "finished",
00:46:45.181 "queue_depth": 128,
00:46:45.181 "io_size": 4096,
00:46:45.181 "runtime": 1.00724,
00:46:45.181 "iops": 16354.592748500854,
00:46:45.181 "mibps": 63.88512792383146,
00:46:45.181 "io_failed": 0,
00:46:45.181 "io_timeout": 0,
00:46:45.181 "avg_latency_us": 7796.461678962804,
00:46:45.181 "min_latency_us": 3476.0574674239892,
00:46:45.181 "max_latency_us": 13302.07818242566
00:46:45.181 }
00:46:45.181 ],
00:46:45.181 "core_count": 1
00:46:45.181 }
00:46:45.181 11:27:05 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0
00:46:45.181 11:27:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0
00:46:45.441 11:27:05 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0
00:46:45.441 11:27:05 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:46:45.441 11:27:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:46:45.441 11:27:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:46:45.441 11:27:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:46:45.441 11:27:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:46:45.441 11:27:05 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 ))
00:46:45.441 11:27:05 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1
00:46:45.441 11:27:05 keyring_file -- keyring/common.sh@12 -- # get_key key1
00:46:45.441 11:27:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:46:45.441 11:27:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:46:45.441 11:27:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:46:45.441 11:27:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:46:45.701 11:27:05 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 ))
00:46:45.701 11:27:05 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
00:46:45.701 11:27:05 keyring_file -- common/autotest_common.sh@650 -- # local es=0
00:46:45.701 11:27:05 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
00:46:45.701 11:27:05 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd
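
All of the refcnt assertions in this test reduce to one jq query over keyring_get_keys on the bperf socket: a registered key carries refcnt 1 and gains a second reference while an attached controller holds it. The helpers, reconstructed from the keyring/common.sh trace with the bperf_cmd indirection expanded to the underlying rpc.py call:

get_key()    { scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys | jq ".[] | select(.name == \"$1\")"; }
get_refcnt() { get_key "$1" | jq -r .refcnt; }
(( $(get_refcnt key0) == 1 ))  # back to one reference once nvme0 is detached above
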
00:46:45.701 11:27:05 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:46:45.701 11:27:05 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:46:45.701 11:27:05 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:46:45.701 11:27:05 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:46:45.701 11:27:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:46:45.963 [2024-10-09 11:27:05.715498] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:46:45.963 [2024-10-09 11:27:05.716132] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6baf30 (107): Transport endpoint is not connected 00:46:45.963 [2024-10-09 11:27:05.717127] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6baf30 (9): Bad file descriptor 00:46:45.963 [2024-10-09 11:27:05.718126] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:46:45.963 [2024-10-09 11:27:05.718141] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:46:45.963 [2024-10-09 11:27:05.718147] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:46:45.963 [2024-10-09 11:27:05.718153] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:46:45.963 request: 00:46:45.963 { 00:46:45.963 "name": "nvme0", 00:46:45.963 "trtype": "tcp", 00:46:45.963 "traddr": "127.0.0.1", 00:46:45.963 "adrfam": "ipv4", 00:46:45.963 "trsvcid": "4420", 00:46:45.963 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:46:45.963 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:46:45.963 "prchk_reftag": false, 00:46:45.963 "prchk_guard": false, 00:46:45.963 "hdgst": false, 00:46:45.963 "ddgst": false, 00:46:45.963 "psk": "key1", 00:46:45.963 "allow_unrecognized_csi": false, 00:46:45.963 "method": "bdev_nvme_attach_controller", 00:46:45.963 "req_id": 1 00:46:45.963 } 00:46:45.963 Got JSON-RPC error response 00:46:45.963 response: 00:46:45.963 { 00:46:45.963 "code": -5, 00:46:45.963 "message": "Input/output error" 00:46:45.963 } 00:46:45.963 11:27:05 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:46:45.963 11:27:05 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:46:45.963 11:27:05 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:46:45.963 11:27:05 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:46:45.963 11:27:05 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:46:45.963 11:27:05 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:46:45.963 11:27:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:45.963 11:27:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:45.963 11:27:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:45.963 11:27:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:45.963 11:27:05 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:46:45.963 11:27:05 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:46:45.963 11:27:05 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:46:45.963 11:27:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:45.963 11:27:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:45.963 11:27:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:46:45.963 11:27:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:46.224 11:27:06 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:46:46.224 11:27:06 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:46:46.224 11:27:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:46:46.484 11:27:06 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:46:46.484 11:27:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:46:46.484 11:27:06 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:46:46.484 11:27:06 keyring_file -- keyring/file.sh@78 -- # jq length 00:46:46.484 11:27:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:46.745 11:27:06 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:46:46.745 11:27:06 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.FtKiIWJ1gf 00:46:46.745 11:27:06 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.FtKiIWJ1gf 00:46:46.745 11:27:06 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:46:46.745 11:27:06 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.FtKiIWJ1gf 00:46:46.745 11:27:06 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:46:46.745 11:27:06 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:46:46.745 11:27:06 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:46:46.745 11:27:06 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:46:46.745 11:27:06 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.FtKiIWJ1gf 00:46:46.745 11:27:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.FtKiIWJ1gf 00:46:46.745 [2024-10-09 11:27:06.735407] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.FtKiIWJ1gf': 0100660 00:46:46.745 [2024-10-09 11:27:06.735428] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:46:46.745 request: 00:46:46.745 { 00:46:46.745 "name": "key0", 00:46:46.745 "path": "/tmp/tmp.FtKiIWJ1gf", 00:46:46.745 "method": "keyring_file_add_key", 00:46:46.745 "req_id": 1 00:46:46.745 } 00:46:46.745 Got JSON-RPC error response 00:46:46.745 response: 00:46:46.745 { 00:46:46.745 "code": -1, 00:46:46.745 "message": "Operation not permitted" 00:46:46.745 } 00:46:47.006 11:27:06 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:46:47.006 11:27:06 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:46:47.006 11:27:06 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:46:47.006 11:27:06 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:46:47.006 11:27:06 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.FtKiIWJ1gf 00:46:47.006 11:27:06 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.FtKiIWJ1gf 00:46:47.006 11:27:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.FtKiIWJ1gf 00:46:47.006 11:27:06 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.FtKiIWJ1gf 00:46:47.006 11:27:06 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:46:47.006 11:27:06 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:46:47.006 11:27:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:47.006 11:27:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:47.006 11:27:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:47.006 11:27:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:47.267 11:27:07 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:46:47.267 11:27:07 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:47.267 11:27:07 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:46:47.267 11:27:07 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:47.267 11:27:07 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:46:47.267 11:27:07 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:46:47.267 11:27:07 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:46:47.267 11:27:07 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:46:47.267 11:27:07 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:47.267 11:27:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:47.267 [2024-10-09 11:27:07.259520] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.FtKiIWJ1gf': No such file or directory 00:46:47.267 [2024-10-09 11:27:07.259535] nvme_tcp.c:2609:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:46:47.267 [2024-10-09 11:27:07.259548] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:46:47.267 [2024-10-09 11:27:07.259554] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:46:47.267 [2024-10-09 11:27:07.259560] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:46:47.267 [2024-10-09 11:27:07.259569] bdev_nvme.c:6438:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:46:47.267 request: 00:46:47.267 { 00:46:47.267 "name": "nvme0", 00:46:47.267 "trtype": "tcp", 00:46:47.267 "traddr": "127.0.0.1", 00:46:47.267 "adrfam": "ipv4", 00:46:47.267 "trsvcid": "4420", 00:46:47.267 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:46:47.267 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:46:47.267 "prchk_reftag": false, 00:46:47.267 "prchk_guard": false, 00:46:47.267 "hdgst": false, 00:46:47.267 "ddgst": false, 00:46:47.267 "psk": "key0", 00:46:47.267 "allow_unrecognized_csi": false, 00:46:47.267 "method": "bdev_nvme_attach_controller", 00:46:47.267 "req_id": 1 00:46:47.267 } 00:46:47.267 Got JSON-RPC error response 00:46:47.267 response: 00:46:47.267 { 00:46:47.267 "code": -19, 00:46:47.267 "message": "No such device" 00:46:47.267 } 00:46:47.528 11:27:07 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:46:47.528 11:27:07 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:46:47.528 11:27:07 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:46:47.528 11:27:07 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:46:47.528 11:27:07 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:46:47.528 11:27:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:46:47.528 11:27:07 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:46:47.528 11:27:07 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:46:47.528 11:27:07 keyring_file -- keyring/common.sh@17 -- # name=key0 00:46:47.528 11:27:07 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:46:47.528 11:27:07 keyring_file -- keyring/common.sh@17 -- # digest=0 00:46:47.528 11:27:07 keyring_file -- keyring/common.sh@18 -- # mktemp 00:46:47.528 11:27:07 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.Fby16AlfVh 00:46:47.528 11:27:07 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:46:47.528 11:27:07 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:46:47.528 11:27:07 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:46:47.528 11:27:07 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:46:47.528 11:27:07 keyring_file -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:46:47.528 11:27:07 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:46:47.528 11:27:07 keyring_file -- nvmf/common.sh@731 -- # python - 00:46:47.528 11:27:07 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Fby16AlfVh 00:46:47.528 11:27:07 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.Fby16AlfVh 00:46:47.528 11:27:07 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.Fby16AlfVh 00:46:47.528 11:27:07 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Fby16AlfVh 00:46:47.528 11:27:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Fby16AlfVh 00:46:47.788 11:27:07 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:47.788 11:27:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:48.048 nvme0n1 00:46:48.048 11:27:07 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:46:48.048 11:27:07 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:46:48.048 11:27:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:48.048 11:27:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:48.048 11:27:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:48.048 11:27:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:48.309 11:27:08 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:46:48.309 11:27:08 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:46:48.309 11:27:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:46:48.309 11:27:08 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:46:48.309 11:27:08 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:46:48.309 11:27:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:48.309 11:27:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:48.309 11:27:08 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:48.570 11:27:08 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:46:48.570 11:27:08 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:46:48.570 11:27:08 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:46:48.570 11:27:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:48.570 11:27:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:48.570 11:27:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:48.570 11:27:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:48.831 11:27:08 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:46:48.831 11:27:08 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:46:48.831 11:27:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:46:48.831 11:27:08 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:46:48.831 11:27:08 keyring_file -- keyring/file.sh@105 -- # jq length 00:46:48.831 11:27:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:49.091 11:27:08 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:46:49.091 11:27:08 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Fby16AlfVh 00:46:49.091 11:27:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Fby16AlfVh 00:46:49.353 11:27:09 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.Uj6xVMgMzR 00:46:49.353 11:27:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.Uj6xVMgMzR 00:46:49.353 11:27:09 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:49.353 11:27:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:49.614 nvme0n1 00:46:49.614 11:27:09 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:46:49.614 11:27:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:46:49.874 11:27:09 keyring_file -- keyring/file.sh@113 -- # config='{ 00:46:49.874 "subsystems": [ 00:46:49.874 { 00:46:49.874 "subsystem": "keyring", 00:46:49.874 "config": [ 00:46:49.874 { 00:46:49.874 "method": "keyring_file_add_key", 00:46:49.874 "params": { 00:46:49.874 "name": "key0", 00:46:49.874 "path": "/tmp/tmp.Fby16AlfVh" 00:46:49.874 } 00:46:49.874 }, 00:46:49.874 { 00:46:49.874 "method": "keyring_file_add_key", 00:46:49.874 "params": { 00:46:49.874 "name": "key1", 00:46:49.874 "path": "/tmp/tmp.Uj6xVMgMzR" 00:46:49.874 } 00:46:49.874 } 00:46:49.874 ] 00:46:49.874 
}, 00:46:49.874 { 00:46:49.874 "subsystem": "iobuf", 00:46:49.874 "config": [ 00:46:49.874 { 00:46:49.874 "method": "iobuf_set_options", 00:46:49.874 "params": { 00:46:49.874 "small_pool_count": 8192, 00:46:49.874 "large_pool_count": 1024, 00:46:49.874 "small_bufsize": 8192, 00:46:49.874 "large_bufsize": 135168 00:46:49.874 } 00:46:49.874 } 00:46:49.874 ] 00:46:49.874 }, 00:46:49.874 { 00:46:49.874 "subsystem": "sock", 00:46:49.874 "config": [ 00:46:49.874 { 00:46:49.874 "method": "sock_set_default_impl", 00:46:49.874 "params": { 00:46:49.874 "impl_name": "posix" 00:46:49.874 } 00:46:49.874 }, 00:46:49.874 { 00:46:49.874 "method": "sock_impl_set_options", 00:46:49.874 "params": { 00:46:49.874 "impl_name": "ssl", 00:46:49.874 "recv_buf_size": 4096, 00:46:49.874 "send_buf_size": 4096, 00:46:49.874 "enable_recv_pipe": true, 00:46:49.874 "enable_quickack": false, 00:46:49.874 "enable_placement_id": 0, 00:46:49.874 "enable_zerocopy_send_server": true, 00:46:49.874 "enable_zerocopy_send_client": false, 00:46:49.874 "zerocopy_threshold": 0, 00:46:49.874 "tls_version": 0, 00:46:49.874 "enable_ktls": false 00:46:49.874 } 00:46:49.874 }, 00:46:49.874 { 00:46:49.874 "method": "sock_impl_set_options", 00:46:49.874 "params": { 00:46:49.874 "impl_name": "posix", 00:46:49.874 "recv_buf_size": 2097152, 00:46:49.874 "send_buf_size": 2097152, 00:46:49.874 "enable_recv_pipe": true, 00:46:49.874 "enable_quickack": false, 00:46:49.874 "enable_placement_id": 0, 00:46:49.874 "enable_zerocopy_send_server": true, 00:46:49.874 "enable_zerocopy_send_client": false, 00:46:49.874 "zerocopy_threshold": 0, 00:46:49.874 "tls_version": 0, 00:46:49.874 "enable_ktls": false 00:46:49.874 } 00:46:49.874 } 00:46:49.874 ] 00:46:49.874 }, 00:46:49.874 { 00:46:49.874 "subsystem": "vmd", 00:46:49.874 "config": [] 00:46:49.874 }, 00:46:49.874 { 00:46:49.874 "subsystem": "accel", 00:46:49.874 "config": [ 00:46:49.874 { 00:46:49.874 "method": "accel_set_options", 00:46:49.874 "params": { 00:46:49.874 "small_cache_size": 128, 00:46:49.874 "large_cache_size": 16, 00:46:49.874 "task_count": 2048, 00:46:49.874 "sequence_count": 2048, 00:46:49.874 "buf_count": 2048 00:46:49.874 } 00:46:49.874 } 00:46:49.874 ] 00:46:49.874 }, 00:46:49.874 { 00:46:49.874 "subsystem": "bdev", 00:46:49.874 "config": [ 00:46:49.874 { 00:46:49.874 "method": "bdev_set_options", 00:46:49.874 "params": { 00:46:49.874 "bdev_io_pool_size": 65535, 00:46:49.874 "bdev_io_cache_size": 256, 00:46:49.874 "bdev_auto_examine": true, 00:46:49.874 "iobuf_small_cache_size": 128, 00:46:49.874 "iobuf_large_cache_size": 16 00:46:49.874 } 00:46:49.874 }, 00:46:49.874 { 00:46:49.874 "method": "bdev_raid_set_options", 00:46:49.874 "params": { 00:46:49.874 "process_window_size_kb": 1024, 00:46:49.874 "process_max_bandwidth_mb_sec": 0 00:46:49.874 } 00:46:49.874 }, 00:46:49.874 { 00:46:49.874 "method": "bdev_iscsi_set_options", 00:46:49.874 "params": { 00:46:49.874 "timeout_sec": 30 00:46:49.874 } 00:46:49.874 }, 00:46:49.874 { 00:46:49.874 "method": "bdev_nvme_set_options", 00:46:49.874 "params": { 00:46:49.874 "action_on_timeout": "none", 00:46:49.874 "timeout_us": 0, 00:46:49.874 "timeout_admin_us": 0, 00:46:49.874 "keep_alive_timeout_ms": 10000, 00:46:49.874 "arbitration_burst": 0, 00:46:49.874 "low_priority_weight": 0, 00:46:49.874 "medium_priority_weight": 0, 00:46:49.874 "high_priority_weight": 0, 00:46:49.874 "nvme_adminq_poll_period_us": 10000, 00:46:49.874 "nvme_ioq_poll_period_us": 0, 00:46:49.875 "io_queue_requests": 512, 00:46:49.875 "delay_cmd_submit": true, 00:46:49.875 
"transport_retry_count": 4, 00:46:49.875 "bdev_retry_count": 3, 00:46:49.875 "transport_ack_timeout": 0, 00:46:49.875 "ctrlr_loss_timeout_sec": 0, 00:46:49.875 "reconnect_delay_sec": 0, 00:46:49.875 "fast_io_fail_timeout_sec": 0, 00:46:49.875 "disable_auto_failback": false, 00:46:49.875 "generate_uuids": false, 00:46:49.875 "transport_tos": 0, 00:46:49.875 "nvme_error_stat": false, 00:46:49.875 "rdma_srq_size": 0, 00:46:49.875 "io_path_stat": false, 00:46:49.875 "allow_accel_sequence": false, 00:46:49.875 "rdma_max_cq_size": 0, 00:46:49.875 "rdma_cm_event_timeout_ms": 0, 00:46:49.875 "dhchap_digests": [ 00:46:49.875 "sha256", 00:46:49.875 "sha384", 00:46:49.875 "sha512" 00:46:49.875 ], 00:46:49.875 "dhchap_dhgroups": [ 00:46:49.875 "null", 00:46:49.875 "ffdhe2048", 00:46:49.875 "ffdhe3072", 00:46:49.875 "ffdhe4096", 00:46:49.875 "ffdhe6144", 00:46:49.875 "ffdhe8192" 00:46:49.875 ] 00:46:49.875 } 00:46:49.875 }, 00:46:49.875 { 00:46:49.875 "method": "bdev_nvme_attach_controller", 00:46:49.875 "params": { 00:46:49.875 "name": "nvme0", 00:46:49.875 "trtype": "TCP", 00:46:49.875 "adrfam": "IPv4", 00:46:49.875 "traddr": "127.0.0.1", 00:46:49.875 "trsvcid": "4420", 00:46:49.875 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:46:49.875 "prchk_reftag": false, 00:46:49.875 "prchk_guard": false, 00:46:49.875 "ctrlr_loss_timeout_sec": 0, 00:46:49.875 "reconnect_delay_sec": 0, 00:46:49.875 "fast_io_fail_timeout_sec": 0, 00:46:49.875 "psk": "key0", 00:46:49.875 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:46:49.875 "hdgst": false, 00:46:49.875 "ddgst": false, 00:46:49.875 "multipath": "multipath" 00:46:49.875 } 00:46:49.875 }, 00:46:49.875 { 00:46:49.875 "method": "bdev_nvme_set_hotplug", 00:46:49.875 "params": { 00:46:49.875 "period_us": 100000, 00:46:49.875 "enable": false 00:46:49.875 } 00:46:49.875 }, 00:46:49.875 { 00:46:49.875 "method": "bdev_wait_for_examine" 00:46:49.875 } 00:46:49.875 ] 00:46:49.875 }, 00:46:49.875 { 00:46:49.875 "subsystem": "nbd", 00:46:49.875 "config": [] 00:46:49.875 } 00:46:49.875 ] 00:46:49.875 }' 00:46:49.875 11:27:09 keyring_file -- keyring/file.sh@115 -- # killprocess 2275391 00:46:49.875 11:27:09 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 2275391 ']' 00:46:49.875 11:27:09 keyring_file -- common/autotest_common.sh@954 -- # kill -0 2275391 00:46:49.875 11:27:09 keyring_file -- common/autotest_common.sh@955 -- # uname 00:46:49.875 11:27:09 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:46:49.875 11:27:09 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2275391 00:46:49.875 11:27:09 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:46:49.875 11:27:09 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:46:49.875 11:27:09 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2275391' 00:46:49.875 killing process with pid 2275391 00:46:49.875 11:27:09 keyring_file -- common/autotest_common.sh@969 -- # kill 2275391 00:46:49.875 Received shutdown signal, test time was about 1.000000 seconds 00:46:49.875 00:46:49.875 Latency(us) 00:46:49.875 [2024-10-09T09:27:09.877Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:49.875 [2024-10-09T09:27:09.877Z] =================================================================================================================== 00:46:49.875 [2024-10-09T09:27:09.877Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:46:49.875 11:27:09 keyring_file -- 
common/autotest_common.sh@974 -- # wait 2275391 00:46:50.136 11:27:09 keyring_file -- keyring/file.sh@118 -- # bperfpid=2277599 00:46:50.136 11:27:09 keyring_file -- keyring/file.sh@120 -- # waitforlisten 2277599 /var/tmp/bperf.sock 00:46:50.136 11:27:09 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 2277599 ']' 00:46:50.136 11:27:09 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:46:50.136 11:27:09 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:46:50.136 11:27:09 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:46:50.136 11:27:09 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:46:50.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:46:50.136 11:27:09 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:46:50.136 11:27:09 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:46:50.136 11:27:09 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:46:50.136 "subsystems": [ 00:46:50.136 { 00:46:50.136 "subsystem": "keyring", 00:46:50.136 "config": [ 00:46:50.136 { 00:46:50.136 "method": "keyring_file_add_key", 00:46:50.136 "params": { 00:46:50.136 "name": "key0", 00:46:50.136 "path": "/tmp/tmp.Fby16AlfVh" 00:46:50.136 } 00:46:50.136 }, 00:46:50.136 { 00:46:50.136 "method": "keyring_file_add_key", 00:46:50.136 "params": { 00:46:50.136 "name": "key1", 00:46:50.136 "path": "/tmp/tmp.Uj6xVMgMzR" 00:46:50.136 } 00:46:50.136 } 00:46:50.136 ] 00:46:50.136 }, 00:46:50.136 { 00:46:50.136 "subsystem": "iobuf", 00:46:50.136 "config": [ 00:46:50.136 { 00:46:50.136 "method": "iobuf_set_options", 00:46:50.136 "params": { 00:46:50.136 "small_pool_count": 8192, 00:46:50.136 "large_pool_count": 1024, 00:46:50.136 "small_bufsize": 8192, 00:46:50.136 "large_bufsize": 135168 00:46:50.136 } 00:46:50.136 } 00:46:50.136 ] 00:46:50.136 }, 00:46:50.136 { 00:46:50.136 "subsystem": "sock", 00:46:50.136 "config": [ 00:46:50.136 { 00:46:50.136 "method": "sock_set_default_impl", 00:46:50.136 "params": { 00:46:50.136 "impl_name": "posix" 00:46:50.136 } 00:46:50.136 }, 00:46:50.136 { 00:46:50.136 "method": "sock_impl_set_options", 00:46:50.136 "params": { 00:46:50.136 "impl_name": "ssl", 00:46:50.136 "recv_buf_size": 4096, 00:46:50.136 "send_buf_size": 4096, 00:46:50.136 "enable_recv_pipe": true, 00:46:50.136 "enable_quickack": false, 00:46:50.136 "enable_placement_id": 0, 00:46:50.136 "enable_zerocopy_send_server": true, 00:46:50.136 "enable_zerocopy_send_client": false, 00:46:50.136 "zerocopy_threshold": 0, 00:46:50.136 "tls_version": 0, 00:46:50.136 "enable_ktls": false 00:46:50.136 } 00:46:50.136 }, 00:46:50.136 { 00:46:50.136 "method": "sock_impl_set_options", 00:46:50.136 "params": { 00:46:50.136 "impl_name": "posix", 00:46:50.136 "recv_buf_size": 2097152, 00:46:50.136 "send_buf_size": 2097152, 00:46:50.136 "enable_recv_pipe": true, 00:46:50.136 "enable_quickack": false, 00:46:50.136 "enable_placement_id": 0, 00:46:50.136 "enable_zerocopy_send_server": true, 00:46:50.136 "enable_zerocopy_send_client": false, 00:46:50.136 "zerocopy_threshold": 0, 00:46:50.136 "tls_version": 0, 00:46:50.136 "enable_ktls": false 00:46:50.136 } 00:46:50.136 } 00:46:50.136 ] 00:46:50.136 }, 00:46:50.136 { 00:46:50.136 "subsystem": "vmd", 00:46:50.136 
"config": [] 00:46:50.136 }, 00:46:50.136 { 00:46:50.136 "subsystem": "accel", 00:46:50.136 "config": [ 00:46:50.136 { 00:46:50.136 "method": "accel_set_options", 00:46:50.136 "params": { 00:46:50.136 "small_cache_size": 128, 00:46:50.136 "large_cache_size": 16, 00:46:50.136 "task_count": 2048, 00:46:50.136 "sequence_count": 2048, 00:46:50.136 "buf_count": 2048 00:46:50.136 } 00:46:50.136 } 00:46:50.136 ] 00:46:50.136 }, 00:46:50.136 { 00:46:50.136 "subsystem": "bdev", 00:46:50.136 "config": [ 00:46:50.136 { 00:46:50.136 "method": "bdev_set_options", 00:46:50.136 "params": { 00:46:50.136 "bdev_io_pool_size": 65535, 00:46:50.136 "bdev_io_cache_size": 256, 00:46:50.136 "bdev_auto_examine": true, 00:46:50.136 "iobuf_small_cache_size": 128, 00:46:50.136 "iobuf_large_cache_size": 16 00:46:50.136 } 00:46:50.136 }, 00:46:50.136 { 00:46:50.136 "method": "bdev_raid_set_options", 00:46:50.136 "params": { 00:46:50.136 "process_window_size_kb": 1024, 00:46:50.136 "process_max_bandwidth_mb_sec": 0 00:46:50.136 } 00:46:50.136 }, 00:46:50.136 { 00:46:50.136 "method": "bdev_iscsi_set_options", 00:46:50.136 "params": { 00:46:50.136 "timeout_sec": 30 00:46:50.136 } 00:46:50.136 }, 00:46:50.136 { 00:46:50.136 "method": "bdev_nvme_set_options", 00:46:50.136 "params": { 00:46:50.136 "action_on_timeout": "none", 00:46:50.136 "timeout_us": 0, 00:46:50.136 "timeout_admin_us": 0, 00:46:50.136 "keep_alive_timeout_ms": 10000, 00:46:50.136 "arbitration_burst": 0, 00:46:50.136 "low_priority_weight": 0, 00:46:50.136 "medium_priority_weight": 0, 00:46:50.136 "high_priority_weight": 0, 00:46:50.136 "nvme_adminq_poll_period_us": 10000, 00:46:50.136 "nvme_ioq_poll_period_us": 0, 00:46:50.136 "io_queue_requests": 512, 00:46:50.136 "delay_cmd_submit": true, 00:46:50.136 "transport_retry_count": 4, 00:46:50.136 "bdev_retry_count": 3, 00:46:50.136 "transport_ack_timeout": 0, 00:46:50.136 "ctrlr_loss_timeout_sec": 0, 00:46:50.136 "reconnect_delay_sec": 0, 00:46:50.136 "fast_io_fail_timeout_sec": 0, 00:46:50.136 "disable_auto_failback": false, 00:46:50.136 "generate_uuids": false, 00:46:50.136 "transport_tos": 0, 00:46:50.136 "nvme_error_stat": false, 00:46:50.136 "rdma_srq_size": 0, 00:46:50.136 "io_path_stat": false, 00:46:50.136 "allow_accel_sequence": false, 00:46:50.136 "rdma_max_cq_size": 0, 00:46:50.136 "rdma_cm_event_timeout_ms": 0, 00:46:50.136 "dhchap_digests": [ 00:46:50.136 "sha256", 00:46:50.136 "sha384", 00:46:50.136 "sha512" 00:46:50.136 ], 00:46:50.136 "dhchap_dhgroups": [ 00:46:50.136 "null", 00:46:50.136 "ffdhe2048", 00:46:50.136 "ffdhe3072", 00:46:50.136 "ffdhe4096", 00:46:50.136 "ffdhe6144", 00:46:50.136 "ffdhe8192" 00:46:50.136 ] 00:46:50.136 } 00:46:50.136 }, 00:46:50.136 { 00:46:50.136 "method": "bdev_nvme_attach_controller", 00:46:50.136 "params": { 00:46:50.136 "name": "nvme0", 00:46:50.136 "trtype": "TCP", 00:46:50.136 "adrfam": "IPv4", 00:46:50.136 "traddr": "127.0.0.1", 00:46:50.136 "trsvcid": "4420", 00:46:50.136 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:46:50.136 "prchk_reftag": false, 00:46:50.136 "prchk_guard": false, 00:46:50.136 "ctrlr_loss_timeout_sec": 0, 00:46:50.136 "reconnect_delay_sec": 0, 00:46:50.136 "fast_io_fail_timeout_sec": 0, 00:46:50.136 "psk": "key0", 00:46:50.136 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:46:50.136 "hdgst": false, 00:46:50.136 "ddgst": false, 00:46:50.136 "multipath": "multipath" 00:46:50.136 } 00:46:50.136 }, 00:46:50.136 { 00:46:50.136 "method": "bdev_nvme_set_hotplug", 00:46:50.136 "params": { 00:46:50.136 "period_us": 100000, 00:46:50.137 "enable": false 
00:46:50.137 } 00:46:50.137 }, 00:46:50.137 { 00:46:50.137 "method": "bdev_wait_for_examine" 00:46:50.137 } 00:46:50.137 ] 00:46:50.137 }, 00:46:50.137 { 00:46:50.137 "subsystem": "nbd", 00:46:50.137 "config": [] 00:46:50.137 } 00:46:50.137 ] 00:46:50.137 }' 00:46:50.137 [2024-10-09 11:27:09.990623] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:46:50.137 [2024-10-09 11:27:09.990681] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2277599 ] 00:46:50.137 [2024-10-09 11:27:10.123335] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:46:50.396 [2024-10-09 11:27:10.169297] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:50.396 [2024-10-09 11:27:10.185685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:46:50.396 [2024-10-09 11:27:10.322863] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:46:50.965 11:27:10 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:46:50.965 11:27:10 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:46:50.965 11:27:10 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:46:50.965 11:27:10 keyring_file -- keyring/file.sh@121 -- # jq length 00:46:50.965 11:27:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:50.965 11:27:10 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:46:50.965 11:27:10 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:46:50.965 11:27:10 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:46:50.965 11:27:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:50.965 11:27:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:50.965 11:27:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:50.965 11:27:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:51.225 11:27:11 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:46:51.225 11:27:11 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:46:51.225 11:27:11 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:46:51.225 11:27:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:51.225 11:27:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:51.225 11:27:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:46:51.225 11:27:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:51.485 11:27:11 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:46:51.485 11:27:11 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:46:51.485 11:27:11 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:46:51.485 11:27:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:46:51.485 11:27:11 keyring_file -- keyring/file.sh@124 -- # [[ 
nvme0 == nvme0 ]] 00:46:51.485 11:27:11 keyring_file -- keyring/file.sh@1 -- # cleanup 00:46:51.485 11:27:11 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.Fby16AlfVh /tmp/tmp.Uj6xVMgMzR 00:46:51.485 11:27:11 keyring_file -- keyring/file.sh@20 -- # killprocess 2277599 00:46:51.485 11:27:11 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 2277599 ']' 00:46:51.485 11:27:11 keyring_file -- common/autotest_common.sh@954 -- # kill -0 2277599 00:46:51.485 11:27:11 keyring_file -- common/autotest_common.sh@955 -- # uname 00:46:51.485 11:27:11 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:46:51.485 11:27:11 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2277599 00:46:51.745 11:27:11 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:46:51.745 11:27:11 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:46:51.745 11:27:11 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2277599' 00:46:51.745 killing process with pid 2277599 00:46:51.745 11:27:11 keyring_file -- common/autotest_common.sh@969 -- # kill 2277599 00:46:51.745 Received shutdown signal, test time was about 1.000000 seconds 00:46:51.745 00:46:51.745 Latency(us) 00:46:51.745 [2024-10-09T09:27:11.747Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:51.745 [2024-10-09T09:27:11.747Z] =================================================================================================================== 00:46:51.745 [2024-10-09T09:27:11.747Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:46:51.745 11:27:11 keyring_file -- common/autotest_common.sh@974 -- # wait 2277599 00:46:51.745 11:27:11 keyring_file -- keyring/file.sh@21 -- # killprocess 2275251 00:46:51.745 11:27:11 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 2275251 ']' 00:46:51.745 11:27:11 keyring_file -- common/autotest_common.sh@954 -- # kill -0 2275251 00:46:51.745 11:27:11 keyring_file -- common/autotest_common.sh@955 -- # uname 00:46:51.745 11:27:11 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:46:51.745 11:27:11 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2275251 00:46:51.745 11:27:11 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:46:51.745 11:27:11 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:46:51.745 11:27:11 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2275251' 00:46:51.745 killing process with pid 2275251 00:46:51.745 11:27:11 keyring_file -- common/autotest_common.sh@969 -- # kill 2275251 00:46:51.745 11:27:11 keyring_file -- common/autotest_common.sh@974 -- # wait 2275251 00:46:52.005 00:46:52.005 real 0m11.716s 00:46:52.005 user 0m28.002s 00:46:52.005 sys 0m2.582s 00:46:52.005 11:27:11 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:46:52.005 11:27:11 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:46:52.005 ************************************ 00:46:52.005 END TEST keyring_file 00:46:52.005 ************************************ 00:46:52.005 11:27:11 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:46:52.005 11:27:11 -- spdk/autotest.sh@290 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:46:52.005 11:27:11 -- 
common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:46:52.005 11:27:11 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:46:52.005 11:27:11 -- common/autotest_common.sh@10 -- # set +x 00:46:52.005 ************************************ 00:46:52.005 START TEST keyring_linux 00:46:52.005 ************************************ 00:46:52.005 11:27:11 keyring_linux -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:46:52.005 Joined session keyring: 355174352 00:46:52.265 * Looking for test storage... 00:46:52.265 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:46:52.265 11:27:12 keyring_linux -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:46:52.265 11:27:12 keyring_linux -- common/autotest_common.sh@1691 -- # lcov --version 00:46:52.265 11:27:12 keyring_linux -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:46:52.265 11:27:12 keyring_linux -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:46:52.265 11:27:12 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:46:52.265 11:27:12 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:46:52.265 11:27:12 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:46:52.265 11:27:12 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:46:52.265 11:27:12 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:46:52.265 11:27:12 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:46:52.265 11:27:12 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:46:52.265 11:27:12 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:46:52.265 11:27:12 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:46:52.265 11:27:12 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:46:52.265 11:27:12 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:46:52.265 11:27:12 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:46:52.265 11:27:12 keyring_linux -- scripts/common.sh@345 -- # : 1 00:46:52.265 11:27:12 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:46:52.265 11:27:12 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:46:52.265 11:27:12 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:46:52.265 11:27:12 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:46:52.266 11:27:12 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:46:52.266 11:27:12 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:46:52.266 11:27:12 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:46:52.266 11:27:12 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:46:52.266 11:27:12 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:46:52.266 11:27:12 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:46:52.266 11:27:12 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:46:52.266 11:27:12 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:46:52.266 11:27:12 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:46:52.266 11:27:12 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:46:52.266 11:27:12 keyring_linux -- scripts/common.sh@368 -- # return 0 00:46:52.266 11:27:12 keyring_linux -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:46:52.266 11:27:12 keyring_linux -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:46:52.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:52.266 --rc genhtml_branch_coverage=1 00:46:52.266 --rc genhtml_function_coverage=1 00:46:52.266 --rc genhtml_legend=1 00:46:52.266 --rc geninfo_all_blocks=1 00:46:52.266 --rc geninfo_unexecuted_blocks=1 00:46:52.266 00:46:52.266 ' 00:46:52.266 11:27:12 keyring_linux -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:46:52.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:52.266 --rc genhtml_branch_coverage=1 00:46:52.266 --rc genhtml_function_coverage=1 00:46:52.266 --rc genhtml_legend=1 00:46:52.266 --rc geninfo_all_blocks=1 00:46:52.266 --rc geninfo_unexecuted_blocks=1 00:46:52.266 00:46:52.266 ' 00:46:52.266 11:27:12 keyring_linux -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:46:52.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:52.266 --rc genhtml_branch_coverage=1 00:46:52.266 --rc genhtml_function_coverage=1 00:46:52.266 --rc genhtml_legend=1 00:46:52.266 --rc geninfo_all_blocks=1 00:46:52.266 --rc geninfo_unexecuted_blocks=1 00:46:52.266 00:46:52.266 ' 00:46:52.266 11:27:12 keyring_linux -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:46:52.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:52.266 --rc genhtml_branch_coverage=1 00:46:52.266 --rc genhtml_function_coverage=1 00:46:52.266 --rc genhtml_legend=1 00:46:52.266 --rc geninfo_all_blocks=1 00:46:52.266 --rc geninfo_unexecuted_blocks=1 00:46:52.266 00:46:52.266 ' 00:46:52.266 11:27:12 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:46:52.266 11:27:12 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:46:52.266 11:27:12 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:46:52.266 11:27:12 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:46:52.266 11:27:12 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:46:52.266 11:27:12 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:46:52.266 11:27:12 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:46:52.266 11:27:12 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:46:52.266 11:27:12 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:46:52.266 11:27:12 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:46:52.266 11:27:12 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:46:52.266 11:27:12 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:46:52.266 11:27:12 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:46:52.266 11:27:12 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:46:52.266 11:27:12 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:46:52.266 11:27:12 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:46:52.266 11:27:12 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:46:52.266 11:27:12 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:46:52.266 11:27:12 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:46:52.266 11:27:12 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:46:52.266 11:27:12 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:46:52.266 11:27:12 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:46:52.266 11:27:12 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:46:52.266 11:27:12 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:46:52.266 11:27:12 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:52.266 11:27:12 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:52.266 11:27:12 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:52.266 11:27:12 keyring_linux -- paths/export.sh@5 -- # export PATH 00:46:52.266 11:27:12 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:46:52.266 11:27:12 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:46:52.266 11:27:12 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:46:52.266 11:27:12 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:46:52.266 11:27:12 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:46:52.266 11:27:12 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:46:52.266 11:27:12 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:46:52.266 11:27:12 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:46:52.266 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:46:52.266 11:27:12 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:46:52.266 11:27:12 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:46:52.266 11:27:12 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:46:52.266 11:27:12 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:46:52.266 11:27:12 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:46:52.266 11:27:12 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:46:52.266 11:27:12 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:46:52.266 11:27:12 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:46:52.266 11:27:12 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:46:52.266 11:27:12 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:46:52.266 11:27:12 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:46:52.266 11:27:12 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:46:52.266 11:27:12 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:46:52.266 11:27:12 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:46:52.266 11:27:12 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:46:52.266 11:27:12 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:46:52.266 11:27:12 keyring_linux -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:46:52.266 11:27:12 keyring_linux -- nvmf/common.sh@728 -- # local prefix key digest 00:46:52.266 11:27:12 keyring_linux -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:46:52.266 11:27:12 keyring_linux -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:46:52.266 11:27:12 keyring_linux -- nvmf/common.sh@730 -- # digest=0 00:46:52.266 11:27:12 keyring_linux -- nvmf/common.sh@731 -- # python - 00:46:52.266 11:27:12 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:46:52.266 11:27:12 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:46:52.266 /tmp/:spdk-test:key0 00:46:52.266 11:27:12 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:46:52.266 11:27:12 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:46:52.266 11:27:12 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:46:52.266 11:27:12 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:46:52.266 11:27:12 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:46:52.266 11:27:12 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:46:52.266 
11:27:12 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:46:52.266 11:27:12 keyring_linux -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:46:52.266 11:27:12 keyring_linux -- nvmf/common.sh@728 -- # local prefix key digest 00:46:52.266 11:27:12 keyring_linux -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:46:52.266 11:27:12 keyring_linux -- nvmf/common.sh@730 -- # key=112233445566778899aabbccddeeff00 00:46:52.266 11:27:12 keyring_linux -- nvmf/common.sh@730 -- # digest=0 00:46:52.266 11:27:12 keyring_linux -- nvmf/common.sh@731 -- # python - 00:46:52.527 11:27:12 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:46:52.527 11:27:12 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:46:52.527 /tmp/:spdk-test:key1 00:46:52.527 11:27:12 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=2278112 00:46:52.527 11:27:12 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 2278112 00:46:52.527 11:27:12 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:46:52.527 11:27:12 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 2278112 ']' 00:46:52.527 11:27:12 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:52.527 11:27:12 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:46:52.527 11:27:12 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:46:52.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:52.527 11:27:12 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:46:52.527 11:27:12 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:46:52.527 [2024-10-09 11:27:12.343274] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:46:52.527 [2024-10-09 11:27:12.343330] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2278112 ] 00:46:52.527 [2024-10-09 11:27:12.472824] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
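The two PSK files prepared above (/tmp/:spdk-test:key0 and /tmp/:spdk-test:key1) hold the NVMe/TCP interchange format "NVMeTLSkey-1:<digest>:<base64(key || CRC-32)>:", produced by the format_interchange_psk/format_key helpers through the inline python call traced at nvmf/common.sh@731. A minimal sketch of that step, assuming the little-endian CRC-32 framing used by SPDK's test helper (the function name below is illustrative, not the helper's real name):

# Sketch only: rebuild the NVMeTLSkey-1 payload written to /tmp/:spdk-test:key0.
# Assumption: CRC-32 of the raw key bytes, appended little-endian, then base64.
format_interchange_psk_sketch() {
  local key=$1 digest=$2
  python3 - "$key" "$digest" <<'PY'
import base64, sys, zlib
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, "little")
print("NVMeTLSkey-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(key + crc).decode()))
PY
}
format_interchange_psk_sketch 00112233445566778899aabbccddeeff 0
# should print the NVMeTLSkey-1:00:MDAxMTIy...ZmZwJEiQ: value echoed above for key0
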
00:46:52.527 [2024-10-09 11:27:12.503606] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:52.527 [2024-10-09 11:27:12.521652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:46:53.467 11:27:13 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:46:53.467 11:27:13 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:46:53.467 11:27:13 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:46:53.467 11:27:13 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:53.467 11:27:13 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:46:53.467 [2024-10-09 11:27:13.117566] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:46:53.467 null0 00:46:53.467 [2024-10-09 11:27:13.149529] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:46:53.467 [2024-10-09 11:27:13.149935] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:46:53.467 11:27:13 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:53.467 11:27:13 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:46:53.467 795427528 00:46:53.467 11:27:13 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:46:53.467 944490365 00:46:53.467 11:27:13 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=2278303 00:46:53.467 11:27:13 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 2278303 /var/tmp/bperf.sock 00:46:53.467 11:27:13 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:46:53.467 11:27:13 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 2278303 ']' 00:46:53.467 11:27:13 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:46:53.467 11:27:13 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:46:53.467 11:27:13 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:46:53.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:46:53.467 11:27:13 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:46:53.467 11:27:13 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:46:53.467 [2024-10-09 11:27:13.225614] Starting SPDK v25.01-pre git sha1 a29d7fdf9 / DPDK 24.11.0-rc0 initialization... 00:46:53.467 [2024-10-09 11:27:13.225665] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2278303 ] 00:46:53.467 [2024-10-09 11:27:13.355376] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
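Condensed, the Linux-keyring flow exercised around this point looks as follows; the key names, socket path, and serials (795427528, 944490365) are the ones from this run, rpc.py stands for the full scripts/rpc.py path used in the trace, and the base64 payloads are elided:

keyctl add user :spdk-test:key0 "NVMeTLSkey-1:00:MDAxMTIy...:" @s   # session keyring; prints serial 795427528
keyctl add user :spdk-test:key1 "NVMeTLSkey-1:00:MTEyMjMz...:" @s   # prints serial 944490365
rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable    # let bdevperf resolve keys by name
rpc.py -s /var/tmp/bperf.sock framework_start_init
rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0
keyctl search @s user :spdk-test:key0                               # name -> serial lookup, as in check_keys
keyctl print 795427528                                              # dump the payload for comparison
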
00:46:53.467 [2024-10-09 11:27:13.400732] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:53.467 [2024-10-09 11:27:13.417250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:46:54.038 11:27:14 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:46:54.038 11:27:14 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:46:54.038 11:27:14 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:46:54.038 11:27:14 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:46:54.297 11:27:14 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:46:54.297 11:27:14 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:46:54.557 11:27:14 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:46:54.557 11:27:14 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:46:54.557 [2024-10-09 11:27:14.525349] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:46:54.818 nvme0n1 00:46:54.818 11:27:14 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:46:54.818 11:27:14 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:46:54.818 11:27:14 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:46:54.818 11:27:14 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:46:54.818 11:27:14 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:54.818 11:27:14 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:46:54.818 11:27:14 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:46:54.818 11:27:14 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:46:54.818 11:27:14 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:46:54.818 11:27:14 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:46:54.818 11:27:14 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:54.818 11:27:14 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:46:54.818 11:27:14 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:55.078 11:27:14 keyring_linux -- keyring/linux.sh@25 -- # sn=795427528 00:46:55.078 11:27:14 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:46:55.078 11:27:14 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:46:55.078 11:27:14 keyring_linux -- keyring/linux.sh@26 -- # [[ 795427528 == \7\9\5\4\2\7\5\2\8 ]] 00:46:55.078 11:27:14 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 795427528 00:46:55.078 11:27:14 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == 
\N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:46:55.078 11:27:14 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:46:55.078 Running I/O for 1 seconds... 00:46:56.473 16938.00 IOPS, 66.16 MiB/s 00:46:56.473 Latency(us) 00:46:56.473 [2024-10-09T09:27:16.475Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:56.473 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:46:56.473 nvme0n1 : 1.01 16937.81 66.16 0.00 0.00 7525.28 4926.70 11659.85 00:46:56.473 [2024-10-09T09:27:16.475Z] =================================================================================================================== 00:46:56.473 [2024-10-09T09:27:16.475Z] Total : 16937.81 66.16 0.00 0.00 7525.28 4926.70 11659.85 00:46:56.473 { 00:46:56.473 "results": [ 00:46:56.473 { 00:46:56.473 "job": "nvme0n1", 00:46:56.473 "core_mask": "0x2", 00:46:56.473 "workload": "randread", 00:46:56.473 "status": "finished", 00:46:56.473 "queue_depth": 128, 00:46:56.473 "io_size": 4096, 00:46:56.473 "runtime": 1.007568, 00:46:56.473 "iops": 16937.814618963683, 00:46:56.473 "mibps": 66.16333835532689, 00:46:56.473 "io_failed": 0, 00:46:56.473 "io_timeout": 0, 00:46:56.473 "avg_latency_us": 7525.275446215786, 00:46:56.473 "min_latency_us": 4926.695623120615, 00:46:56.473 "max_latency_us": 11659.846308052121 00:46:56.473 } 00:46:56.473 ], 00:46:56.473 "core_count": 1 00:46:56.473 } 00:46:56.473 11:27:16 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:46:56.473 11:27:16 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:46:56.473 11:27:16 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:46:56.473 11:27:16 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:46:56.473 11:27:16 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:46:56.473 11:27:16 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:46:56.473 11:27:16 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:46:56.473 11:27:16 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:56.473 11:27:16 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:46:56.473 11:27:16 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:46:56.473 11:27:16 keyring_linux -- keyring/linux.sh@23 -- # return 00:46:56.473 11:27:16 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:46:56.473 11:27:16 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:46:56.473 11:27:16 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:46:56.473 11:27:16 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:46:56.473 11:27:16 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:46:56.473 11:27:16 keyring_linux -- common/autotest_common.sh@642 -- # type 
-t bperf_cmd 00:46:56.473 11:27:16 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:46:56.474 11:27:16 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:46:56.474 11:27:16 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:46:56.734 [2024-10-09 11:27:16.621654] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:46:56.734 [2024-10-09 11:27:16.622580] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8ed50 (107): Transport endpoint is not connected 00:46:56.734 [2024-10-09 11:27:16.623574] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8ed50 (9): Bad file descriptor 00:46:56.734 [2024-10-09 11:27:16.624573] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:46:56.734 [2024-10-09 11:27:16.624581] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:46:56.734 [2024-10-09 11:27:16.624587] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:46:56.734 [2024-10-09 11:27:16.624593] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
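The errors above come from the attach at linux.sh@84, which is run with :spdk-test:key1 under the NOT wrapper and is expected to fail; the JSON-RPC request and error response are traced next. A simplified paraphrase of NOT, inferred from the common/autotest_common.sh xtrace around this step (the real helper also screens exit codes above 128 for crash signals and validates its argument first):

NOT() {
  local es=0
  "$@" || es=$?
  (( es != 0 ))   # invert: succeed only when the wrapped command failed
}
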
00:46:56.734 request: 00:46:56.734 { 00:46:56.734 "name": "nvme0", 00:46:56.734 "trtype": "tcp", 00:46:56.734 "traddr": "127.0.0.1", 00:46:56.734 "adrfam": "ipv4", 00:46:56.734 "trsvcid": "4420", 00:46:56.734 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:46:56.734 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:46:56.734 "prchk_reftag": false, 00:46:56.734 "prchk_guard": false, 00:46:56.734 "hdgst": false, 00:46:56.734 "ddgst": false, 00:46:56.734 "psk": ":spdk-test:key1", 00:46:56.734 "allow_unrecognized_csi": false, 00:46:56.734 "method": "bdev_nvme_attach_controller", 00:46:56.734 "req_id": 1 00:46:56.734 } 00:46:56.734 Got JSON-RPC error response 00:46:56.734 response: 00:46:56.734 { 00:46:56.734 "code": -5, 00:46:56.734 "message": "Input/output error" 00:46:56.734 } 00:46:56.734 11:27:16 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:46:56.734 11:27:16 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:46:56.734 11:27:16 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:46:56.734 11:27:16 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:46:56.734 11:27:16 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:46:56.734 11:27:16 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:46:56.734 11:27:16 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:46:56.734 11:27:16 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:46:56.734 11:27:16 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:46:56.734 11:27:16 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:46:56.734 11:27:16 keyring_linux -- keyring/linux.sh@33 -- # sn=795427528 00:46:56.734 11:27:16 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 795427528 00:46:56.734 1 links removed 00:46:56.734 11:27:16 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:46:56.735 11:27:16 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:46:56.735 11:27:16 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:46:56.735 11:27:16 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:46:56.735 11:27:16 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:46:56.735 11:27:16 keyring_linux -- keyring/linux.sh@33 -- # sn=944490365 00:46:56.735 11:27:16 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 944490365 00:46:56.735 1 links removed 00:46:56.735 11:27:16 keyring_linux -- keyring/linux.sh@41 -- # killprocess 2278303 00:46:56.735 11:27:16 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 2278303 ']' 00:46:56.735 11:27:16 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 2278303 00:46:56.735 11:27:16 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:46:56.735 11:27:16 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:46:56.735 11:27:16 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2278303 00:46:56.735 11:27:16 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:46:56.735 11:27:16 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:46:56.735 11:27:16 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2278303' 00:46:56.735 killing process with pid 2278303 00:46:56.735 11:27:16 keyring_linux -- common/autotest_common.sh@969 -- # kill 2278303 00:46:56.735 Received shutdown signal, test time was about 1.000000 seconds 00:46:56.735 00:46:56.735 
Latency(us) 00:46:56.735 [2024-10-09T09:27:16.737Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:56.735 [2024-10-09T09:27:16.737Z] =================================================================================================================== 00:46:56.735 [2024-10-09T09:27:16.737Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:46:56.735 11:27:16 keyring_linux -- common/autotest_common.sh@974 -- # wait 2278303 00:46:56.995 11:27:16 keyring_linux -- keyring/linux.sh@42 -- # killprocess 2278112 00:46:56.995 11:27:16 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 2278112 ']' 00:46:56.995 11:27:16 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 2278112 00:46:56.995 11:27:16 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:46:56.995 11:27:16 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:46:56.995 11:27:16 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2278112 00:46:56.995 11:27:16 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:46:56.995 11:27:16 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:46:56.995 11:27:16 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2278112' 00:46:56.995 killing process with pid 2278112 00:46:56.995 11:27:16 keyring_linux -- common/autotest_common.sh@969 -- # kill 2278112 00:46:56.995 11:27:16 keyring_linux -- common/autotest_common.sh@974 -- # wait 2278112 00:46:57.255 00:46:57.255 real 0m5.116s 00:46:57.255 user 0m9.314s 00:46:57.255 sys 0m1.387s 00:46:57.255 11:27:17 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable 00:46:57.255 11:27:17 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:46:57.255 ************************************ 00:46:57.255 END TEST keyring_linux 00:46:57.255 ************************************ 00:46:57.255 11:27:17 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:46:57.255 11:27:17 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:46:57.255 11:27:17 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:46:57.255 11:27:17 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:46:57.255 11:27:17 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:46:57.255 11:27:17 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:46:57.255 11:27:17 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:46:57.255 11:27:17 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:46:57.255 11:27:17 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:46:57.255 11:27:17 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:46:57.255 11:27:17 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:46:57.255 11:27:17 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:46:57.255 11:27:17 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:46:57.255 11:27:17 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:46:57.255 11:27:17 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:46:57.255 11:27:17 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:46:57.255 11:27:17 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:46:57.255 11:27:17 -- common/autotest_common.sh@724 -- # xtrace_disable 00:46:57.255 11:27:17 -- common/autotest_common.sh@10 -- # set +x 00:46:57.255 11:27:17 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:46:57.255 11:27:17 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:46:57.255 11:27:17 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:46:57.255 11:27:17 -- common/autotest_common.sh@10 -- # set +x 00:47:05.387 INFO: APP EXITING 
00:47:05.387 INFO: killing all VMs
00:47:05.387 INFO: killing vhost app
00:47:05.387 INFO: EXIT DONE
00:47:07.927 0000:80:01.6 (8086 0b00): Already using the ioatdma driver
00:47:07.927 0000:80:01.7 (8086 0b00): Already using the ioatdma driver
00:47:07.927 0000:80:01.4 (8086 0b00): Already using the ioatdma driver
00:47:07.927 0000:80:01.5 (8086 0b00): Already using the ioatdma driver
00:47:07.927 0000:80:01.2 (8086 0b00): Already using the ioatdma driver
00:47:07.927 0000:80:01.3 (8086 0b00): Already using the ioatdma driver
00:47:07.927 0000:80:01.0 (8086 0b00): Already using the ioatdma driver
00:47:07.927 0000:80:01.1 (8086 0b00): Already using the ioatdma driver
00:47:07.927 0000:65:00.0 (144d a80a): Already using the nvme driver
00:47:07.927 0000:00:01.6 (8086 0b00): Already using the ioatdma driver
00:47:07.927 0000:00:01.7 (8086 0b00): Already using the ioatdma driver
00:47:07.927 0000:00:01.4 (8086 0b00): Already using the ioatdma driver
00:47:07.927 0000:00:01.5 (8086 0b00): Already using the ioatdma driver
00:47:07.927 0000:00:01.2 (8086 0b00): Already using the ioatdma driver
00:47:07.927 0000:00:01.3 (8086 0b00): Already using the ioatdma driver
00:47:07.927 0000:00:01.0 (8086 0b00): Already using the ioatdma driver
00:47:07.927 0000:00:01.1 (8086 0b00): Already using the ioatdma driver
00:47:12.131 Cleaning
00:47:12.131 Removing: /var/run/dpdk/spdk0/config
00:47:12.131 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:47:12.131 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:47:12.131 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:47:12.131 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:47:12.131 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0
00:47:12.131 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1
00:47:12.131 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2
00:47:12.131 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3
00:47:12.131 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:47:12.131 Removing: /var/run/dpdk/spdk0/hugepage_info
00:47:12.131 Removing: /var/run/dpdk/spdk1/config
00:47:12.131 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0
00:47:12.131 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1
00:47:12.131 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2
00:47:12.131 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3
00:47:12.131 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0
00:47:12.131 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1
00:47:12.131 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2
00:47:12.131 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3
00:47:12.131 Removing: /var/run/dpdk/spdk1/fbarray_memzone
00:47:12.131 Removing: /var/run/dpdk/spdk1/hugepage_info
00:47:12.131 Removing: /var/run/dpdk/spdk2/config
00:47:12.131 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:47:12.131 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:47:12.131 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:47:12.131 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:47:12.131 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0
00:47:12.131 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1
00:47:12.131 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2
00:47:12.131 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3
00:47:12.131 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:47:12.131 Removing: /var/run/dpdk/spdk2/hugepage_info
00:47:12.131 Removing: /var/run/dpdk/spdk3/config
00:47:12.131 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:47:12.131 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:47:12.131 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:47:12.131 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:47:12.131 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0
00:47:12.131 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1
00:47:12.131 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2
00:47:12.131 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3
00:47:12.131 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:47:12.131 Removing: /var/run/dpdk/spdk3/hugepage_info
00:47:12.131 Removing: /var/run/dpdk/spdk4/config
00:47:12.131 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:47:12.131 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:47:12.131 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:47:12.131 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:47:12.131 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0
00:47:12.131 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1
00:47:12.131 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2
00:47:12.131 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3
00:47:12.131 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:47:12.131 Removing: /var/run/dpdk/spdk4/hugepage_info
00:47:12.131 Removing: /dev/shm/bdev_svc_trace.1
00:47:12.131 Removing: /dev/shm/nvmf_trace.0
00:47:12.131 Removing: /dev/shm/spdk_tgt_trace.pid1594425
00:47:12.131 Removing: /var/run/dpdk/spdk0
00:47:12.131 Removing: /var/run/dpdk/spdk1
00:47:12.131 Removing: /var/run/dpdk/spdk2
00:47:12.131 Removing: /var/run/dpdk/spdk3
00:47:12.131 Removing: /var/run/dpdk/spdk4
00:47:12.131 Removing: /var/run/dpdk/spdk_pid1592931
00:47:12.131 Removing: /var/run/dpdk/spdk_pid1594425
00:47:12.131 Removing: /var/run/dpdk/spdk_pid1595267
00:47:12.131 Removing: /var/run/dpdk/spdk_pid1596316
00:47:12.131 Removing: /var/run/dpdk/spdk_pid1596652
00:47:12.131 Removing: /var/run/dpdk/spdk_pid1597717
00:47:12.131 Removing: /var/run/dpdk/spdk_pid1597872
00:47:12.131 Removing: /var/run/dpdk/spdk_pid1598194
00:47:12.131 Removing: /var/run/dpdk/spdk_pid1599439
00:47:12.131 Removing: /var/run/dpdk/spdk_pid1600227
00:47:12.131 Removing: /var/run/dpdk/spdk_pid1600843
00:47:12.131 Removing: /var/run/dpdk/spdk_pid1601456
00:47:12.131 Removing: /var/run/dpdk/spdk_pid1601834
00:47:12.131 Removing: /var/run/dpdk/spdk_pid1602152
00:47:12.131 Removing: /var/run/dpdk/spdk_pid1602337
00:47:12.131 Removing: /var/run/dpdk/spdk_pid1602672
00:47:12.131 Removing: /var/run/dpdk/spdk_pid1603060
00:47:12.131 Removing: /var/run/dpdk/spdk_pid1604207
00:47:12.131 Removing: /var/run/dpdk/spdk_pid1607716
00:47:12.131 Removing: /var/run/dpdk/spdk_pid1608090
00:47:12.131 Removing: /var/run/dpdk/spdk_pid1608460
00:47:12.131 Removing: /var/run/dpdk/spdk_pid1608730
00:47:12.131 Removing: /var/run/dpdk/spdk_pid1609163
00:47:12.131 Removing: /var/run/dpdk/spdk_pid1609290
00:47:12.131 Removing: /var/run/dpdk/spdk_pid1609870
00:47:12.131 Removing: /var/run/dpdk/spdk_pid1609888
00:47:12.131 Removing: /var/run/dpdk/spdk_pid1610253
00:47:12.131 Removing: /var/run/dpdk/spdk_pid1610535
00:47:12.131 Removing: /var/run/dpdk/spdk_pid1610624
00:47:12.131 Removing: /var/run/dpdk/spdk_pid1610957
00:47:12.131 Removing: /var/run/dpdk/spdk_pid1611407
00:47:12.131 Removing: /var/run/dpdk/spdk_pid1611762
00:47:12.131 Removing: /var/run/dpdk/spdk_pid1612162
00:47:12.131 Removing: /var/run/dpdk/spdk_pid1616774
00:47:12.131 Removing: /var/run/dpdk/spdk_pid1622220
00:47:12.131 Removing: /var/run/dpdk/spdk_pid1634462
00:47:12.131 Removing: /var/run/dpdk/spdk_pid1635351
00:47:12.131 Removing: /var/run/dpdk/spdk_pid1640524
00:47:12.131 Removing: /var/run/dpdk/spdk_pid1640992
00:47:12.131 Removing: /var/run/dpdk/spdk_pid1646321
00:47:12.131 Removing: /var/run/dpdk/spdk_pid1653989
00:47:12.131 Removing: /var/run/dpdk/spdk_pid1657115
00:47:12.131 Removing: /var/run/dpdk/spdk_pid1669782
00:47:12.131 Removing: /var/run/dpdk/spdk_pid1680968
00:47:12.131 Removing: /var/run/dpdk/spdk_pid1682985
00:47:12.131 Removing: /var/run/dpdk/spdk_pid1684214
00:47:12.131 Removing: /var/run/dpdk/spdk_pid1705327
00:47:12.131 Removing: /var/run/dpdk/spdk_pid1710895
00:47:12.131 Removing: /var/run/dpdk/spdk_pid1811378
00:47:12.131 Removing: /var/run/dpdk/spdk_pid1817830
00:47:12.131 Removing: /var/run/dpdk/spdk_pid1825062
00:47:12.131 Removing: /var/run/dpdk/spdk_pid1832451
00:47:12.131 Removing: /var/run/dpdk/spdk_pid1832576
00:47:12.131 Removing: /var/run/dpdk/spdk_pid1833629
00:47:12.131 Removing: /var/run/dpdk/spdk_pid1834682
00:47:12.131 Removing: /var/run/dpdk/spdk_pid1835690
00:47:12.131 Removing: /var/run/dpdk/spdk_pid1836363
00:47:12.131 Removing: /var/run/dpdk/spdk_pid1836373
00:47:12.131 Removing: /var/run/dpdk/spdk_pid1836705
00:47:12.131 Removing: /var/run/dpdk/spdk_pid1836718
00:47:12.131 Removing: /var/run/dpdk/spdk_pid1836806
00:47:12.131 Removing: /var/run/dpdk/spdk_pid1837893
00:47:12.131 Removing: /var/run/dpdk/spdk_pid1838951
00:47:12.131 Removing: /var/run/dpdk/spdk_pid1840047
00:47:12.131 Removing: /var/run/dpdk/spdk_pid1840723
00:47:12.131 Removing: /var/run/dpdk/spdk_pid1840727
00:47:12.131 Removing: /var/run/dpdk/spdk_pid1841066
00:47:12.131 Removing: /var/run/dpdk/spdk_pid1842565
00:47:12.131 Removing: /var/run/dpdk/spdk_pid1843995
00:47:12.131 Removing: /var/run/dpdk/spdk_pid1854513
00:47:12.392 Removing: /var/run/dpdk/spdk_pid1890593
00:47:12.392 Removing: /var/run/dpdk/spdk_pid1896148
00:47:12.392 Removing: /var/run/dpdk/spdk_pid1898113
00:47:12.392 Removing: /var/run/dpdk/spdk_pid1900174
00:47:12.392 Removing: /var/run/dpdk/spdk_pid1900518
00:47:12.392 Removing: /var/run/dpdk/spdk_pid1900856
00:47:12.392 Removing: /var/run/dpdk/spdk_pid1901177
00:47:12.392 Removing: /var/run/dpdk/spdk_pid1901918
00:47:12.392 Removing: /var/run/dpdk/spdk_pid1904039
00:47:12.392 Removing: /var/run/dpdk/spdk_pid1905344
00:47:12.392 Removing: /var/run/dpdk/spdk_pid1905895
00:47:12.392 Removing: /var/run/dpdk/spdk_pid1908457
00:47:12.392 Removing: /var/run/dpdk/spdk_pid1909186
00:47:12.392 Removing: /var/run/dpdk/spdk_pid1910143
00:47:12.392 Removing: /var/run/dpdk/spdk_pid1915090
00:47:12.392 Removing: /var/run/dpdk/spdk_pid1921794
00:47:12.392 Removing: /var/run/dpdk/spdk_pid1921796
00:47:12.392 Removing: /var/run/dpdk/spdk_pid1921797
00:47:12.392 Removing: /var/run/dpdk/spdk_pid1926580
00:47:12.392 Removing: /var/run/dpdk/spdk_pid1931523
00:47:12.392 Removing: /var/run/dpdk/spdk_pid1937803
00:47:12.392 Removing: /var/run/dpdk/spdk_pid1982299
00:47:12.392 Removing: /var/run/dpdk/spdk_pid1987115
00:47:12.392 Removing: /var/run/dpdk/spdk_pid1994436
00:47:12.392 Removing: /var/run/dpdk/spdk_pid1996005
00:47:12.392 Removing: /var/run/dpdk/spdk_pid1997743
00:47:12.392 Removing: /var/run/dpdk/spdk_pid1999376
00:47:12.392 Removing: /var/run/dpdk/spdk_pid2005081
00:47:12.392 Removing: /var/run/dpdk/spdk_pid2010206
00:47:12.392 Removing: /var/run/dpdk/spdk_pid2019425
00:47:12.392 Removing: /var/run/dpdk/spdk_pid2019429
00:47:12.392 Removing: /var/run/dpdk/spdk_pid2024540
00:47:12.392 Removing: /var/run/dpdk/spdk_pid2024877
00:47:12.392 Removing: /var/run/dpdk/spdk_pid2025203
00:47:12.392 Removing: /var/run/dpdk/spdk_pid2025659
00:47:12.392 Removing: /var/run/dpdk/spdk_pid2025836
00:47:12.392 Removing: /var/run/dpdk/spdk_pid2027594
00:47:12.392 Removing: /var/run/dpdk/spdk_pid2029649
00:47:12.392 Removing: /var/run/dpdk/spdk_pid2031531
00:47:12.392 Removing: /var/run/dpdk/spdk_pid2033470
00:47:12.392 Removing: /var/run/dpdk/spdk_pid2035468
00:47:12.392 Removing: /var/run/dpdk/spdk_pid2037465
00:47:12.392 Removing: /var/run/dpdk/spdk_pid2044903
00:47:12.392 Removing: /var/run/dpdk/spdk_pid2045728
00:47:12.392 Removing: /var/run/dpdk/spdk_pid2046925
00:47:12.392 Removing: /var/run/dpdk/spdk_pid2048101
00:47:12.392 Removing: /var/run/dpdk/spdk_pid2054439
00:47:12.392 Removing: /var/run/dpdk/spdk_pid2057691
00:47:12.392 Removing: /var/run/dpdk/spdk_pid2064142
00:47:12.392 Removing: /var/run/dpdk/spdk_pid2071122
00:47:12.392 Removing: /var/run/dpdk/spdk_pid2081607
00:47:12.392 Removing: /var/run/dpdk/spdk_pid2090383
00:47:12.392 Removing: /var/run/dpdk/spdk_pid2090385
00:47:12.392 Removing: /var/run/dpdk/spdk_pid2113836
00:47:12.392 Removing: /var/run/dpdk/spdk_pid2114681
00:47:12.392 Removing: /var/run/dpdk/spdk_pid2115465
00:47:12.392 Removing: /var/run/dpdk/spdk_pid2116166
00:47:12.392 Removing: /var/run/dpdk/spdk_pid2117189
00:47:12.392 Removing: /var/run/dpdk/spdk_pid2117911
00:47:12.392 Removing: /var/run/dpdk/spdk_pid2118596
00:47:12.392 Removing: /var/run/dpdk/spdk_pid2119301
00:47:12.392 Removing: /var/run/dpdk/spdk_pid2125079
00:47:12.392 Removing: /var/run/dpdk/spdk_pid2125351
00:47:12.653 Removing: /var/run/dpdk/spdk_pid2132551
00:47:12.653 Removing: /var/run/dpdk/spdk_pid2132766
00:47:12.653 Removing: /var/run/dpdk/spdk_pid2139380
00:47:12.653 Removing: /var/run/dpdk/spdk_pid2144577
00:47:12.653 Removing: /var/run/dpdk/spdk_pid2156124
00:47:12.653 Removing: /var/run/dpdk/spdk_pid2156795
00:47:12.653 Removing: /var/run/dpdk/spdk_pid2161919
00:47:12.653 Removing: /var/run/dpdk/spdk_pid2162262
00:47:12.653 Removing: /var/run/dpdk/spdk_pid2167366
00:47:12.653 Removing: /var/run/dpdk/spdk_pid2174253
00:47:12.653 Removing: /var/run/dpdk/spdk_pid2177789
00:47:12.653 Removing: /var/run/dpdk/spdk_pid2190072
00:47:12.653 Removing: /var/run/dpdk/spdk_pid2200598
00:47:12.653 Removing: /var/run/dpdk/spdk_pid2202584
00:47:12.653 Removing: /var/run/dpdk/spdk_pid2203590
00:47:12.653 Removing: /var/run/dpdk/spdk_pid2223325
00:47:12.653 Removing: /var/run/dpdk/spdk_pid2228135
00:47:12.653 Removing: /var/run/dpdk/spdk_pid2231872
00:47:12.653 Removing: /var/run/dpdk/spdk_pid2239388
00:47:12.653 Removing: /var/run/dpdk/spdk_pid2239400
00:47:12.653 Removing: /var/run/dpdk/spdk_pid2245355
00:47:12.653 Removing: /var/run/dpdk/spdk_pid2247554
00:47:12.653 Removing: /var/run/dpdk/spdk_pid2250062
00:47:12.653 Removing: /var/run/dpdk/spdk_pid2251254
00:47:12.653 Removing: /var/run/dpdk/spdk_pid2253777
00:47:12.653 Removing: /var/run/dpdk/spdk_pid2255095
00:47:12.653 Removing: /var/run/dpdk/spdk_pid2265052
00:47:12.653 Removing: /var/run/dpdk/spdk_pid2265724
00:47:12.653 Removing: /var/run/dpdk/spdk_pid2266388
00:47:12.653 Removing: /var/run/dpdk/spdk_pid2269353
00:47:12.653 Removing: /var/run/dpdk/spdk_pid2269840
00:47:12.653 Removing: /var/run/dpdk/spdk_pid2270373
00:47:12.653 Removing: /var/run/dpdk/spdk_pid2275251
00:47:12.653 Removing: /var/run/dpdk/spdk_pid2275391
00:47:12.653 Removing: /var/run/dpdk/spdk_pid2277599
00:47:12.653 Removing: /var/run/dpdk/spdk_pid2278112
00:47:12.653 Removing: /var/run/dpdk/spdk_pid2278303
00:47:12.653 Clean
00:47:12.653 11:27:32 -- common/autotest_common.sh@1451 -- # return 0
00:47:12.653 11:27:32 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup
00:47:12.653 11:27:32 -- common/autotest_common.sh@730 -- # xtrace_disable
00:47:12.653 11:27:32 -- common/autotest_common.sh@10 -- # set +x
00:47:12.653 11:27:32 -- spdk/autotest.sh@387 -- # timing_exit autotest
00:47:12.914 11:27:32 -- common/autotest_common.sh@730 -- # xtrace_disable
00:47:12.914 11:27:32 -- common/autotest_common.sh@10 -- # set +x
00:47:12.914 11:27:32 -- spdk/autotest.sh@388 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:47:12.914 11:27:32 -- spdk/autotest.sh@390 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:47:12.914 11:27:32 -- spdk/autotest.sh@390 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:47:12.914 11:27:32 -- spdk/autotest.sh@392 -- # [[ y == y ]]
00:47:12.914 11:27:32 -- spdk/autotest.sh@394 -- # hostname
00:47:12.914 11:27:32 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-12 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:47:13.175 geninfo: WARNING: invalid characters removed from testname!
00:47:39.744 11:27:58 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:47:41.127 11:28:00 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:47:43.666 11:28:03 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:47:45.575 11:28:05 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:47:46.955 11:28:06 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:47:48.864 11:28:08 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:47:50.244 11:28:10 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:47:50.505 11:28:10 -- common/autotest_common.sh@1690 -- $ [[ y == y ]]
00:47:50.505 11:28:10 -- common/autotest_common.sh@1691 -- $ lcov --version
00:47:50.505 11:28:10 -- common/autotest_common.sh@1691 -- $ awk '{print $NF}'
00:47:50.505 11:28:10 -- common/autotest_common.sh@1691 -- $ lt 1.15 2
00:47:50.505 11:28:10 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2
00:47:50.505 11:28:10 -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:47:50.505 11:28:10 -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:47:50.505 11:28:10 -- scripts/common.sh@336 -- $ IFS=.-:
00:47:50.505 11:28:10 -- scripts/common.sh@336 -- $ read -ra ver1
00:47:50.505 11:28:10 -- scripts/common.sh@337 -- $ IFS=.-:
00:47:50.505 11:28:10 -- scripts/common.sh@337 -- $ read -ra ver2
00:47:50.505 11:28:10 -- scripts/common.sh@338 -- $ local 'op=<'
00:47:50.505 11:28:10 -- scripts/common.sh@340 -- $ ver1_l=2
00:47:50.505 11:28:10 -- scripts/common.sh@341 -- $ ver2_l=1
00:47:50.505 11:28:10 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:47:50.505 11:28:10 -- scripts/common.sh@344 -- $ case "$op" in
00:47:50.505 11:28:10 -- scripts/common.sh@345 -- $ : 1
00:47:50.505 11:28:10 -- scripts/common.sh@364 -- $ (( v = 0 ))
00:47:50.505 11:28:10 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:47:50.505 11:28:10 -- scripts/common.sh@365 -- $ decimal 1
00:47:50.505 11:28:10 -- scripts/common.sh@353 -- $ local d=1
00:47:50.505 11:28:10 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]]
00:47:50.505 11:28:10 -- scripts/common.sh@355 -- $ echo 1
00:47:50.505 11:28:10 -- scripts/common.sh@365 -- $ ver1[v]=1
00:47:50.505 11:28:10 -- scripts/common.sh@366 -- $ decimal 2
00:47:50.505 11:28:10 -- scripts/common.sh@353 -- $ local d=2
00:47:50.505 11:28:10 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]]
00:47:50.505 11:28:10 -- scripts/common.sh@355 -- $ echo 2
00:47:50.505 11:28:10 -- scripts/common.sh@366 -- $ ver2[v]=2
00:47:50.505 11:28:10 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:47:50.505 11:28:10 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
00:47:50.505 11:28:10 -- scripts/common.sh@368 -- $ return 0
00:47:50.505 11:28:10 -- common/autotest_common.sh@1692 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:47:50.505 11:28:10 -- common/autotest_common.sh@1704 -- $ export 'LCOV_OPTS=
00:47:50.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:47:50.505 --rc genhtml_branch_coverage=1
00:47:50.505 --rc genhtml_function_coverage=1
00:47:50.505 --rc genhtml_legend=1
00:47:50.505 --rc geninfo_all_blocks=1
00:47:50.505 --rc geninfo_unexecuted_blocks=1
00:47:50.505
00:47:50.505 '
00:47:50.505 11:28:10 -- common/autotest_common.sh@1704 -- $ LCOV_OPTS='
00:47:50.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:47:50.505 --rc genhtml_branch_coverage=1
00:47:50.505 --rc genhtml_function_coverage=1
00:47:50.505 --rc genhtml_legend=1
00:47:50.505 --rc geninfo_all_blocks=1
00:47:50.505 --rc geninfo_unexecuted_blocks=1
00:47:50.505
00:47:50.505 '
00:47:50.505 11:28:10 -- common/autotest_common.sh@1705 -- $ export 'LCOV=lcov
00:47:50.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:47:50.505 --rc genhtml_branch_coverage=1
00:47:50.505 --rc genhtml_function_coverage=1
00:47:50.505 --rc genhtml_legend=1
00:47:50.505 --rc geninfo_all_blocks=1
00:47:50.505 --rc geninfo_unexecuted_blocks=1
00:47:50.505
00:47:50.505 '
00:47:50.505 11:28:10 -- common/autotest_common.sh@1705 -- $ LCOV='lcov
00:47:50.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:47:50.505 --rc genhtml_branch_coverage=1
00:47:50.505 --rc genhtml_function_coverage=1
00:47:50.505 --rc genhtml_legend=1
00:47:50.505 --rc geninfo_all_blocks=1
00:47:50.505 --rc geninfo_unexecuted_blocks=1
00:47:50.505
00:47:50.505 '
00:47:50.505 11:28:10 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:47:50.505 11:28:10 -- scripts/common.sh@15 -- $ shopt -s extglob
00:47:50.505 11:28:10 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:47:50.505 11:28:10 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:47:50.505 11:28:10 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:47:50.505 11:28:10 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:47:50.505 11:28:10 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:47:50.505 11:28:10 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:47:50.505 11:28:10 -- paths/export.sh@5 -- $ export PATH
00:47:50.505 11:28:10 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:47:50.505 11:28:10 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:47:50.505 11:28:10 -- common/autobuild_common.sh@486 -- $ date +%s
00:47:50.505 11:28:10 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728466090.XXXXXX
00:47:50.505 11:28:10 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728466090.SRLOiy
00:47:50.505 11:28:10 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]]
00:47:50.505 11:28:10 -- common/autobuild_common.sh@492 -- $ '[' -n main ']'
00:47:50.505 11:28:10 -- common/autobuild_common.sh@493 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:47:50.505 11:28:10 -- common/autobuild_common.sh@493 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk'
00:47:50.505 11:28:10 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:47:50.505 11:28:10 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:47:50.505 11:28:10 -- common/autobuild_common.sh@502 -- $ get_config_params
00:47:50.505 11:28:10 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:47:50.505 11:28:10 -- common/autotest_common.sh@10 -- $ set +x
00:47:50.506 11:28:10 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build'
00:47:50.506 11:28:10 -- common/autobuild_common.sh@504 -- $ start_monitor_resources
00:47:50.506 11:28:10 -- pm/common@17 -- $ local monitor
00:47:50.506 11:28:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:47:50.506 11:28:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:47:50.506 11:28:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:47:50.506 11:28:10 -- pm/common@21 -- $ date +%s
00:47:50.506 11:28:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:47:50.506 11:28:10 -- pm/common@21 -- $ date +%s
00:47:50.506 11:28:10 -- pm/common@25 -- $ sleep 1
00:47:50.506 11:28:10 -- pm/common@21 -- $ date +%s
00:47:50.506 11:28:10 -- pm/common@21 -- $ date +%s
00:47:50.506 11:28:10 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728466090
00:47:50.506 11:28:10 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728466090
00:47:50.506 11:28:10 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728466090
00:47:50.506 11:28:10 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728466090
00:47:50.506 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728466090_collect-vmstat.pm.log
00:47:50.506 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728466090_collect-cpu-load.pm.log
00:47:50.766 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728466090_collect-cpu-temp.pm.log
00:47:50.766 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728466090_collect-bmc-pm.bmc.pm.log
00:47:51.708 11:28:11 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
00:47:51.708 11:28:11 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]]
00:47:51.708 11:28:11 -- spdk/autopackage.sh@14 -- $ timing_finish
00:47:51.708 11:28:11 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:47:51.708 11:28:11 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:47:51.708 11:28:11 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:47:51.708 11:28:11 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:47:51.708 11:28:11 -- pm/common@29 -- $ signal_monitor_resources TERM
00:47:51.708 11:28:11 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:47:51.708 11:28:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:47:51.708 11:28:11 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:47:51.708 11:28:11 -- pm/common@44 -- $ pid=2292104
00:47:51.708 11:28:11 -- pm/common@50 -- $ kill -TERM 2292104
00:47:51.708 11:28:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:47:51.708 11:28:11 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:47:51.708 11:28:11 -- pm/common@44 -- $ pid=2292105
00:47:51.708 11:28:11 -- pm/common@50 -- $ kill -TERM 2292105
00:47:51.708 11:28:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:47:51.708 11:28:11 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:47:51.708 11:28:11 -- pm/common@44 -- $ pid=2292107
00:47:51.708 11:28:11 -- pm/common@50 -- $ kill -TERM 2292107
00:47:51.708 11:28:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:47:51.708 11:28:11 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:47:51.708 11:28:11 -- pm/common@44 -- $ pid=2292136
00:47:51.708 11:28:11 -- pm/common@50 -- $ sudo -E kill -TERM 2292136
00:47:51.708 + [[ -n 1490832 ]]
00:47:51.708 + sudo kill 1490832
00:47:51.718 [Pipeline] }
00:47:51.729 [Pipeline] // stage
00:47:51.734 [Pipeline] }
00:47:51.747 [Pipeline] // timeout
00:47:51.752 [Pipeline] }
00:47:51.766 [Pipeline] // catchError
00:47:51.771 [Pipeline] }
00:47:51.784 [Pipeline] // wrap
00:47:51.790 [Pipeline] }
00:47:51.802 [Pipeline] // catchError
00:47:51.810 [Pipeline] stage
00:47:51.812 [Pipeline] { (Epilogue)
00:47:51.824 [Pipeline] catchError
00:47:51.826 [Pipeline] {
00:47:51.838 [Pipeline] echo
00:47:51.840 Cleanup processes
00:47:51.846 [Pipeline] sh
00:47:52.201 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:47:52.201 2292259 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache
00:47:52.201 2292801 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:47:52.239 [Pipeline] sh
00:47:52.542 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:47:52.542 ++ grep -v 'sudo pgrep'
00:47:52.542 ++ awk '{print $1}'
00:47:52.542 + sudo kill -9 2292259
00:47:52.554 [Pipeline] sh
00:47:52.842 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:48:05.088 [Pipeline] sh
00:48:05.378 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:48:05.378 Artifacts sizes are good
00:48:05.395 [Pipeline] archiveArtifacts
00:48:05.402 Archiving artifacts
00:48:05.572 [Pipeline] sh
00:48:05.859 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:48:05.874 [Pipeline] cleanWs
00:48:05.885 [WS-CLEANUP] Deleting project workspace...
00:48:05.885 [WS-CLEANUP] Deferred wipeout is used...
00:48:05.892 [WS-CLEANUP] done
00:48:05.894 [Pipeline] }
00:48:05.911 [Pipeline] // catchError
00:48:05.922 [Pipeline] sh
00:48:06.217 + logger -p user.info -t JENKINS-CI
00:48:06.227 [Pipeline] }
00:48:06.240 [Pipeline] // stage
00:48:06.245 [Pipeline] }
00:48:06.258 [Pipeline] // node
00:48:06.263 [Pipeline] End of Pipeline
00:48:06.295 Finished: SUCCESS